Find a Manvel, TX Precalculus Tutor

I am currently a CRLA certified level 3. I have been tutoring for close to 5 years now on most math subjects from Pre-Algebra up through Calculus 3. I have done TA jobs where I hold sessions for groups of students to give them extra practice on their course material and help to answer any question...
7 Subjects: including precalculus, calculus, statistics, algebra 1

...I received the AP Scholar award, became a member of the California Scholarship Federation, and received the scholarship athlete award. I have my soccer coaching license and have coached middle and high school teams. I gained tutoring experience as a math teaching assistant and as a private tutor for students at my high school.
22 Subjects: including precalculus, chemistry, calculus, physics

I can help you ingest and digest biological concepts, terms and processes. These things, by now, are like second nature to me. I studied Genetics, Developmental Biology, and Evolutionary Biology at the graduate level.
8 Subjects: including precalculus, reading, chemistry, biology

I have been a private math tutor for over ten (10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including precalculus, calculus, geometry, algebra 1

...They range from Differential Geometry to Ordinary Differential Equations. I am well versed in this topic and can pull from a wide variety of real-world examples that can help ease the complicated problems. Through my years of experience teaching and tutoring all levels of math, from 6th grade ...
16 Subjects: including precalculus, calculus, ACT Math, logic
{"url":"http://www.purplemath.com/Manvel_TX_precalculus_tutors.php","timestamp":"2014-04-18T01:00:03Z","content_type":null,"content_length":"24260","record_id":"<urn:uuid:6d55faec-fed3-480b-aa00-80ef68a5b3cf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Montclair, NJ Prealgebra Tutor

...When students come for extra help, I aid them in the process of studying, whether it be through extra examples, making notecards, or dances to remember information. I help my students figure out the tricks they need in order to recall the information when it comes to test time. I specifically tutored for the NJASK and taught test-taking tips and strategies.
4 Subjects: including prealgebra, elementary (k-6th), study skills, elementary math

...I am looking to tutor children, high school students, college students and adults in these various areas. My tutoring plan is to be as open and flexible as possible because I have learned that everyone learns best in different ways. I am willing to work closely with my student in order to find a comfortable pace by which they can learn the subject matter.
11 Subjects: including prealgebra, calculus, algebra 1, algebra 2

I am presently a business founder and owner of an online marketing and advertising company. Over the past few years I have given private lessons in the field of business, web design and marketing. I have a very relaxed and comfortable (but firm) approach when it comes to my tutoring style.
22 Subjects: including prealgebra, reading, writing, business

...I have experience in the following areas: elementary math, pre-algebra, algebra, and geometry. I look forward to working with you! I attended the University of Scranton where I majored in middle level education with a concentration in mathematics.
6 Subjects: including prealgebra, geometry, algebra 1, elementary (k-6th)

...I am extremely patient and empathetic, and get along great with all kids, especially, it seems, those with special needs. I am glad to provide more detailed information and/or documentation if required. Thank you.
29 Subjects: including prealgebra, reading, biology, ASVAB
{"url":"http://www.purplemath.com/montclair_nj_prealgebra_tutors.php","timestamp":"2014-04-17T16:10:52Z","content_type":null,"content_length":"24256","record_id":"<urn:uuid:0b318a41-6196-4e53-b71f-78d3ecf4155c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
How long do you expect the transit to last?

I have a homework problem that I am having trouble with. There are 2 parts.

A transiting exoplanet with a diameter twice that of the Earth orbits a sun-like star in a circular orbit of radius 1.5 AU.

a) How much reduction in the flux of the star occurs during the transit?

The planet's radius equals the Earth's diameter: R_p = 8.5175×10^-5 AU. And because it says a "sun-like" star, I used the same value as the Sun for the radius: R_s = 4.649×10^-3 AU. And I can use the formula

[itex]\frac{\Delta F}{F} = \frac{R_p^2}{R_s^2}[/itex]

By plugging in the values, I got a 0.03% reduction in flux.

b) How long do you expect the transit to last?

I am stuck on this one. I was not told the impact parameter b, so do I assume that the transit passes through the centre? Or do I use the formula

[itex]\tau = \frac{2(R_p + R_s)}{v}[/itex]

where τ is the transit duration, R_p the radius of the planet, R_s the radius of the star, and v the orbital velocity?

Any help is appreciated, thanks in advance!
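For what it's worth, here is a minimal numeric sketch of both parts, assuming a central transit (impact parameter b = 0) and standard values for the Sun's mass and radius; none of the constants below are given in the problem itself:

    import math

    AU  = 1.496e11              # m
    R_p = 2 * 6.371e6           # planet radius = Earth's diameter, m
    R_s = 6.957e8               # solar radius, m
    a   = 1.5 * AU              # orbital radius
    GM  = 6.674e-11 * 1.989e30  # G times one solar mass

    depth = (R_p / R_s) ** 2
    print(f"flux reduction ~ {depth:.2%}")           # ~0.03%

    v   = math.sqrt(GM / a)                          # circular orbital speed
    tau = 2 * (R_s + R_p) / v                        # first to last contact
    print(f"transit duration ~ {tau / 3600:.1f} h")  # ~16 h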
{"url":"http://www.physicsforums.com/showthread.php?t=720575","timestamp":"2014-04-19T09:40:40Z","content_type":null,"content_length":"29562","record_id":"<urn:uuid:f1e84434-7897-45eb-ba3e-cf50cfd344c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Meadows Place, TX Algebra Tutor

...Algebra and Geometry are my strong points. All my kids are gone for UT Austin so I can devote all my time to help you. Teaching is my passion.
12 Subjects: including algebra 1, algebra 2, reading, Chinese

...I have many hobbies as well. Primarily I love to practice my violin. I have been playing since I was two years old.
38 Subjects: including algebra 2, algebra 1, reading, chemistry

...My approach is to identify the gaps in a student's knowledge and fill those gaps. I don't aim for fun, but we usually laugh along the way. Humor helps to overcome the psychological barriers of learning a new subject.
41 Subjects: including algebra 1, algebra 2, chemistry, English

...I recently received my Master's degree in Medical Sciences at the University of North Texas Health Science Center and before that graduated with magna cum laude honors from the University of Houston in Biology. I have been an official tutor at the University of Houston for over 3 years and have ...
29 Subjects: including algebra 1, algebra 2, English, writing

...As a National Achievement Finalist, magna cum laude graduate, master's degree recipient and PhD candidate, I am fully aware of the value of hard work and education. Throughout my education, I have tutored many students and served as a teaching assistant for multiple classes. I truly enjoy helping students of all ages attain their educational goals.
18 Subjects: including algebra 1, algebra 2, chemistry, reading
{"url":"http://www.purplemath.com/meadows_place_tx_algebra_tutors.php","timestamp":"2014-04-17T19:41:02Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:9bc81ca2-337c-4535-b5d4-86fa18e40256>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigen Duality and Quantum Measurement

For a given square matrix M an eigenvector is a vector x such that the multiplication of x by M simply re-scales x while leaving the direction of x unchanged. In other words, x is an eigenvector of M if and only if

$$ Mx = \lambda x \qquad (1) $$

for some scalar λ, which is called the eigenvalue corresponding to x. This condition can also be written in the equivalent form

$$ (M - \lambda I)\,x = 0 \qquad (2) $$

where I is the identity matrix and 0 is the zero vector. Naturally the equation is trivially satisfied by x = 0, but the equation may also have non-trivial solutions if the determinant of the operator is zero. This requires that λ be a root of the characteristic equation

$$ \det(M - \lambda I) = 0 $$

which is a polynomial in λ of degree equal to the order of the matrix M. If we multiply through equation (1) by (-y_2/λ)M_2 for some arbitrary scalar y_2 and square matrix M_2, we get the equivalent expression

$$ (y_1 M_1 + y_2 M_2)\,x = 0 \qquad (3) $$

where y_1 = -y_2/λ and M_1 = M_2 M, but now the "eigenvalue" is represented by two numbers, y_1 and y_2, instead of just the single number λ. Of course, only the ratio of these two numbers is significant, so each eigenvalue λ is represented by a ray through the origin of the y_1, y_2 plane. Although equation (3) is equivalent to (2), it highlights an important and profound symmetry, because it shows that there is no difference between eigenvalues and eigenvectors. To make this explicit, consider the simple case where M_1 and M_2 are square matrices of order 2, and x is a column vector with two components x_1 and x_2. Letting m_{1ij} and m_{2ij} denote the elements of M_1 and M_2 respectively, we can multiply out the left side of equation (3) explicitly and show that it can be written in either of two forms:

$$ \begin{pmatrix} y_1 m_{111} + y_2 m_{211} & y_1 m_{112} + y_2 m_{212} \\ y_1 m_{121} + y_2 m_{221} & y_1 m_{122} + y_2 m_{222} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$

$$ \begin{pmatrix} x_1 m_{111} + x_2 m_{112} & x_1 m_{211} + x_2 m_{212} \\ x_1 m_{121} + x_2 m_{122} & x_1 m_{221} + x_2 m_{222} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$

When written in the first form, y_1 and y_2 are the numerator and denominator of the eigenvalue, and x_1 and x_2 are the components of the eigenvector. But when written in the second form, the roles are reversed, i.e., y_1 and y_2 are the components of the eigenvector, and x_1 and x_2 are the numerator and denominator of the eigenvalue. (Note that only the direction of an eigenvector is constrained by the system of equations, so only the ratio of its components is significant, just as only the ratio of the numerator and denominator of the eigenvalue is significant.) Of course, the coefficient matrices are decomposed differently, in one case being of the form m_{1ij} and m_{2ij}, and in the other case being of the form m_{ji1} and m_{ji2} respectively, but this simply reflects the fact that x and y lie in different component spaces. There is no justification for giving either of them priority over the other.

The duality between eigenvalues and eigenvectors applies for systems of any order. It is simply disguised in most applications because we artificially break the symmetry by forcing one of the matrices to be the identity and forcing the scalar multiplier of the other matrix to be unity. This not only obscures the natural duality between eigenvectors and eigenvalues, it also entails an unwarranted specialization of the general form by not allowing additive partitions of the coefficient matrix. In addition, writing the equation in the symmetrical form suggests two interesting mathematical generalizations, and also has some interesting relevance to the interpretation of physical theories, especially quantum mechanics. We discuss each of these topics below. First, to clarify the symmetry between eigenvalues and eigenvectors, we will define a notation and index convention for matrices and vectors.
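As a concrete numeric sketch (my own example with arbitrary matrices, not taken from the text), the following NumPy snippet checks that the same pair of equations can be read with x as the eigenvector, or rearranged so that (y_1, y_2) plays that role:

    import numpy as np

    M  = np.array([[2.0, 1.0],
                   [1.0, 3.0]])
    M2 = np.array([[1.0, 4.0],
                   [0.0, 2.0]])          # arbitrary invertible partner matrix
    M1 = M2 @ M

    lam, vecs = np.linalg.eig(M)
    x = vecs[:, 0]                       # eigenvector for eigenvalue lam[0]
    y = np.array([-1.0 / lam[0], 1.0])   # (y1, y2) with y2 = 1, y1 = -y2/lam

    # x as the eigenvector, (y1, y2) as the two-component "eigenvalue":
    print((y[0] * M1 + y[1] * M2) @ x)               # ~ [0, 0]

    # the same equations rearranged so that y is the eigenvector:
    print(np.column_stack([M1 @ x, M2 @ x]) @ y)     # ~ [0, 0]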
We’re accustomed to dealing with two different kinds of vectors, which we may call column and row vectors, but these are just two of infinitely many possible kinds of vectors, one for each of the possible dimensions of an array. Let us combine the two original matrices M_1 and M_2 into a single three-dimensional matrix, denoted by M_{mns} where m, n, s are indices ranging from 1 to 2, and let X_{m11} denote the vector with elements x_{111} = x_1 and x_{211} = x_2 (the components of the eigenvector x), and let Y_{1n1} denote the vector with elements y_{111} = y_1 and y_{121} = y_2. In general we indicate multiplications using the summation convention over repeated indices in a single term. Also, if corresponding indices of two arguments are both explicit numerals, or both dummy indices, we set that index of the product to 1, whereas if the index is a numeral or dummy (repeated) in one argument but a variable in the other, we set that index of the product to the variable. To illustrate, the product P given by ordinary matrix multiplication of the original two matrices could be expressed as

$$ P_{mn1} = M_{ms1}\, M_{sn2} $$

With this notation, our eigenvalue system is written in the form

$$ M_{mns}\, X_{m11}\, Y_{1n1} = 0_{11s} $$

Since X and Y are orthogonal, they commute. Thus this eigenvalue problem, when expressed in its natural symmetrical form, simply consists of finding two vectors that, when used as relative weights for summing and contracting a three-dimensional array in two of its dimensions, yield a zero vector in the remaining dimension. Of course, we could also multiply by a vector in the third dimension to collapse the matrix down to a scalar 0, which would result in a continuous locus of eigen-solutions. For any choice of one of those three vectors, the remaining two would have two discrete solutions (up to the arbitrary scale factors).

It might seem as if the duality between eigenvalues and eigenvectors exists only for matrices of order 2, because there are only two terms in the traditional eigenvalue equation (3), with coefficients representing the numerator and denominator of the traditional scalar eigenvalue. However, the symmetrical form immediately leads to a natural generalization of (3), such that we can partition the operator into more than just two parts. For example, by partitioning the coefficient matrix into three parts (instead of just two), we have a system described by the equation

$$ (\alpha A + \beta B + \gamma C)\,x = 0 $$

where A, B, C are square matrices of order N, and α, β, γ are scalars. As before, this equation can have non-trivial solutions only if the determinant of the overall operator vanishes, i.e.,

$$ \det(\alpha A + \beta B + \gamma C) = 0 $$

This represents a polynomial of degree N in the three scalars α, β, γ. Again, only the ratios of these components are significant, so the "eigenvalues" of the system consist of rays through the origin of a three-dimensional space, just as (if N = 3) the eigenvectors are rays through the origin of a three-dimensional space. If we normalize the projective space of the eigenvalues (which we can do in various ways, such as by dividing through the characteristic equation by the Nth power of one of the eigen-components, or by stipulating that α + β + γ = 1), we get (for N = 2) a quadratic equation in two variables, so the eigenvalues now consist of a conic locus of points on the normalized surface. An alternative is to normalize the length of the eigenvalue to unity, i.e., to stipulate that α^2 + β^2 + γ^2 = 1, in which case the eigenvalues consist of a continuous locus of points on the unit sphere in three dimensions – as do the normalized eigenvectors.
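A short symbolic sketch of the three-part partition (the matrices below are arbitrary assumed examples): for N = 2 the eigenvalue locus det(αA + βB + γC) = 0 is a homogeneous quadratic, i.e. a conic in the projective (α, β, γ) plane:

    import sympy as sp

    alpha, beta, gamma = sp.symbols('alpha beta gamma')
    A = sp.Matrix([[1, 0], [0, 2]])
    B = sp.Matrix([[0, 1], [1, 0]])
    C = sp.Matrix([[1, 1], [0, 1]])

    char_poly = sp.expand(sp.det(alpha * A + beta * B + gamma * C))
    print(char_poly)   # homogeneous of degree N = 2 in (alpha, beta, gamma)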
By partitioning the coefficient matrix into N parts, we get a system with N-dimensional eigenvalues and N-dimensional eigenvectors, and the coefficient matrices are N x N square matrices regardless of whether we transpose the eigenvalues and eigenvectors. However, the duality between eigenvalues and eigenvectors is not limited to such cases. In general we can have different numbers of dimensions for those entities. For example, consider a system described by

$$ (y_1 A + y_2 B)\,x = 0 \qquad (6) $$

where the two coefficient matrices A and B are of order three. In this case the system equations can be written explicitly in either of the following two equivalent forms

$$ \begin{pmatrix} y_1 a_{11} + y_2 b_{11} & y_1 a_{12} + y_2 b_{12} & y_1 a_{13} + y_2 b_{13} \\ y_1 a_{21} + y_2 b_{21} & y_1 a_{22} + y_2 b_{22} & y_1 a_{23} + y_2 b_{23} \\ y_1 a_{31} + y_2 b_{31} & y_1 a_{32} + y_2 b_{32} & y_1 a_{33} + y_2 b_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 0 \qquad\qquad \begin{pmatrix} (Ax)_1 & (Bx)_1 \\ (Ax)_2 & (Bx)_2 \\ (Ax)_3 & (Bx)_3 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = 0 $$

where (Ax)_i = a_{i1}x_1 + a_{i2}x_2 + a_{i3}x_3, and similarly for (Bx)_i. Thus the duality between x and y still applies, in the sense that either of those "eigenrays" can equally well be regarded as the eigenvalue or the eigenvector, with the understanding that the coefficient matrices need not be square. The operation represented by these expressions can be described schematically as follows. Beginning with a 2 x 3 x 3 coefficient array, we collapse this array in the vertical dimension by applying the y vector components as weight factors, summing the elements in each vertical column, leaving a 3 x 3 array, which we then collapse in the second dimension by applying the x vector components as weight factors, summing the elements of each row. The result is a one-dimensional vector, which we require to be null.

Of course, if the coefficient matrices are not square, the eigen condition can no longer be expressed as the vanishing of the determinant of the sum of the coefficient matrices, because the definition of the determinant applies only to square matrices. Nevertheless, we still have a perfectly well-defined eigen condition, with the understanding that it consists, in general, of a set of simultaneous equations rather than just a single equation. In the above example, the eigen condition on y is expressed by the vanishing of the determinant of the sum of the two square coefficient matrices, which implies just a single polynomial in the y_j parameters, whereas the eigen condition on x consists of any two of the three simultaneous conditions

$$ (Ax)_1 (Bx)_2 - (Ax)_2 (Bx)_1 = 0, \qquad (Ax)_1 (Bx)_3 - (Ax)_3 (Bx)_1 = 0, \qquad (Ax)_2 (Bx)_3 - (Ax)_3 (Bx)_2 = 0 $$

In this example the basic system equations (6) represent three expressions being set to zero (the three components of the right hand side), and there are five unknowns, but since the equations are homogeneous we can divide through by y_2 and x_3, leading to the following three equations in the three unknowns q_1 = x_1/x_3, q_2 = x_2/x_3, and f = y_1/y_2:

$$ f\,(a_{i1}q_1 + a_{i2}q_2 + a_{i3}) + (b_{i1}q_1 + b_{i2}q_2 + b_{i3}) = 0, \qquad i = 1, 2, 3 $$

Solving the first equation for f gives

$$ f = -\,\frac{b_{11}q_1 + b_{12}q_2 + b_{13}}{a_{11}q_1 + a_{12}q_2 + a_{13}} $$

Substituting this into the other two equations, we get two conic equations

$$ (Aq)_1 (Bq)_2 - (Aq)_2 (Bq)_1 = 0, \qquad (Aq)_1 (Bq)_3 - (Aq)_3 (Bq)_1 = 0 $$

where (Aq)_i = a_{i1}q_1 + a_{i2}q_2 + a_{i3}, and similarly for (Bq)_i. The solutions consist of the four points of intersection (q_1, q_2) between these two conics, and for each of these four (possibly complex) solutions the corresponding value of f is given by the previous equation. One might think that this is inconsistent with the fact that equation (6) has just three distinct eigen solutions, rather than four. The explanation is that one of the four "eigenvalues" f is infinite.
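The following SymPy sketch carries this out for assumed (arbitrary) 3x3 matrices A and B, finding the (q_1, q_2) intersections of the two conics; generically there are four, one of which corresponds to the infinite value of f discussed next:

    import sympy as sp

    q1, q2 = sp.symbols('q1 q2')
    A = sp.Matrix([[1, 2, 0], [0, 1, 1], [2, 0, 1]])
    B = sp.Matrix([[1, 0, 1], [1, 1, 0], [0, 2, 1]])
    xq = sp.Matrix([q1, q2, 1])

    Ax, Bx = A * xq, B * xq
    conic1 = sp.expand(Ax[0] * Bx[1] - Ax[1] * Bx[0])
    conic2 = sp.expand(Ax[0] * Bx[2] - Ax[2] * Bx[0])

    sols = sp.solve([conic1, conic2], [q1, q2], dict=True)
    print(len(sols))                      # up to 4 (possibly complex) points
    f = -Bx[0] / Ax[0]                    # f = y1/y2 recovered at each point
    print([sp.simplify(f.subs(s)) for s in sols])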
To see this, note that f is infinite when the denominator in the expression given above equals zero, which is to say

$$ a_{11}q_1 + a_{12}q_2 + a_{13} = 0 $$

Solving this for q_2 and substituting into the two conic equations, we find that they both factor as products of two linear expressions, and the two conics share a common factor, so we always have the "fourth solution", lying on the line a_{11}q_1 + a_{12}q_2 + a_{13} = 0. This shows that the usual formulation is not fully general, because it excludes this "fourth solution", whereas the variable f is the ratio y_1/y_2, so this fourth solution simply represents the case y_2 = 0. Incidentally, the remaining linear factors of the two conics are equal to each other if and only if the determinant of the "a" matrix vanishes.

To illustrate the duality between X and Y even more clearly, consider the system of equations

$$ M_{mns}\, X_{m11}\, Y_{1n1} = 0_{11s}, \qquad m, n = 1, 2, 3; \quad s = 1, 2, 3, 4 $$

In this case, using the symbols a_{mn} = M_{mn1}, b_{mn} = M_{mn2}, and so on, we can write the system equations explicitly in either of the two equivalent forms

$$ \begin{pmatrix} x^T a \\ x^T b \\ x^T c \\ x^T d \end{pmatrix} y = 0 \qquad\qquad \begin{pmatrix} (a\,y)^T \\ (b\,y)^T \\ (c\,y)^T \\ (d\,y)^T \end{pmatrix} x = 0 $$

i.e., as four bilinear conditions x^T a y = 0, ..., x^T d y = 0, read either as linear conditions on y or as linear conditions on x. Assuming x_3 and y_3 are not zero, we can divide through these homogeneous equations by x_3 y_3, leading to four equations in the four unknowns f_1 = x_1/x_3, f_2 = x_2/x_3, q_1 = y_1/y_3, and q_2 = y_2/y_3. We can solve the first two equations for q_1 and q_2 as ratios of quadratic expressions in f_1 and f_2. Likewise we can solve the last two equations for q_1 and q_2, and we can then equate the corresponding expressions to give two quartics in f_1 and f_2. The overall solutions are then given by the intersections of these two quartics. As discussed in the note on Bezout's theorem, there are 16 points of intersection between two quartics, counting complex points, multiplicities, and points at infinity.

As a further generalization, we need not be limited to systems with just two "eigenrays" (i.e., systems with eigenvalues and eigenvectors). Using the index notation discussed previously, we can define overall coefficient matrices with any number of dimensions, and then contract them with two or more eigenrays. For example, we can consider systems such as

$$ M_{\alpha\beta\gamma\delta}\, X_{\alpha}\, Y_{\beta}\, Z_{\gamma} = 0_{\delta} $$

where the indices α, β, γ, δ need not all vary over the same ranges. Each of the vectors X, Y and Z represents a class of eigenrays for the system. The vectors X, Y, Z can be regarded as weight factors that are used to collapse the M array down to a lower number of dimensions. Even more generally, we can consider "eigenplanes", etc., by allowing the X "vectors" to be of more than just one dimension. For example, we can consider systems such as

$$ M_{\alpha\beta\gamma\delta}\, X_{\alpha\beta}\, Y_{\gamma} = 0_{\delta} $$

These examples show that the artificial distinction between eigenvalues and eigenvectors is completely meaningless. A good illustration of this is given by the simplest expression of this form, where the indices range from 1 to 2,

$$ M_{\alpha\beta}\, X_{\alpha}\, Y_{\beta} = 0 \qquad (7) $$

This represents the single constraint

$$ m_{11}x_1y_1 + m_{12}x_1y_2 + m_{21}x_2y_1 + m_{22}x_2y_2 = 0 $$

In terms of the ratios f = x_1/x_2 and q = y_1/y_2, this can be written as

$$ q = -\,\frac{m_{12}\,f + m_{22}}{m_{11}\,f + m_{21}} $$

Thus we can regard the ratio (x_1/x_2) as the eigenvalue corresponding to the eigenvector [y_1, y_2], and conversely we can regard the ratio (y_1/y_2) as the eigenvalue corresponding to the eigenvector [x_1, x_2], and these ratios are related by a linear fractional (Mobius) transformation. Of course, this simple system doesn't restrict the set of possible eigenvalues or eigenvectors, but it does establish a one-to-one holomorphic mapping between them.
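A quick symbolic check (my own sketch, with symbolic entries m11, ..., m22) of the Mobius relation between the two eigenrays of equation (7):

    import sympy as sp

    f, q = sp.symbols('f q')                     # f = x1/x2, q = y1/y2
    m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22')

    constraint = m11*f*q + m12*f + m21*q + m22   # (7) divided by x2*y2
    print(sp.solve(constraint, q)[0])            # -(m12*f + m22)/(m11*f + m21)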
Even this simple system has applications in both relativity and quantum mechanics, since the Mobius transformations can encode Lorentz transformations and rotations, and with stereographic projection onto the Riemann sphere they can be used to represent the state vectors of a simple physical system with two basis states. Equation (7) represents a two-dimensional coefficient matrix being collapsed in both dimensions down to a single null scalar, but we can also begin with a three-dimensional coefficient matrix and collapse it in two dimensions down to a one-dimensional null vector. This is represented by the equation

$$ M_{\alpha\beta\gamma}\, X_{\alpha}\, Y_{\beta} = 0_{\gamma} $$

Again letting x_α denote X_{α11} and so on, this corresponds to the two conditions

$$ m_{11\gamma}x_1y_1 + m_{12\gamma}x_1y_2 + m_{21\gamma}x_2y_1 + m_{22\gamma}x_2y_2 = 0, \qquad \gamma = 1, 2 $$

and in terms of the ratios f = x_1/x_2 and q = y_1/y_2 these can be written as

$$ f = -\,\frac{m_{21\gamma}\,q + m_{22\gamma}}{m_{11\gamma}\,q + m_{12\gamma}}, \qquad \gamma = 1, 2 $$

Equating these two values of f, we get a quadratic condition for q, defining the two eigenvalues, but these can equivalently be regarded as the ratios of the components of the two eigenvectors, and the same applies to the corresponding values of f.

One could argue that the conventional expression of problems in terms of eigenvalues and corresponding eigenvectors is due to the constraint on the degrees of freedom that this arrangement imposes on the results. Suppose we have a rectangular array of size a x b x c, and we collapse the array in the a and b directions, leaving just a vector of size c. This represents c homogeneous constraints, and we can divide through these equations to leave a-1 and b-1 free variables. To make this deterministic (up to the multiple roots), we must have a + b - 2 = c. Thus if either a or b is equal to c (to make square matrices of the bc planes), then the other must equal 2, suggesting that it be treated as an eigenvalue. However, we need not restrict ourselves to cases when all the continuous degrees of freedom are constrained. In fact, there are many real-world applications in which there are unconstrained continuous degrees of freedom. Also, as noted above, we need not restrict ourselves to square matrices.

So far we've considered only purely mathematical generalizations of eigenvalue problems, but another kind of generalization is suggested by ideas from physics. The traditional asymmetrical form Mx = λx used in the representation of a "measurement" performed by one system on another in the context of quantum mechanics gives priority to one of the two interacting systems, treating one as the observer and the other as the observed. But surely both systems are "making an observation" of each other (i.e., interacting with each other), so just as there is an operator M_1 representing the observation performed by one system, there must be a complementary operator M_2 representing the reciprocal "observation" performed by the other system. This strongly suggests that the symmetrical form, with some non-trivial partition of the coefficient matrix, is more likely than the asymmetrical form to give a suitable representation of physical phenomena. According to this view, the eigenvalue arising from the application of an observable operator is properly seen as part of the state vector of the "observing system", just as the corresponding state vector to which the "observed system" is projected can be seen as the "value" arising from the measurement which the "observed system" has performed on the "observing system". To fully establish the equivalence between eigenvalues and eigenvectors in the context of quantum mechanics we need to reconcile the seemingly incongruous interpretations.
The eigenvector is associated with an observable state of a system, and its components are generally complex. To give these components absolute significance (as probability amplitudes) they are normalized by dividing each component by the square root of the sum of the squared norms of all the components (so that the sum of the squared norms of the normalized components is unity). In contrast, the eigenvalue – regarded as the purely real scalar result of a measurement – is the ratio of the two components of the representative eigenvector. In other words, given the vector components z_1 and z_2, we traditionally take the ratio λ = z_1/z_2 as the physically meaningful quantity, i.e., the result of a measurement, whereas if we regarded this vector as an eigenvector we would take the quantity z_1/√(|z_1|^2 + |z_2|^2) and its complement as the physically meaningful quantities, namely, the probability amplitudes for states 1 and 2. This seems to suggest the following mapping between the eigenvalue λ and the state probabilities P_1, P_2 for a given vector:

$$ P_1 = \frac{\lambda^2}{1 + \lambda^2}, \qquad P_2 = \frac{1}{1 + \lambda^2} $$

and hence the ratio of probabilities is P_1/P_2 = λ^2 (a small numeric check follows below). Another generalization of eigen-value problems is discussed in the note on quasi-eigen systems.
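A tiny numeric sketch of the suggested mapping, for an assumed real vector (z_1, z_2) = (3, 4):

    z1, z2 = 3.0, 4.0
    lam = z1 / z2                          # traditional "measurement" value
    P1 = z1**2 / (z1**2 + z2**2)           # probability of state 1
    P2 = z2**2 / (z1**2 + z2**2)           # probability of state 2
    assert abs(P1 / P2 - lam**2) < 1e-12   # P1/P2 = lam^2
    print(P1, P2)                          # 0.36 0.64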
{"url":"http://www.mathpages.com/home/kmath648/kmath648.htm","timestamp":"2014-04-21T09:37:40Z","content_type":null,"content_length":"47320","record_id":"<urn:uuid:74fa4960-0127-4405-a646-02864f42f7a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Mathematics in brief

Date: Dec 8, 2012 4:41 AM
Author: Zaljohar@gmail.com
Subject: Mathematics in brief

In philosophy "form" is a universal; however, here the term "form" is used to designate a universal that is exemplified by all objects bearing some kind of isomorphic relation between them, provided that the collection of all exemplifying objects has all objects included in its transitive closure. I paraphrase that as: a universal that involves the whole universe. By contrast, some forms as used in philosophy involve only a particular sector of ontology, for example "cat", which can only be exemplified by animals and so is a restrictive kind of "form". Here any speech about forms will mean the non-private kinds of forms, i.e. those that involve the whole universe after some isomorphic relation as mentioned above.

Mathematics is "discourse about form". By this is meant any theory that can be interpreted in the set hierarchy with all its objects interpreted as forms in the set hierarchy. So for example PA is a piece of mathematics, since it can be interpreted in the set hierarchy with an interpretation in which all its "objects" are interpreted as "forms" defined after the "bijection" relation in the Fregean manner. So it is a case of discourse about form, and thus mathematical.

So here there is a line of separation between what is foundational and what is mathematical: the set\class hierarchy is foundational, i.e. it belongs "essentially" to logic! It is a sort of extended logic, and although it definitely uses some mathematics to empower it, and actually needs a mathematician to work it out, this doesn't make it mathematical; the piece of mathematics used in those foundational theories is just an application of mathematics to another field, much as mathematics is used in physics. So what I'm saying here is that a theory like ZFC is not "essentially" about mathematics; it is not even a piece of mathematics, it is a LOGICAL theory. So set theory is a kind of LOGIC. However, one can easily see that such a form of logic can only really be handled by mathematicians, but still that doesn't make it a piece of mathematics, as mentioned.

Mathematics is the study of "form" as mentioned above. It is "implemented" in the set\class hierarchy, which provides a discourse about forms whether simple or structural. All known branches of mathematics (Arithmetic, Analysis, Geometry, Algebra, Number theory, Group theory, Topology, Graph theory, etc.) can be seen as discourse about form, since all their objects can be interpreted in the set hierarchy as forms.

Anyhow, it is reasonable for branches of mathematics to be developed along some foundational backgrounding in logic, with the mathematical forms then implemented on that background logic. This can be seen clearly with topology, which starts from set theory and then goes higher to deal with forms like continuity and connectedness. However, it can be seen to be essentially about the higher concepts it tries to manipulate; the backgrounding in sets is just the logical part of it. Since what it tries to manipulate is a sort of "form", topology is essentially mathematical.

Also I wanted to raise the issue that "any" consistent theory is speaking about a model that is "possible" to exist! So if we secure a consistent discourse about form, then we are speaking about forms that might possibly exist. And that's all that mathematics needs to bring about. Whether those forms really exist or not is not the discipline of mathematics. So consistency yields "possible" existence, and that's all that mathematics should yield, i.e. forms that could possibly exist. How are those forms known to us? The answer is: through their exemplification as part of the discourse of consistent theories about form. Whether they are platonic in the sense of being in no place and no time, etc., is not relevant; we come to know about them by their exemplifications, which are indeed not so abstract and can be grasped by our intellect. How such an abstract notion can be exemplified by such concrete objects is not the job of mathematics to explain.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7934101","timestamp":"2014-04-19T23:38:00Z","content_type":null,"content_length":"5238","record_id":"<urn:uuid:50c8a746-e70b-4132-a330-35a71ea93fd3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
World Meteorological Organization Commission for Basic Systems - Standardised Verification System for Long Range Forecasts

Detailed information on the Mean Square Skill Score (MSSS) is available in attachment II.8. It is recommended that users read the documentation in the attachment for more details.

The Mean Square Skill Score (MSSS) is applicable to deterministic forecasts only. The MSSS is essentially the Mean Square Error (MSE) of the forecasts compared to the MSE of climatology for a station or grid point. The MSE for a forecast at a grid point (or station) j may be given by:

$$ MSE_j = \frac{1}{n}\sum_{i=1}^{n}\left(f_{ij} - x_{ij}\right)^2 $$

where x and f denote time series of observations and continuous deterministic forecasts. The MSE for climatology is given by:

$$ MSE_{cj} = \frac{1}{n}\sum_{i=1}^{n}\left(\bar{x}_j - x_{ij}\right)^2 $$

where x̄_j is the climatological (observed mean) value. The Mean Square Skill Score is therefore given as:

$$ MSSS_j = 1 - \frac{MSE_j}{MSE_{cj}} $$

For the domains (i.e., Tropics and northern and southern Extra-Tropics) over which the MSSS is calculated, it is recommended that an overall MSSS is provided. This is computed as:

$$ MSSS = 1 - \frac{\sum_j w_j\, MSE_j}{\sum_j w_j\, MSE_{cj}} $$

where the weighting function, w, is unity for verifications at stations and, for gridded data, is equal to cos(Θ), where Θ is the latitude of each corresponding gridpoint.

MSSS_j for fully cross-validated forecasts can be expanded in terms that involve r_fxj, the product moment correlation of the forecasts and observations at a point or station j. The first three terms of the decomposition of MSSS_j are related to phase errors (through the correlation), amplitude errors (through the ratio of the forecast to observed variances) and overall bias error, respectively, of the forecasts. These three terms and MSSS_j are a requirement of Level 2, and software is provided to calculate these.
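A minimal sketch of the per-point computation described above, with invented example data and climatology taken simply as the observed mean (i.e., without the cross-validation refinement):

    import numpy as np

    x = np.array([0.3, -0.1, 0.5, 0.2, -0.4])   # observations at point j
    f = np.array([0.2,  0.0, 0.4, 0.1, -0.2])   # deterministic forecasts

    mse   = np.mean((f - x) ** 2)               # MSE of the forecasts
    mse_c = np.mean((x.mean() - x) ** 2)        # MSE of climatology
    msss  = 1.0 - mse / mse_c                   # Mean Square Skill Score
    print(msss)

For gridded data the overall score would weight each gridpoint's MSE and climatological MSE by cos(latitude) before forming the same ratio.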
{"url":"http://www.bom.gov.au/wmo/lrfvs/msss.shtml","timestamp":"2014-04-19T12:24:54Z","content_type":null,"content_length":"5545","record_id":"<urn:uuid:ca0d2d3a-be87-4792-a3af-a05c13b7c98a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] number of required trials

Boks, M.P.M. M.P.M.Boks at umcutrecht.nl
Sun Oct 19 22:51:43 CEST 2008

Dear Experts,

Probably trivial, but I am struggling to get what I want: I need to know the number of required trials to get a certain number of successes. For example: how many trials do I need to have a 98% probability of 50 successes, when the a priori probability is 0.1 per trial?

The negative binomial function may do the job (not sure):

NegBinomial {stats} - The Negative Binomial Distribution
Density, distribution function, quantile function and random generation for the negative binomial distribution with parameters size and prob.

dnbinom(x, size, prob, mu, log = FALSE)
pnbinom(q, size, prob, mu, lower.tail = TRUE, log.p = FALSE)
qnbinom(p, size, prob, mu, lower.tail = TRUE, log.p = FALSE)
rnbinom(n, size, prob, mu)

I tried finding out how to do this by using examples, but I am at a loss. Any help would be much appreciated!

More information about the R-help mailing list
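One way to attack this, sketched in Python with SciPy (whose nbinom, like R's qnbinom, counts the failures that occur before the size-th success, so the total trial count is that quantile plus the 50 successes):

    from scipy.stats import nbinom

    # 98th percentile of the number of *failures* before the 50th success,
    # with success probability p = 0.1 per trial
    k = nbinom.ppf(0.98, 50, 0.1)

    # total trials = failures + the 50 successes
    trials = int(k) + 50
    print(trials)

The equivalent R call would be along the lines of qnbinom(0.98, size = 50, prob = 0.1) + 50.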
{"url":"https://stat.ethz.ch/pipermail/r-help/2008-October/177321.html","timestamp":"2014-04-19T18:00:11Z","content_type":null,"content_length":"3392","record_id":"<urn:uuid:4ae7c1d1-9e3c-43f0-b2d7-389abe841cbf>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: September 1997 [00163]

Re: matrix
• To: mathgroup at smc.vnet.net
• Subject: [mg8693] Re: matrix
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Fri, 19 Sep 1997 02:47:36 -0400
• Organization: University of Western Australia
• Sender: owner-wri-mathgroup at wolfram.com

Ferruccio Renzoni wrote:
> I need to write on file a matrix 100*100 (numerics) produced by Mathematica
> in order to read it with a Fortran program (I am not very good with MathLink,
> so that's the easiest solution for me to make a "link" between the two
> programs). The writing takes too long (it has to be repeated several times).
> Which is the fastest way of writing on file compatible with Fortran?

If you are using Unix you could use InterCall. InterCall is a Mathematica package designed to make it easy to link Mathematica and Fortran. See

I n t e r C a l l

What is InterCall?

InterCall is a Mathematica package that provides:
o easy access to all the routines in the NAG, IMSL, LINPACK, MINPACK and ITPACK subroutine libraries.
o interactive access to any other library or user-written code.
o straightforward declaration of default settings for arguments.

With InterCall you can:
o import routines written in Fortran, C, or Pascal and call them as if they were Mathematica functions.
o call external routines on a remote computer.
o develop and test the robustness and correctness of external libraries.
o write your own interface to other external libraries.

Why Use InterCall?
o To extend the type of problems that Mathematica can solve.
o The full scope of routines in standard numerical libraries become available to Mathematica users.
o Intelligent defaults are supplied automatically by InterCall when you call an external routine.
o Inspecting and modifying defaults is simple and uses commands named GetDefault and SetDefault.
o Independent documentation, for calling external routines from within Mathematica, is not required.

Who Should Use InterCall?
o Anyone whose work involves numeric processing and who wants Mathematica's ease of use.
o Mathematica users who need to access numerical libraries on a remote machine.
o Current users of numerical libraries who want a simple development
o Teachers of courses such as numerical methods.
o Engineers, scientists, economists, physicists, mathematicians, statisticians etc.

How Does One Use InterCall?

Load the InterCall package:
In[1]:= <<InterCall`;

Load the numerical library databases:
In[2]:= <<InterData`;

Import IMSL's dqdag integration routine:
In[3]:= GetDefault[ dqdag ]

The output indicates the calling syntax:
Out[3]= dqdag[$F_, $A_, $B_] ->

Integrate Sin[x] from x = 0 to x = Pi using IMSL:
In[4]:= dqdag[ Sin[#]&, 0, Pi ]
Out[4]= 2.

Import IMSL's devasb routine for finding eigenvalues of a band-symmetric matrix:
In[5]:= GetDefault[ devasb ]
Out[5]= devasb[$A_] ->

Define a band-symmetric matrix:
In[6]:= matrix =

Find the three smallest eigenvalues ($NEVAL is documented in the IMSL manual):
In[7]:= devasb[ matrix, $NEVAL ->

InterCall completely integrates the symbolic capabilities of Mathematica with the numeric routines of any external library. You can pass a Mathematica function, array, or any other expression, as an argument to any external routine and InterCall will send the correct type of information to that external routine.

System Requirements: InterCall runs under Mathematica version 3, and requires a Unix kernel or a Macintosh with a TCP/IP network connection.
Remote drivers to access external code on a remote computer are available for Alliant, CrayC90, CrayYMP, CM2sun, CM5sun, Convex, DEC, HP9000, HP9000_RISC, HP9000S700, IBMRS6000, NeXT, Sequent, SGI, Solaris, SPARC, VAX, VP. A driver for DEC Alpha (OSF) is under development.

InterCall includes:
o all the files needed to run InterCall on your computer.
o various remote drivers (available upon request)
o a detailed TeX manual describing how to use InterCall with Notebook

InterCall is distributed by a number of methods:
o email/ftp with TeX manuals: $275 $475
o email/ftp with manuals sent by post: $300 $500
o tar or Mac formatted disk with printed manuals sent by post: $315 $515
o full installation done by rlogin via internet, printed manuals sent by post: $375 $575

For more information on InterCall, please contact:
PO Box 522
Nedlands, WA 6909
Phone/Fax +61 8 9386 5666
Email: john at analytica.com.au
WWW: http://www.analytica.com.au/

InterCall was developed by: Dr. Terry Robb

Paul Abbott
Wolfram Research
Department of Physics
The University of Western Australia
Nedlands WA 6907, AUSTRALIA
Phone: +61-8-9380-2734
Fax: +61-8-9380-1014
mailto:paul at physics.uwa.edu.au
http://www.pd.uwa.edu.au/~paul

God IS a weakly left-handed dice player
{"url":"http://forums.wolfram.com/mathgroup/archive/1997/Sep/msg00163.html","timestamp":"2014-04-19T09:31:02Z","content_type":null,"content_length":"40106","record_id":"<urn:uuid:87a2c203-76f5-4828-91f6-d11c8b0bc920>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Particle representations and limit theorems for stochastic partial differential equations

Seminar Room 1, Newton Institute

Solutions of a large class of stochastic partial differential equations can be represented in terms of the de Finetti measure of an infinite exchangeable system of stochastic ordinary differential equations. These representations provide a tool for proving uniqueness, obtaining convergence results, and describing properties of solutions of the SPDEs. The basic tools for working with the representations will be described. Examples will include the convergence of an SPDE as the spatial correlation length of the noise vanishes, uniqueness for a class of SPDEs, and consistency of approximation methods for the classical filtering equations.
{"url":"http://www.newton.ac.uk/programmes/SPD/seminars/2010032914001.html","timestamp":"2014-04-18T05:39:58Z","content_type":null,"content_length":"6590","record_id":"<urn:uuid:fbcc2394-2e44-4649-a7a8-7f0e27e4bd9d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a North Billerica Trigonometry Tutor

...I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/discrete variety. I got an A in undergraduate linear algebra.
14 Subjects: including trigonometry, calculus, geometry, GRE

My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses.
36 Subjects: including trigonometry, English, reading, calculus

...I'm a good teacher, listen and explain things well, enjoy teens and am patient and understanding. I have excellent tutoring references. I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader.
15 Subjects: including trigonometry, calculus, physics, statistics

I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level.
14 Subjects: including trigonometry, statistics, geometry, algebra 1

...I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not doing that! If you happen to be Mandarin Chinese I know a little of your language: yi, ar, san, si ...I've taught Discrete Mathematics for undergraduates at SUNY Cortland.
14 Subjects: including trigonometry, calculus, geometry, GRE
{"url":"http://www.purplemath.com/North_Billerica_Trigonometry_tutors.php","timestamp":"2014-04-19T05:34:29Z","content_type":null,"content_length":"24347","record_id":"<urn:uuid:09fb3fca-db7f-4bd8-a3da-3dd6a5baddf9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
converting to base 10

08-30-2007 #1
Registered User, Join Date: Aug 2007

I'm currently writing a program that takes a number (n) and a base (b) between [1, 10] from the user's input and converts it into base 10. However, I have to make sure that all the digits of n are between [0, b-1], with the exception of n being 1. For example:

n = 012 and b = 3, output is 5
n = 00110 and b = 2, output is 6
n = 00110 and b = 1, output is 2

For base (b) = 1, the current program cannot check to see if the digits of n are greater than 1 or not, because of this code that I use to check it: output[index] = n % b. For example: n = 23 and b = 2: output[index] = 23 % 10 = 3 (the last digit), and 3 is greater than base 2; BUT if n = 23 and b = 1, output[index] = 23 % 1 = 0, and I CANNOT TAKE THE 3 to compare to base 1 so that 3 is greater than 1.

Please help me! The following is my code:

    int main(void)
    {
        int n, b;
        int output[64];
        int index = 0;

        printf("Enter base(b) between [1,10], and a number(n) between [0,b-1] in this format 'n b': ");
        scanf("%d %d", &n, &b);
        printf("n is: %d\n", n); // Test
        printf("b is: %d\n", b); // Test

        if ((b < 1) || (b > 10))
        {
            printf("Your base is not between 1 and 10");
            return 0;
        }
        if (n < 0)
        {
            printf("n must be positive");
            return 0;
        }
        while (n != 0)
        {
            output[index] = n % b;
            if ((output[index] >= b) && (output[index] != 1)) // TROUBLE AREA RIGHT HERE
            {
                printf("one of the digits is greater than the base");
                return 0;
            }
            n = n / b;
            printf("number to convert: %d\n", n);
        }
        return 0;
    }

Thank you!

Well, the if() inside the while loop makes no sense, since it can never be true. You've just done n % b, so output[index] >= b will always be false. Also, what is the other printf() supposed to do? Printing output[index] would tell you more IMO.

If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.

Sorry, the while loop is supposed to look like this:

    int main(void)
    {
        int n, b;
        int output[64];
        int index = 0;

        printf("Enter base(b) between [1,10], and a number(n) between [0,b-1] in this format 'n b': ");
        scanf("%d %d", &n, &b);
        printf("n is: %d\n", n); // Test
        printf("b is: %d\n", b); // Test

        if ((b < 1) || (b > 10))
        {
            printf("Your base is not between 1 and 10");
            return 0;
        }
        if (n < 0)
        {
            printf("n must be positive");
            return 0;
        }
        while (n != 0)
        {
            output[index] = n % 10;
            if ((output[index] >= b) && (output[index] != 1)) // TROUBLE AREA RIGHT HERE
            {
                printf("one of the digits is greater than the base");
                return 0;
            }
            n = n / b;
            printf("number to convert: %d\n", n);
        }
        return 0;
    }

For some reason if I put n = 110 and b = 2, it goes into the if statement, which it is not supposed to.

This problem is WAY easier if you read the "n" value as a string instead of an integer. You are going to be processing it digit-wise, anyway, so why convert from digits to a value only to convert back to digits again?

Hi, I'm reading "n" as a string now, but for some reason I keep getting a "segmentation fault (core dumped)". Do you know why it is doing this?
    int main(void)
    {
        int b;
        char n;
        int output[64];
        int index = 0;

        printf("Enter base(b) between [1,10], and a number(n), such that digits of n is between [0,b-1] in this format 'n b': ");
        scanf("%s %d", &n, &b);
        printf("n is: %s\n", n); // Test
        printf("b is: %d\n", b); // Test

        if ((b < 1) || (b > 10))
        {
            printf("Your base is not between 1 and 10");
            return 0;
        }
        if (n < 0)
        {
            printf("n must be positive");
            return 0;
        }
        while (n != 0)
        {
            output[index] = n % 10;
            n = n / b;
            printf("number to convert: %d\n", n);
        }
        return 0;
    }

Thank you!

You're not reading it as a string, you're reading it as a single char (but with string format). n needs to be an array large enough to hold the input. E.g.:

    char n[64];

Also, you'll have to change other parts of the code, because processing digit-by-digit is completely different than your old method. But the result will be much simpler than what you have now. Since I suspect this is homework, I won't post an actual solution -- but you should be able to solve the entire problem in fewer than 20 lines of code. The function I wrote which converts the number string from its given base is only three lines long.

Quote:
> n needs to be an array large enough to hold the input. E.g.: char n[64]; Also, you'll have to change other parts of the code, because processing digit-by-digit is completely different than your old method. But the result will be much simpler than what you have now. Since I suspect this is homework, I won't post an actual solution -- but you should be able to solve the entire problem in fewer than 20 lines of code. The function I wrote which converts the number string from its given base is only three lines long.

There's even a library function to do that, right? By the way, is base 1 actually a valid base? What's the semantics of "base 1"?

Tick marks.

1 = 1
2 = 11
3 = 111
4 = 1111
5 = 11111

EDIT: Base 1 was probably the first method of written counting ever invented. Even more amusing is the concept of "Base 0," where each number gets its own unique symbol:

1 = Harry
2 = Sam
3 = Bobby
4 = Johnny
5 = Terrence

Last edited by brewbuck; 08-30-2007 at 03:58 PM.

Quote:
> For some reason if I put n = 110 and b = 2, it goes into the if statement, which it is not supposed to.

That's because of this:

    n = n / b; /* which should have been n / 10 */

But anyway, you should try this problem with the string-based solution the others suggested. HINT: The thing which you are trying to do can be done using the strtol function.

Actually, I can't use char in this problem, everything has to be an integer. But, I got it now! Thanks!!

Consider that you may want to input base > 10, in which case the normal digits don't suffice, so you need to read the original number in as a string - not a single char of course, but an array of chars (which, when it's terminated with a zero (0, not '0') is a string in C).
{"url":"http://cboard.cprogramming.com/c-programming/93111-converting-base-10-a.html","timestamp":"2014-04-19T15:14:54Z","content_type":null,"content_length":"87451","record_id":"<urn:uuid:41371d91-b796-4f12-aa7d-cec743b73f64>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
What is W x H equals F x L?

Mathematical analysis is a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry. However, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).

Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and builds on their understanding of arithmetic. Whereas arithmetic deals with specified numbers, algebra introduces quantities without fixed values, known as variables. This use of variables entails a use of algebraic notation and an understanding of the general rules of the operators introduced in arithmetic. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Most quantitative results in science and mathematics are expressed as algebraic equations.

In logic, syntax is anything having to do with formal languages or formal systems without regard to any interpretation or meaning given to them. Syntax is concerned with the rules used for constructing, or transforming the symbols and words of a language, as contrasted with the semantics of a language which is concerned with its meaning. The symbols, formulas, systems, theorems, proofs, and interpretations expressed in formal languages are syntactic entities whose properties may be studied without regard to any meaning they may be given, and, in fact, need not be given any.

In mathematics, to solve an equation is to find what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an equation (two expressions related by equality). These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more in general, mathematical expressions. A solution of the equation is an assignment of expressions to the unknowns that satisfies the equation; in other words, expressions such that, when they are substituted for the unknowns, the equation becomes an identity. For example, the equation x + y = 2x – 1 is solved for the unknown x by the solution x = y + 1, since substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) – 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x – 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation, some of which are (x, y) = (1, 0) – that is, x = 1 and y = 0 – and (x, y) = (2, 1), and, in general, (x, y) = (a + 1, a) for all possible values of a.

In journalism, a human interest story is a feature story that discusses a person or people in an emotional way.
It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer. Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
{"url":"http://answerparty.com/question/answer/what-is-w-x-h-equals-f-x-l","timestamp":"2014-04-20T05:44:44Z","content_type":null,"content_length":"30849","record_id":"<urn:uuid:7b4f958b-f188-4bb1-b30e-888552d37a07>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
(1 pt) A car initially going 63 ft/sec brakes at a constant rate (constant negative acceleration), coming to a stop in 7 seconds.

Graph the velocity for t = 0 to t = 7.

How far does the car travel before stopping? distance = (include units)

How far does the car travel before stopping if its initial velocity is doubled, but it brakes at the same constant rate? distance = (include units)
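A sketch of the underlying constant-deceleration kinematics (just the standard formulas, not a submitted answer):

    v0, t = 63.0, 7.0               # ft/s, s
    a = -v0 / t                     # -9 ft/s^2
    d = v0 * t + 0.5 * a * t**2     # equals v0*t/2 = 220.5 ft
    print(d, "ft")

    # doubled initial speed, same braking rate: d = v^2 / (2|a|),
    # so the stopping distance quadruples
    v1 = 2 * v0
    print(v1**2 / (2 * abs(a)), "ft")   # 882.0 ft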
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1-pt-car-initially-going-63-ft-sec-brakes-constant-rate-constant-negative-acceleration-com-q1065138","timestamp":"2014-04-16T11:09:21Z","content_type":null,"content_length":"18508","record_id":"<urn:uuid:043149b4-6282-4cde-86d4-af86d8dafce4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics help needed - Grade 11 level!

leafs rule: 1) Potential is the amount of potential energy per coulomb. 1 joule of energy is required to move one coulomb of charge through an electric field. Would that be the answer they are looking for?

What I was hoping you would recognize (and maybe your book doesn't mention this) is that work is equal to the change in potential energy, that is

[tex] \vert W \vert = \vert \Delta U \vert \ \ \ \ \ \ \ \ (1)[/tex]

As you mentioned, potential is the potential energy of a unit charge, therefore

[tex] V = \frac{U}{q} [/tex]
[tex]\Rightarrow \Delta V = \frac{ \Delta U }{q} [/tex]

If we divide (1) by q, then

[tex] \vert \frac{W}{q} \vert = \vert \frac{\Delta U}{q} \vert [/tex]
[tex]\Rightarrow \vert \frac{W}{q} \vert = \vert \Delta V \vert [/tex]

Thus [itex] \Delta V [/itex] can be thought of as the work done on a unit charge by an electric field.

Also, what do they mean by "discuss the significance of these two points"? Would these two points be the poles or something?

I think figuring out the second part of the problem first will help you with this question.

3) I'm not sure about all this. All I have is a diagram that shows an electron heading downwards, and the question "Describe the path of the electron beam as it passed by the south pole".

By convention, the magnetic field [itex] \vec{B} [/itex] points toward the south end of a magnet. In order for the electron to change its path, it needs to have an acceleration, so by Newton's second law it needs to have a force acting on it. What is the equation of the force acting on a point charge by a magnetic field? It's

[tex] \vec{F} = q \vec{v} \times \vec{B} [/tex]

So if you can find the direction of the force, you know the direction of acceleration, and thus the change in the direction of the electron's velocity. (Do you know how to find the direction of a cross product?)

Also, I got another question: 4) A box with an initial speed of 2 m/s slides to rest on a horizontal floor in 3 metres. What is the coefficient of kinetic friction? What I have done: found time, using d = [(v_i + v_f)/2]·t. Found acceleration to be 2/3 m/s^2. But I have no idea where to go now. I don't know how to find friction coefficients without being given any forces!

Be careful about signs; the equation for acceleration is

[tex] a = \frac{v_f - v_i}{t} [/tex]
[tex]\Rightarrow a = \frac{ 0 - 2 }{3}[/tex]
[tex]\Rightarrow a = - \frac{2}{3} \ \frac{m}{s^2} [/tex]

Set up a free-body diagram for the box. Then set up Newton's second law equations for the vertical and horizontal components of force. You have the horizontal component of acceleration, so see if you can solve for what the force of friction is... then you can use the equation for friction to find the coefficient.
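For what it's worth, here is one way to finish problem 4 along the lines suggested above (this worked solution is mine, not from the thread, and it assumes g ≈ 9.8 m/s², which the thread never states). From d = [(v_i + v_f)/2]·t, the time is t = 2d/(v_i + v_f) = 2(3)/(2 + 0) = 3 s, so a = (0 − 2)/3 = −2/3 m/s². Vertically N = mg; horizontally the only force is kinetic friction, so Newton's second law gives μ_k·m·g = m·|a|, and the mass cancels:

[tex] \mu_k = \frac{\vert a \vert}{g} = \frac{2/3}{9.8} \approx 0.068 [/tex]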
{"url":"http://www.physicsforums.com/showthread.php?t=61921","timestamp":"2014-04-18T00:30:44Z","content_type":null,"content_length":"47212","record_id":"<urn:uuid:0070200e-3fd0-4ec9-899a-001b2e142b25>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Pseudo-Anosov puzzle 2: homology rank and dilatation

In fact, following on what I wrote about the two Farb-Leininger-Margalit theorems below, one might ask the following. Is there an absolute constant c such that, if f is a pseudo-Anosov mapping class on a genus g surface, and the f-invariant subspace of H_1(S) has dimension at least d, then log λ(f) >= c (d+1) / g? This would "interpolate" between Penner's theorem (the case d=0) and the F-L-M theorem about Torelli (the case d=2g).

2 thoughts on "Pseudo-Anosov puzzle 2: homology rank and dilatation"

1. Hey Jordan, I think I can show that g log(lambda) grows like log(d). This follows from Theorem 6.2 from my paper "Ideal Triangulations of Pseudo-Anosov Mapping Tori", which gives an alternate proof of Farb-Leininger-Margalit. If one has a pA map with a d-dimensional fixed subspace of homology, then the mapping torus has first betti number at least d+1. If one takes the singular fibers of the pA flow, then one can show that they generate H_1 rationally. So the number of singular fibers is at least d+1. When you drill these fibers, you get a cusped hyperbolic 3-manifold with at least d+1 cusps. One can show that the volume of such a manifold grows at least linearly in d (due to Colin Adams), and therefore the minimal number of tetrahedra in an ideal triangulation grows linearly in d. By my result Theorem 6.2, log(number of tetrahedra) gives a linear lower bound for g log(lambda), so it grows at least like log(d). Here's a link to the paper: Your conjecture sounds plausible, I think the sort of estimates that are made in the paper are far from being sharp. Ian Agol

2. This is an addendum to the comment: Actually, the part I said about the singular fibers generating H_1 rationally might not be right – I thought I had a proof of this, but it's not right. In any case, it's really easy to see that the number of tetrahedra has to grow linearly with d (volume is irrelevant), since the rank grows linearly with the number of tetrahedra, so the argument should still work.

Tagged dilatation, pseudo-Anosov, puzzles, topology
{"url":"http://quomodocumque.wordpress.com/2010/04/26/pseudo-anosov-puzzle-2-homology-rank-and-dilatation/","timestamp":"2014-04-19T17:01:50Z","content_type":null,"content_length":"61015","record_id":"<urn:uuid:3cbc6072-efb9-4383-8274-6ce4412307b7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
divergent sequences

April 21st 2008, 08:33 AM    #1
Is it possible for both the summation of (x sub n) and the summation of (y sub n) to be divergent while the summation of (x sub n · y sub n) is convergent?

April 21st 2008, 08:40 AM    #2
One of the best examples is to take $x_n = \frac{1}{n}$ and $y_n = \frac{-1}{n}$. Both series $\sum x_n$ and $\sum y_n$ are known to be divergent, though the product series $\sum x_n y_n = -\sum \frac{1}{n^2}$ is convergent.
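To spell out why the product series converges (a standard p-series comparison; this step is not in the original thread): $\sum x_n y_n = \sum \frac{1}{n}\left(-\frac{1}{n}\right) = -\sum \frac{1}{n^2}$, which converges (in fact to $-\pi^2/6$), since $\sum \frac{1}{n^p}$ converges for every $p > 1$, while the harmonic series $\sum \frac{1}{n}$ and its negative both diverge.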
{"url":"http://mathhelpforum.com/calculus/35351-divergent-sequences.html","timestamp":"2014-04-17T08:35:01Z","content_type":null,"content_length":"32665","record_id":"<urn:uuid:9eca1865-3d12-4dcc-a136-84e02a6ea5fc>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Loading Spatial Data

This chapter describes how to load spatial data into a database, including storing the data in a table with a column of type SDO_GEOMETRY. After you have loaded spatial data, you can create a spatial index for it and perform queries on it, as described in Chapter 4.

The process of loading data can be classified into two categories: bulk loading of data (Section 3.1) and transactional insert operations using SQL (Section 3.2).

3.1 Bulk Loading

Bulk loading can import large amounts of ASCII data into an Oracle database. Bulk loading is accomplished with the SQL*Loader utility. (For information about SQL*Loader, see Oracle9i Database Utilities.)

3.1.1 Bulk Loading SDO_GEOMETRY Objects

Example 3-1 is the SQL*Loader control file for loading four geometries. When this control file is used with SQL*Loader, it loads the same cola market geometries that are inserted using SQL statements in Example 2-1 in Section 2.1.

Example 3-1 Control File for Bulk Load of Cola Market Geometries

LOAD DATA
INFILE *
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE COLA_MARKETS
FIELDS TERMINATED BY '|'
(mkt_id  INTEGER EXTERNAL,
 name    CHAR,
 shape   COLUMN OBJECT
   (SDO_GTYPE      INTEGER EXTERNAL,
    SDO_ELEM_INFO  VARRAY TERMINATED BY '|/'
      (elements  FLOAT EXTERNAL),
    SDO_ORDINATES  VARRAY TERMINATED BY '|/'
      (ordinates FLOAT EXTERNAL))
)

Notes on Example 3-1:

● The EXTERNAL keyword in the definition mkt_id INTEGER EXTERNAL means that each value to be inserted into the MKT_ID column (1, 2, 3, and 4 in this example) is an integer in human-readable form, not binary format.

Example 3-2 assumes that a table named POLY_4PT was created as follows:

CREATE TABLE POLY_4PT (GID VARCHAR2(32),
                       GEOMETRY MDSYS.SDO_GEOMETRY);

Assume that the ASCII data consists of a file with delimited columns and separate rows fixed by the limits of the table with the following format:

geometry rows: GID, GEOMETRY

The coordinates in the GEOMETRY column represent polygons. Example 3-2 shows the control file for loading the data.

Example 3-2 Control File for Bulk Load of Polygons

LOAD DATA
INFILE *
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE POLY_4PT
FIELDS TERMINATED BY '|'
(GID       INTEGER EXTERNAL,
 GEOMETRY  COLUMN OBJECT
   (SDO_GTYPE      INTEGER EXTERNAL,
    SDO_ELEM_INFO  VARRAY TERMINATED BY '|/'
      (elements  FLOAT EXTERNAL),
    SDO_ORDINATES  VARRAY TERMINATED BY '|/'
      (ordinates FLOAT EXTERNAL))
)

3.1.2 Bulk Loading Point-Only Data in SDO_GEOMETRY Objects

Example 3-3 shows a control file for loading a table with point data.

Example 3-3 Control File for a Bulk Load of Point-Only Data

LOAD DATA
INFILE *
CONTINUEIF NEXT(1:1) = '#'
FIELDS TERMINATED BY '|'
(GID       INTEGER EXTERNAL,
 GEOMETRY  COLUMN OBJECT
   (SDO_GTYPE  INTEGER EXTERNAL,
    SDO_POINT  COLUMN OBJECT
      (X  FLOAT EXTERNAL,
       Y  FLOAT EXTERNAL))
)
BEGINDATA
1| 2001| -122.4215| 37.7862|
2| 2001| -122.4019| 37.8052|
3| 2001| -122.426| 37.803|
4| 2001| -122.4171| 37.8034|
5| 2001| -122.416151| 37.8027228|

3.2 Transactional Insert Operations Using SQL

Oracle Spatial uses standard Oracle9i tables that can be accessed or loaded with standard SQL syntax. This section contains examples of transactional inserts into columns of type SDO_GEOMETRY. Note that the INSERT statement in Oracle SQL has a limit of 999 arguments. Therefore, you cannot create a variable-length array of more than 999 elements using the SDO_GEOMETRY constructor inside a transactional INSERT statement; however, you can insert a geometry using a host variable, and the host variable can be built using the SDO_GEOMETRY constructor with more than 999 values in the SDO_ORDINATE_ARRAY specification. (The host variable is an OCI, PL/SQL, or Java program variable.)
To perform transactional insertions of geometries, you can create a procedure to insert a geometry, and then invoke that procedure on each geometry to be inserted. Example 3-4 creates a procedure to perform the insert operation.

Example 3-4 Procedure to Perform Transactional Insert Operation

CREATE OR REPLACE PROCEDURE
INSERT_GEOM(GEOM MDSYS.SDO_GEOMETRY) IS
BEGIN
  INSERT INTO TEST_1 VALUES (GEOM);
END;
/

Using the procedure created in Example 3-4, you can insert data by using a PL/SQL block, such as the one in Example 3-5, which loads a geometry into the variable named geom and then invokes the INSERT_GEOM procedure to insert that geometry.

Example 3-5 PL/SQL Block Invoking Procedure to Insert a Geometry

DECLARE
  geom mdsys.sdo_geometry := mdsys.sdo_geometry (2003, null, null,
    mdsys.sdo_elem_info_array (1,1003,3),
    mdsys.sdo_ordinate_array (-109,37,-102,40));
BEGIN
  INSERT_GEOM(geom);
END;
/

3.2.1 Polygon with Hole

The geometry to be stored can be a polygon with a hole, as shown in Figure 3-1.

Figure 3-1 Polygon with a Hole

The coordinate values for Element 1 and Element 2 (the hole), shown in Figure 3-1, are:

Element 1 = [P1(6,15), P2(10,10), P3(20,10), P4(25,15), P5(25,35), P6(19,40), P7(11,40), P8(6,25), P1(6,15)]
Element 2 = [H1(12,15), H2(15,24)]

The following example assumes that a table named PARKS was created as follows:

CREATE TABLE PARKS (NAME VARCHAR2(32), SHAPE MDSYS.SDO_GEOMETRY);

The SQL statement for inserting the data for geometry OBJ_1 is:

INSERT INTO PARKS
  VALUES ('OBJ_1', MDSYS.SDO_GEOMETRY(2003, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,1, 19,2003,3),
    MDSYS.SDO_ORDINATE_ARRAY(6,15, 10,10, 20,10, 25,15, 25,35, 19,40,
      11,40, 6,25, 6,15, 12,15, 15,24)));

The SDO_GEOMETRY object type takes values and constructors for its attributes SDO_GTYPE, SDO_ELEM_INFO, and SDO_ORDINATES. The SDO_GTYPE is 2003, and the SDO_ELEM_INFO has 2 triplet values because there are 2 elements. Element 1 starts at offset 1, is of ETYPE 1003, and its interpretation value is 1 because the points are connected by straight line segments. Element 2 starts at offset 19, is of ETYPE 2003, and has an interpretation value of 3 (a rectangle). The SDO_ORDINATES varying length array has 22 values, with SDO_ORDINATES(1...18) describing element 1 and SDO_ORDINATES(19...22) describing element 2.

Assume that two dimensions are named X and Y, their bounds are 0 to 100, and the tolerance for both dimensions is 0.005. The SQL statement for loading the USER_SDO_GEOM_METADATA metadata view is:

INSERT INTO USER_SDO_GEOM_METADATA
  VALUES ('PARKS', 'SHAPE',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', 0, 100, 0.005),
                        MDSYS.SDO_DIM_ELEMENT('Y', 0, 100, 0.005)),
    NULL);

3.2.2 Compound Line String

A compound line string is a connected sequence of straight line segments and circular arcs. Figure 3-2 is an example of a compound line string.

Figure 3-2 Line String Consisting of Arcs and Straight Line Segments

In Figure 3-2, the coordinate values for points P1..P7 that describe the line string OBJ_2 are:

OBJ_2 = [P1(15,10), P2(25,10), P3(30,5), P4(38,5), P5(38,10), P6(35,15), P7(25,20)]

The SQL statement for inserting this compound line string in a feature table defined as ROADS(GID Varchar2(32), Shape MDSYS.SDO_GEOMETRY) is:

INSERT INTO ROADS
  VALUES ('OBJ_2', MDSYS.SDO_GEOMETRY(2002, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1,4,2, 1,2,1, 9,2,2),
    MDSYS.SDO_ORDINATE_ARRAY(15,10, 25,10, 30,5, 38,5, 38,10, 35,15, 25,20)));

The SDO_GEOMETRY object type takes values and constructors for its attributes SDO_GTYPE, SDO_ELEM_INFO, and SDO_ORDINATES. The SDO_GTYPE is 2002, and the SDO_ELEM_INFO_ARRAY has 9 values because there are 2 subelements for the compound line string.
The first subelement starts at offset 1, is of SDO_ETYPE 2, and its interpretation value is 1 because the points are connected by straight line segments. Similarly, subelement 2 has a starting offset of 9. That is, the first ordinate value is SDO_ORDINATES(9), is of SDO_ETYPE 2, and has an interpretation value of 2 because the points describe a circular arc. The SDO_ORDINATES varying length array has 14 values, with SDO_ORDINATES(1..10) describing subelement 1, and SDO_ORDINATES(9..14) describing subelement 2.

Assume that two dimensions are named X and Y, their bounds are 0 to 100, and tolerance for both dimensions is 0.005. The SQL statement to insert the metadata into the USER_SDO_GEOM_METADATA view is:

INSERT INTO USER_SDO_GEOM_METADATA
  VALUES ('ROADS', 'SHAPE',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', 0, 100, 0.005),
                        MDSYS.SDO_DIM_ELEMENT('Y', 0, 100, 0.005)),
    NULL);

3.2.3 Compound Polygon

A compound polygon's boundary is a connected sequence of straight line segments and circular arcs, whose first point is equal to its last point. Figure 3-3 is an example of a compound polygon.

Figure 3-3 Compound Polygon

In Figure 3-3, the coordinate values for points P1 to P8 that describe the polygon OBJ_3 are:

OBJ_3 = [P1(20,30), P2(11,30), P3(7,22), P4(7,15), P5(11,10), P6(21,10), P7(27,30), P8(25,27), P1(20,30)]

The following example assumes that a table named PARKS was created as follows:

CREATE TABLE PARKS (GID VARCHAR2(32), SHAPE MDSYS.SDO_GEOMETRY);

The SQL statement for inserting this compound polygon is:

INSERT INTO PARKS
  VALUES ('OBJ_3', MDSYS.SDO_GEOMETRY(2003, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1,1005,2, 1,2,1, 13,2,2),
    MDSYS.SDO_ORDINATE_ARRAY(20,30, 11,30, 7,22, 7,15, 11,10, 21,10,
      27,30, 25,27, 20,30)));

The SDO_GEOMETRY object type takes values and constructors for its attributes SDO_GTYPE, SDO_ELEM_INFO, and SDO_ORDINATES. The SDO_GTYPE is 2003, and the SDO_ELEM_INFO has 3 triplet values. The first triplet (1,1005,2) identifies the element as a compound polygon (ETYPE 1005) with two subelements. The first subelement starts at offset 1, is of ETYPE 2, and its interpretation value is 1 because the points are connected by straight line segments. Subelement 2 has a starting offset of 13, is of ETYPE 2, and has an interpretation value of 2 because the points describe a circular arc. The SDO_ORDINATES varying length array has 18 values, with SDO_ORDINATES(1...14) describing subelement 1, and SDO_ORDINATES(13...18) describing subelement 2.

Assume that two dimensions are named X and Y, their bounds are 0 to 100, and tolerance for both dimensions is 0.005. The SQL statement to insert the metadata into the USER_SDO_GEOM_METADATA view is:

INSERT INTO USER_SDO_GEOM_METADATA
  VALUES ('PARKS', 'SHAPE',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', 0, 100, 0.005),
                        MDSYS.SDO_DIM_ELEMENT('Y', 0, 100, 0.005)),
    NULL);

3.2.4 Compound Polygon with Holes

A compound polygon's boundary is a connected sequence of straight line segments and circular arcs. Figure 3-4 is an example of a geometry that contains a compound polygon with a hole (or void).
Figure 3-4 Compound Polygon with a Hole

In Figure 3-4, the coordinate values for points P1 to P8 (Element 1) and C1 to C3 (Element 2) that describe the geometry OBJ_4 are:

Element 1 = [P1(20,30), P2(11,30), P3(7,22), P4(7,15), P5(11,10), P6(21,10), P7(27,30), P8(25,27), P1(20,30)]
Element 2 = [C1(10,17), C2(15,22), C3(20,17)]

The following example assumes that a table named PARKS was created as follows:

CREATE TABLE PARKS (GID VARCHAR2(32), SHAPE MDSYS.SDO_GEOMETRY);

The SQL statement for inserting this compound polygon with a hole is:

INSERT INTO PARKS
  VALUES ('OBJ_4', MDSYS.SDO_GEOMETRY(2003, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1,1005,2, 1,2,1, 13,2,2, 19,2003,4),
    MDSYS.SDO_ORDINATE_ARRAY(20,30, 11,30, 7,22, 7,15, 11,10, 21,10,
      27,30, 25,27, 20,30, 10,17, 15,22, 20,17)));

The SDO_GEOMETRY object type takes values and constructors for its attributes SDO_GTYPE, SDO_ELEM_INFO, and SDO_ORDINATES. The SDO_GTYPE is 2003, and the SDO_ELEM_INFO has 4 triplet values. The first 3 triplet values represent element 1. The first triplet (1,1005,2) identifies this element as a compound element with two subelements. The values in SDO_ELEM_INFO(1...9) pertain to element 1, while SDO_ELEM_INFO(10...12) are for element 2. The first subelement starts at offset 1, is of ETYPE 2, and its interpretation value is 1 because the points are connected by straight line segments. Subelement 2 has a starting offset of 13, is of ETYPE 2, and has an interpretation value of 2 because the points describe a circular arc. The fourth triplet (19,2003,4) represents element 2. Element 2 starts at offset 19, is of ETYPE 2003, and its interpretation value is 4, indicating that it is a circle. The SDO_ORDINATES varying length array has 24 values, with SDO_ORDINATES(1...14) describing subelement 1, SDO_ORDINATES(13...18) describing subelement 2, and SDO_ORDINATES(19...24) describing element 2.

Assume that two dimensions are named X and Y, their bounds are 0 to 100, and tolerance for both dimensions is 0.005. The SQL statement to insert the metadata into the USER_SDO_GEOM_METADATA view is:

INSERT INTO USER_SDO_GEOM_METADATA
  VALUES ('PARKS', 'SHAPE',
    MDSYS.SDO_DIM_ARRAY(MDSYS.SDO_DIM_ELEMENT('X', 0, 100, 0.005),
                        MDSYS.SDO_DIM_ELEMENT('Y', 0, 100, 0.005)),
    NULL);

3.2.5 Transactional Insertion of Point-Only Data

A point-only geometry can be inserted with the following statement:

INSERT INTO PARKS
  VALUES ('OBJ_PT', MDSYS.SDO_GEOMETRY(2001, NULL,
    MDSYS.SDO_POINT_TYPE(12, 14, NULL), NULL, NULL));
{"url":"http://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a88805/sdo_objl.htm","timestamp":"2014-04-21T01:27:20Z","content_type":null,"content_length":"31196","record_id":"<urn:uuid:c4c57665-9ab6-499b-b2b7-40761612078e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Ultimate Zone Rating (UZR), Part 2

Adjusting for everything under the sun, or dome, or whatever the case may be.

Note: A Primer reader, and researcher in his own right (Shorty), diligently pointed out to me a possible error in the methodology described in Part I of this series. Rather than the two-part (and confusing) process I used to determine the number of balls, and hence, runs, cost or saved by a fielder in each zone, he suggested that a one-part and much simpler process can and should be used. His suggested methodology is as follows: To determine a fielder's "balls cost or saved" in each zone, first the fielder's out percentage is determined by dividing his "balls caught" (his outs) by the total balls in play in that zone (hits plus outs) when that player is on the field, and then subtracting the league outs in that zone (made by a fielder at that position only) divided by the league balls in play in that zone, and multiplying the difference by the total balls in play in that zone when that player is on the field.

I know it still sounds a little complicated, but it is actually simple, straightforward, and obvious. More importantly, it may be more accurate than the old methodology. Basically it is a fielder's simple zone rating in each zone minus an average fielder's (at that position) simple zone rating in that zone, multiplied by the fielder's total chances in that zone. As I stated in Part I, a UZR is essentially the weighted average of a player's simple ZR in every zone on the field.

Since Shorty's comments were posted on Primer, several other equally credible readers and researchers have suggested that the original methodology may in fact be better - for various reasons that I won't go into. To be honest, I have gone back and forth between the two methodologies (and others) for several years now, and I'm not even sure anymore why I settled on the current one. Well, after some more rumination and sleepless nights, and invaluable help from Primer friends and colleagues, I have decided that Shorty is right - the simpler methodology is the more correct one. At the end of this article, you will find updated unadjusted UZR runs (using the simpler methodology) for the 2002 NL and AL shortstops, as well as the corresponding adjusted results (also using the new methodology), which is the subject of this article.

Anyway, let's get on with the UZR adjustments...

Since my original UZR came out several years ago, it has been suggested that there are a number of factors which might influence a fielder's ability to turn a batted ball into an out other than what zone that ball is hit into. These are noted in Part I and are reiterated below:

1. Park factors
2. Speed of the batted ball
3. Handedness of the batter
4. Ground/Fly ratio of the pitcher
5. Baserunners/outs

Park Factors

Actually, park factors have always been included in my UZR ratings. This year, I made some minor changes. I'll explain what changes were made, how UZR park factors are calculated, and how they are applied to the UZR player ratings.

Originally, I used separate infield park factors for each of the infield positions, reasoning that what effect a park's peculiarities had on one infield position (e.g., the condition of the turf, lighting, glare from the sun, etc.) might not be the same for another infield position. At some point, however, I decided to use only one infield park factor for all infield positions. The reason for the change was two-fold: One, this considerably increased my sample size -
thus the resultant infield park factors were more reliable. Two, I figured that the primary influence on an infield park factor was the condition (and type) of the turf. Given that, I figured that the infield park factor should be more or less the same for all infield positions. Right or wrong, that is how it is presently done.

For the outfield park factors, I have always separated the OF into three segments and assigned a separate park factor to each segment - LF, CF, and RF. The LF segment consists of all 7L, 7, and 78 zones (see the retrosheet zones). The CF segment consists of all 8L and 8R zones, and the RF segment consists of all 89, 9, and 9L zones. You could certainly make an argument for wanting more granularity in the OF park factors (and probably for the IF park factors as well), but as with all complex methodologies, you often reach a point of diminishing returns. As well, in determining the level of granularity in any methodology, there is always a balance that must be attained between rigor and sample size. In fact, I grappled with this issue (rigor versus sample size) many times throughout the process of adjusting the UZR ratings.

The final change that was made this year, in terms of the park adjustments, was that I used a larger sample size for each park (up to ten years), and I was more careful in accounting for situations which might have affected the park factors (such as changes in OF dimensions, fence heights, turf type, etc.). As you will see in the 2002 park factor chart below, I used data since the 1993 season, and treated a park as separate as long as no material changes were made. If a material change was made to a park (such as replacing artificial turf with natural grass, or changing an OF dimension), I treated the "renovated" park as a completely different park. If a change was made to the OF but not to the IF, I treated the renovated park as new for the OF park factors but not for the IF park factors (and vice versa). Thus, some parks (like Wrigley Field and Yankee Stadium) have 10 years of data that go into their IF and OF park factors, other parks (Turner Field, Coors Field, et al.) have less than 10 years for their IF and OF factors, while still others (like the new Comisky Park) have x years for their IF factors and y years for their OF factors (the OF dimensions in Comisky were changed in 2001).

UZR park factors are calculated in the same way that regular park factors are calculated. For the IF, the home (home and road teams combined) groundball out percentage is divided by 1/14th (or 1/16th, depending upon the number of parks in the league) of the home GB out percentage plus 13/14th (or 15/16th) of the road (again, home and road teams combined) GB out percentage. (Actually, the computer is programmed to use 1/15th and 14/15th for all leagues and years.) Road game data is of course for that team's road games only. For the OF, the same calculations are done, using flyball and line drive out percentages (FB's and LD's are treated as if they were the same) rather than GB out percentage. The same OF park factor is applied to fly balls and line drives. As I said earlier, separate calculations are done for the LF, CF, and RF zones in the OF. Errors in the IF and OF are treated as outs. A ground ball error park factor is also calculated, using the number of GB errors divided by the number of GB outs plus errors. When all is said and done, for each "park", we get an IF park factor, a LF PF, a CF PF, a RF PF, and a GB error PF.
I put the word "park" in quotes because, as I explained above, two different "parks" may actually be the same park with different dimensions and/or different turf. The chart below contains the 2002 regressed park factors for all 30 NL and AL parks. These factors are used to park adjust the 2002 UZR player ratings. Park factors are regressed according to the size of the sample data.

Park   GB PF   Error PF   LF PF   CF PF   RF PF
ANA    1.01    1.00        .99    1.00    1.01
ARI     .97     .99       1.02    1.00    1.02
ATL    1.01    1.10       1.01     .98    1.02
BAL    1.01    1.02       1.02     .99     .99
BOS     .99    1.12        .85     .98    1.01
CHA    1.00     .94       1.00     .99    1.01
CHN    1.00    1.00        .99     .99     .99
CIN     .99     .97       1.02     .99    1.01
CLE    1.01    1.01        .98     .98    1.01
COL     .97    1.02        .93     .91     .91
DET     .99    1.00       1.02     .99    1.01
FLO    1.01    1.05        .98    1.00    1.03
HOU     .99     .95        .98    1.00    1.02
KCA    1.00     .96       1.02     .99     .99
LAN    1.02     .97       1.02    1.02    1.01
MIL    1.00    1.03       1.02     .99     .99
MIN    1.00     .94       1.00    1.01     .98
MON     .99     .92       1.01    1.00     .99
NYA     .99     .99       1.03    1.02    1.01
NYN    1.01    1.06       1.01    1.02    1.00
OAK    1.00    1.03       1.01     .99     .99
PHI    1.01     .95       1.00     .98     .98
PIT    1.01    1.05        .97     .98     .99
SDN    1.01    1.01       1.02    1.04    1.03
SEA    1.02     .97       1.05    1.06    1.03
SFN    1.00     .96        .99    1.02     .99
SLN    1.00     .99        .99    1.02    1.01
TBA     .98     .99        .99     .97     .96
TEX    1.02     .99        .97     .97     .99
TOR    1.00     .96        .98    1.01    1.01

The UZR park factors are applied in the same way that some of the other UZR adjustments are applied, and in the same way that most offensive park factors are applied. (Note: Technically this is not the correct way to apply park factors - however it is close enough.) When an out is recorded by a particular fielder, rather than crediting that fielder with exactly one out, I credit him with "one divided by the park factor" number of outs. For example, the infield park factor at Coors Field is .97, so for every out recorded by an infielder in Coors Field, he gets credited with one divided by .97, or 1.03 outs. Every out for every fielder is park adjusted in this way, depending upon what park the out was recorded in and what the corresponding park factor is for that park at that position. In other words, outs in a fielder's home park are not the only outs that are park adjusted - all outs are park adjusted.
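A minimal sketch of the out-crediting described above (the function and table here are illustrative, not the author's actual code):

# Each out a fielder records is credited as 1/PF "outs", using the
# factor for the park and position group where the out occurred.
INFIELD_PF = {"COL": 0.97, "SEA": 1.02, "CHN": 1.00}   # values from the chart above

def park_adjusted_outs(park, raw_outs):
    """Credit each raw out as 1/(park factor) outs."""
    pf = INFIELD_PF.get(park, 1.00)
    return raw_outs / pf

# An infielder's out at Coors Field (IF PF = .97) is worth 1/.97 = 1.03 outs:
print(round(park_adjusted_outs("COL", 1), 2))   # 1.03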
Batted Ball Speed

This one is obvious, yet this is the first year that I was able to obtain the speed of every batted ball (since 1999), as judged by the same people ("stringers" I think they call them) who record batted ball type and location. By the way, all of my play-by-play data is courtesy of two independent sources. One is Gary Gillette and Pete Palmer and the other is STATS Inc. I believe that STATS, at least, uses three "stringers" for each game, and somehow combines their judgments, in order to reduce human error and bias. Anyway, each ball in play is designated (by the "stringers") as hard, medium, or soft. For ground balls, the meaning of these designations is obvious. For bunts, fly balls, and especially pop flies, it is not (things like height, distance, and trajectory are considered for fly balls and pop-ups). The important thing is that all of the "stringers" are reasonably consistent. From working with the data, I am fairly confident they are.

Here is an example of how the GB out percentages change in the various IF zones, depending upon the speed of the ground ball:

GB Out Percentages by Zone and Batted Ball Speed

Zone    Soft    Medium   Hard
all     0.813   0.781    0.513
5/56    0.689   0.714    0.435
6/6M    0.810   0.798    0.510
4/4M    0.835   0.819    0.541
3/34    0.860   0.753    0.465
5L      0.724   0.704    0.384
5       0.753   0.808    0.523
56      0.688   0.617    0.331
6       0.874   0.872    0.689
6M      0.638   0.545    0.216
4M      0.555   0.458    0.180
4       0.930   0.905    0.728
34      0.804   0.616    0.260
3       0.953   0.887    0.639
3L      0.966   0.820    0.394

How the OF fly ball and line drive out percentages vary with the speed of the ball depends upon the OF zone. For example, softly hit fly balls are harder to catch in the short outfield zones. The opposite is true in the medium and deep OF zones.

FB Out Percentages by Fly Ball Depth and Batted Ball Speed

Fly ball depth   Soft   Medium   Hard
all              .697   .931     .728
deep             .957   .939     .708
medium           .832   .944     .855
short            .591   .882     .877

How is the batted ball speed applied to UZR? Rather than using a park factor type adjustment, I opted to "split" each zone into six separate "sub-zones", and keep track of player outs and chances and league outs and chances separately in each "sub-zone". Why six and not three (soft, medium, and hard)? Well, I also "tacked on" the handedness of the batter, which is another important adjustment, as you will see later on. In other words, a fielder's runs saved or cost is calculated six separate times for each zone on the field. I warned you that there were going to be lots of "rigor versus sample size" issues in the UZR adjustments!

Batter Handedness

Well, I already mentioned above that the handedness of the batter significantly affects the out rate in the various zones for both the IF and the OF. The reason for this is three-fold: One, the positioning of the fielders changes, so that for example, the SS catches more balls in zone 56 (the SS hole) with a RHB at the plate than with a LHB (he is presumably shaded towards the hole). Two, when a batter pulls a ground ball, it is a weaker hit on the average, and when a batter pulls a fly ball or line drive, it is generally hit harder and further (the opposite of the ground ball). Three, RHB's and LHB's, as a group, hit the ball differently, even after accounting for which side of the field they tend to hit to.

Here are some examples as to how GB outs in the various zones are affected by the handedness of the batter:

GB Out Percentage and Batter Handedness

Zone     RHB    LHB
all 3    .434   .538
3        .752   .822
34       .256   .293
all 4    .745   .750
4        .882   .889
4M       .510   .376
all 5    .582   .478
5        .730   .605
56       .444   .400
all 6    .737   .703
6        .853   .794
6M       .485   .579

FB Out Percentage and Batter Handedness

R/L   7 Zones   78 Zones   8 Zones   89 Zones   9 Zones
R     .832      .778       .862      .816       .845
L     .853      .827       .870      .763       .834

LD Out Percentages and Batter Handedness

R/L   7 Zones   78 Zones   8 Zones   89 Zones   9 Zones
R     .163      .176       .225      .202       .258
L     .261      .201       .227      .165       .150

As I explained above, the way I adjust UZR for batter handedness is the same as the way I adjust for batted ball speed. I keep track of LHB's and RHB's separately.

Pitcher G/F Ratio

The ground/fly ratio of the pitcher also affects the GB and FB (not so much for LD's) out rate. Basically, a ground ball pitcher allows ground balls that are easier to field and fly balls that are more difficult to field. The opposite is true for fly ball pitchers. The more extreme a pitcher's G/F ratio, the more pronounced the effect.
Originally, I assumed that if I controlled for ground ball and fly ball speed, the differences would disappear (IOW, that ground ball pitchers allowed a greater percentage of soft ground balls, etc.). I figured then, that since I was already accounting for batted ball speed, I wouldn't need to account for pitcher G/F ratio. I was wrong. Interestingly, and somewhat inexplicably, when I controlled for batted ball speed, the differences between ground ball and fly ball pitchers were still there and almost as pronounced as before. (Also, when I controlled for batted ball speed, the differences between LHB's and RHB's did not change much either.)

In the following chart, FB pitchers had an average G/F ratio of around .8 and GB pitchers around 2.0. FB pitchers were around 25% of all pitchers with at least 100 PA in a season. Same for GB pitchers.

Type of Pitcher   GB out %   FB out %
FB                .721       .709
GB                .748       .695

As it turns out, pitcher G/F ratios do not have much of an effect on a player's UZR rating, for one simple reason (actually two reasons). GB and FB out percentages are not that sensitive to a pitcher's G/F ratio, most pitchers' G/F ratios are near average (around 1.4), and almost all pitching staffs have near-average G/F ratios (well, maybe that was three reasons). In any case, the way I adjust a fielder's UZR for his pitching staff's G/F ratio is to keep track of the average G/F ratio for all pitchers while the fielder is on the field, and then to apply this to his UZR rating - at the end. In fact, I simply adjust a player's UZR rate (and then, eventually, his UZR runs) by .001 per .1 above or below an average pitcher G/F ratio. For example, if an infielder's pitching staff had an average G/F ratio of 1.8, since this is .4 more than the average pitcher G/F ratio of 1.4, the infielder's UZR rate would be reduced by .004 (since those pitchers presumably allow easier ground balls). This is admittedly a very coarse way to do an adjustment; however, considering how relatively unimportant pitcher G/F ratio is, I think it works just fine.

Baserunners/Outs

Finally, we get to the last, but certainly not the least, UZR adjustment. Each infielder's (but not the outfielders') GB out percentage is significantly influenced by the baserunners and the number of outs (as with the other adjustments, this is not to say that fielders, as a general rule, have markedly different distributions of baserunners and outs - in fact, they don't, as you will see from a comparison of the unadjusted and adjusted UZR ratings). This is mostly due to the positioning of the infielders (e.g., with a runner on first, the first baseman has limited range, with a runner on third and less than two outs, the infield may be playing up, etc.), and to a much lesser extent to the approach of the pitchers and batters (e.g., with two outs, GB out percentages tend to be higher across the board).
Here are the GB out percentages for each "set of zones" and for each of the 24 bases/outs situations:

The "3" zones (all zones beginning with the number "3")
Overall: .513

Baserunners   0 Out   1 Out   2 Outs
xxx           .537    .546    .549
1xx           .402    .399    .432
x2x           .544    .549    .561
xx3           .470    .485    .561
12x           .477    .522    .562
1x3           .385    .402    .427
x23           .489    .497    .551
123           .495    .466    .529

The "4" zones (all zones beginning with the number "4")
Overall: .748

Baserunners   0 Out   1 Out   2 Outs
xxx           .744    .741    .743
1xx           .738    .749    .757
x2x           .759    .763    .754
xx3           .742    .658    .771
12x           .768    .768    .775
1x3           .752    .755    .755
x23           .737    .723    .772
123           .732    .776    .767

The "5" zones (all zones beginning with the number "5")
Overall: .566

Baserunners   0 Out   1 Out   2 Outs
xxx           .561    .570    .574
1xx           .540    .553    .590
x2x           .570    .564    .585
xx3           .556    .528    .617
12x           .529    .548    .589
1x3           .531    .533    .578
x23           .587    .523    .604
123           .526    .548    .587

The "6" zones (all zones beginning with the number "6")
Overall: .728

Baserunners   0 Out   1 Out   2 Outs
xxx           .720    .723    .724
1xx           .739    .741    .739
x2x           .727    .712    .725
xx3           .669    .661    .746
12x           .751    .747    .741
1x3           .718    .753    .764
x23           .686    .704    .729
123           .742    .742    .739

How are the baserunner/outs adjustments handled? Dare I break each sub-zone down further into 24 (the number of bases/outs combinations) more sub-sub-zones? Not a chance! First I go through my ten-year database (93-02) to determine the "adjustment factors" for each infield position and for each of the 24 bases/outs combinations. For example, as you can see above, a simple ZR for an average first baseman for 1993-2002 was .513. With a runner on first only, and 0 outs, however, it was .402. Therefore, the bases/outs adjustment factor for a first baseman and this particular bases/outs combination is .402/.513, or .784. I use this adjustment factor for all outs recorded by a first baseman, regardless of the zone (yes, I know that each zone should have its own bases/outs adjustment factor, but I can only deal with so much granularity in one lifetime), and I apply the adjustment in the same way that I apply the park factor adjustments - by dividing each out recorded by the adjustment factor. In the case of a first baseman who records an out with a runner on first base only and no outs, he gets credit for 1/.784, or 1.28 outs (remember - this is technically not the correct way to apply an adjustment factor, but it is good enough, IMO).
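As a sketch of how the pieces fit together (hypothetical code; the actual UZR implementation is not published), here is the new "balls saved" calculation from the top of the article, plus the bases/outs out-crediting just described:

def balls_saved(player_outs, player_chances, league_outs, league_chances):
    """(player simple ZR - league simple ZR) * player chances, per zone."""
    return (player_outs / player_chances
            - league_outs / league_chances) * player_chances

# Bases/outs adjustment factor for a first baseman, runner on first, 0 outs:
factor = 0.402 / 0.513          # = .784, from the "3" zones chart above
print(round(1 / factor, 2))     # each such out is credited as 1.28 outs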
Well, those are about all the non-trivial adjustments that I could think of. If anyone comes up with any more, please keep them to yourself! I still have to crank out the rest of the 2002 revised (new and improved) Super-lwts!

To get an idea as to how all of these adjustments affect a player's UZR rating, here are the same SS charts I printed in Part I with the adjusted UZR runs added. Note again that I redid the original unadjusted ratings using the new methodology, as described at the beginning of this article, so that the following charts contain adjusted and unadjusted UZR runs using the new methodology only.

2002 NL SS UZR Data

Name               Team   Games Played   Chances   Adj. UZR Rate   Unadj. Runs   Adj. Runs
Tony Womack        Ari    149            422       .678            -25           -22
Alex S. Gonzalez   ChC    142            382       .734             -7            -4
Jimmy Rollins      Phi    152            488       .701            -16           -17
Barry Larkin       Cin    135            425       .720            -11            -8
Jack Wilson        Pit    143            499       .733              1            -6
Rey Ordonez        NYM    142            416       .759              6             4
Rich Aurilia       SF     131            327       .689            -16           -16
Orlando Cabrera    Mon    153            519       .758              3             4
Jose Hernandez     Mil    149            469       .786             10            14
Deivi Cruz         SD     147            380       .734             -1            -3
Rafael Furcal      Atl    150            451       .765              8             7
Juan Uribe         Col    155            521       .797              9            19
Edgar Renteria     StL    149            449       .762              4             5
Andy Fox           Fla    112            265       .770             11             5
Cesar Izturis      LA     128            290       .774              9             6

2002 AL SS UZR Data

Name                Team   Games Played   Chances   Adj. UZR Rate   Unadj. Runs   Adj. Runs
Cristian Guzman     Min    147            378       .688            -17           -17
Derek Jeter         NYY    156            415       .673            -26           -24
Neifi Perez         KC     139            405       .707            -16           -13
Carlos Guillen      Sea    130            334       .737             -2            -2
Omar Vizquel        Cle    150            461       .761              9             7
Chris Gomez         TB     130            345       .745             -1             0
Miguel Tejada       Oak    156            539       .756              6             4
Nomar Garciaparra   Bos    154            481       .755              3             4
David Eckstein      Ana    147            406       .771             10             7
Alex Rodriguez      Tex    162            443       .768             14             8
Royce Clayton       CWS    109            278       .777              8             8
Mike Bordick        Bal    117            354       .814             15            19
{"url":"http://www.baseballthinkfactory.org/primate_studies/discussion/lichtman_2003-03-21_0","timestamp":"2014-04-21T15:29:40Z","content_type":null,"content_length":"131639","record_id":"<urn:uuid:359df8ac-df48-4ee6-ad3f-8807e060fc7b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Keywords: Probabilidade, Adrien-Marie Legendre, Blaise Pascal, Christiaan Huygens, Fuzzy logic, George Boole, Jakob Bernoulli, Karl Pearson, Kolmogorov

The word probability derives from the Latin probare (to prove, or to test). Informally, probable is one of many words used for uncertain or unknown events, and it is often replaced by words such as "luck", "risk", "hazard", "uncertainty", or "doubtful", depending on the context. Just as the theory of mechanics assigns precise definitions to such everyday terms as work and force, the theory of probability attempts to quantify the notion of probable.

In essence, there is one set of mathematical rules for manipulating probability, listed under "Formalization of probability" below. (There are other rules for quantifying uncertainty, such as Dempster-Shafer theory and fuzzy logic, but those are essentially different and incompatible with the laws of probability as they are usually understood.) However, there is ongoing debate over what, exactly, the rules apply to; this topic is called probability interpretations.

The general idea of probability is often divided into two related concepts:

• Aleatory probability, which represents the likelihood of future events whose occurrence is governed by some random physical phenomenon. This concept can be further divided into physical phenomena that are predictable, in principle, with sufficient information, and phenomena which are essentially unpredictable. An example of the first kind is a roulette wheel, and an example of the second kind is radioactive decay.

• Epistemic probability, which represents our uncertainty about propositions when one lacks complete knowledge of causative circumstances. Such propositions may be about past or future events, but need not be. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true, and to determine how "probable" it is that a suspect committed a crime, based on the evidence presented.

It is an open question whether aleatory probability is reducible to epistemic probability based on our inability to precisely predict every force that might affect the roll of a die, or whether such uncertainties exist in the nature of reality itself, particularly in quantum phenomena governed by Heisenberg's uncertainty principle. Although the same mathematical rules apply regardless of which interpretation is chosen, the choice has major implications for the way in which probability is used to model the real world.

Historical remarks

The scientific study of probability is a modern development. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems only arose much later. The doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the first scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve y = φ(x), x being any error and y its probability, and laid down three properties of this curve: (1) it is symmetric as to the y-axis; (2) the x-axis is an asymptote, the probability of the error $\infty$ being 0; (3) the area enclosed is 1, it being certain that an error exists. He deduced a formula for the mean of three observations. He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.

The method of least squares is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes. In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

$\phi(x) = ce^{-h^2 x^2}$

c and h being constants depending on precision of observation. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).
Formalization of probability

Like other theories, the theory of probability is a representation of probabilistic concepts in formal terms -- that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are then interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation, sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's formulation, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details:

1. a probability is a number between 0 and 1;
2. the probability of an event or proposition and its complement must add up to 1; and
3. the joint probability of two events or propositions is the product of the probability of one of them and the probability of the second, conditional on the first.

The reader will find an exposition of the Kolmogorov formulation in the probability theory article, and in the Cox's theorem article for Cox's formulation. See also the article on probability axioms.
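Stated compactly (a standard rendering of the three laws above; the conditional-probability notation is mine, not the article's):

$0 \le \Pr(A) \le 1, \qquad \Pr(A) + \Pr(\lnot A) = 1, \qquad \Pr(A \cap B) = \Pr(A)\,\Pr(B \mid A)$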
Representation and interpretation of probability values

The probability of an event is generally represented as a real number between 0 and 1. An impossible event has a probability of exactly 0, and a certain event has a probability of 1, but the converses are not always true: probability 0 events are not always impossible, nor probability 1 events certain. The rather subtle distinction between "certain" and "probability 1" is treated at greater length in the article on "almost surely".

Most probabilities that occur in practice are numbers between 0 and 1, indicating the event's position on the continuum between impossibility and certainty. The closer an event's probability is to 1, the more likely it is to occur. For example, if two events are assumed equally probable, such as a flipped coin landing heads-up or tails-up, we can express the probability of each event as "1 in 2", or, equivalently, "50%" or "1/2".

Probabilities are equivalently expressed as odds, which is the ratio of the probability of one event to the probability of all other events. The odds of heads-up, for the tossed coin, are (1/2)/(1 - 1/2), which is equal to 1/1. This is expressed as "1 to 1 odds" and often written "1:1". Odds a:b for some event are equivalent to probability a/(a+b). For example, 1:1 odds are equivalent to probability 1/2, and 3:2 odds are equivalent to probability 3/5.

There remains the question of exactly what can be assigned probability, and how the numbers so assigned can be used; this is the question of probability interpretations. There are some who claim that probability can be assigned to any kind of an uncertain logical proposition; this is the Bayesian interpretation. There are others who argue that probability is properly applied only to propositions concerning sequences of repeated experiments or sampling from a large population; this is the frequentist interpretation. There are several other interpretations which are variations on one or the other of those, or which have less acceptance at present.

Distributions

A probability distribution is a function that assigns probabilities to events or propositions. For any set of events or propositions there are many ways to assign probabilities, so the choice of one distribution or another is equivalent to making different assumptions about the events or propositions in question.

There are several equivalent ways to specify a probability distribution. Perhaps the most common is to specify a probability density function. Then the probability of an event or proposition is obtained by integrating the density function. The distribution function may also be specified directly. In one dimension, the distribution function is called the cumulative distribution function. Probability distributions can also be specified via moments or the characteristic function, or in still other ways.

A distribution is called a discrete distribution if it is defined on a countable, discrete set, such as a subset of the integers. A distribution is called a continuous distribution if it has a continuous distribution function, such as a polynomial or exponential function. Most distributions of practical importance are either discrete or continuous, but there are examples of distributions which are neither.

Important discrete distributions include the discrete uniform distribution, the Poisson distribution, the binomial distribution, the negative binomial distribution, and the Maxwell-Boltzmann distribution. Important continuous distributions include the normal distribution, the gamma distribution, the Student's t-distribution, and the exponential distribution.

Probability in mathematics

Probability axioms form the basis for mathematical probability theory. Calculation of probabilities can often be determined using combinatorics or by applying the axioms directly. Probability applications include even more than statistics, which is usually based on the idea of probability distributions and the central limit theorem.

To give a mathematical meaning to probability, consider flipping a "fair" coin.
Intuitively, the probability that heads will come up on any given coin toss is "obviously" 50%; but this statement alone lacks mathematical rigor - certainly, while we might expect that flipping such a coin 10 times will yield 5 heads and 5 tails, there is no guarantee that this will occur; it is possible for example to flip 10 heads in a row. What then does the number "50%" mean in this context?

One approach is to use the law of large numbers. In this case, we assume that we can perform any number of coin flips, with each coin flip being independent - that is to say, the outcome of each coin flip is unaffected by previous coin flips. If we perform N trials (coin flips), and let $N_H$ be the number of times the coin lands heads, then we can, for any N, consider the ratio $N_H/N$. As N gets larger and larger, we expect that in our example the ratio $N_H/N$ will get closer and closer to 1/2. This allows us to "define" the probability $\Pr(H)$ of flipping heads as the mathematical limit, as N approaches infinity, of this sequence of ratios:

$\Pr(H) = \lim_{N \to \infty}{N_H \over N}$

In actual practice, of course, we cannot flip a coin an infinite number of times; so in general, this formula most accurately applies to situations in which we have already assigned an a priori probability to a particular outcome (in this case, our assumption that the coin was a "fair" coin). The law of large numbers then says that, given Pr(H), and any arbitrarily small number ε, there exists some number n such that for all N > n,

$\left| \Pr(H) - {N_H \over N}\right| < \epsilon$

In other words, by saying that "the probability of heads is 1/2", we mean that, if we flip our coin often enough, eventually the number of heads over the number of total flips will become arbitrarily close to 1/2; and will then stay at least as close to 1/2 for as long as we keep performing additional coin flips. Note that a proper definition requires measure theory, which provides means to cancel out those cases where the above limit does not provide the "right" result, or is even undefined, by showing that those cases have a measure of zero.

The a priori aspect of this approach to probability is sometimes troubling when applied to real world situations. For example, in the play Rosencrantz and Guildenstern are Dead by Tom Stoppard, a character flips a coin which keeps coming up heads over and over again, a hundred times. He can't decide whether this is just a random event - after all, it is possible (although unlikely) that a fair coin would give this result - or whether his assumption that the coin is fair is at fault.

Remarks on probability calculations

The difficulty of probability calculations lies in determining the number of possible events: counting the occurrences of each event, and counting the total number of possible events. Especially difficult is drawing meaningful conclusions from the probabilities calculated. An amusing probability riddle, the Monty Hall problem, demonstrates the pitfalls nicely.

To learn more about the basics of probability theory, see the article on probability axioms and the article on Bayes' theorem that explains the use of conditional probabilities in case where the occurrence of two events is related.
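Since the Monty Hall problem is mentioned above, here is a small simulation that exposes the riddle's pitfall (this sketch is mine, not part of the original article):

import random

def monty_hall(switch, trials=100000):
    """Estimate the win rate of the stay/switch strategies by simulation."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        pick = random.randrange(3)     # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))   # about 1/3
print(monty_hall(switch=True))    # about 2/3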
Applications of probability theory to everyday life

A major effect of probability theory on everyday life is in risk assessment and in trade on commodity markets. Governments typically apply probability methods in environmental regulation, where it is called "pathway analysis"; they often measure well-being using methods that are stochastic in nature, and choose projects to undertake based on their perceived probable effect on the population as a whole, statistically. It is not correct to say that statistics are involved in the modelling itself, as typically the assessments of risk are one-time and thus require more fundamental probability models, e.g. "the probability of another 9/11". A law of small numbers tends to apply to all such choices and to the perception of the effect of such choices, which makes probability measures a political matter.

A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices - which has ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely vs. less likely sends prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are not assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.

It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, a good example is the application of game theory, itself based strictly on probability, to the Cold War and the mutual assured destruction doctrine. It may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a democracy.

See also
• Bayesian probability
• Bernoulli process
• Cox's theorem
• Decision theory
• Game of chance
• Game theory
• Information theory
• Law of averages
• Law of large numbers
• Normal distribution
• Random fields
• Random variable
• Statistics
  □ List of statistical topics
• Stochastic process
• Wiener process
• Important publications in probability

Quotations
• Damon Runyon: "It may be that the race is not always to the swift, nor the battle to the strong - but that is the way to bet."
• Pierre-Simon Laplace: "It is remarkable that a science which began with the consideration of games of chance should have become the most important object of human knowledge." Théorie Analytique des Probabilités, 1812.
• Richard von Mises: "The unlimited extension of the validity of the exact sciences was a characteristic feature of the exaggerated rationalism of the eighteenth century" (in reference to Laplace). Probability, Statistics, and Truth, p. 9. Dover edition, 1981 (republication of second English edition, 1957).

Keywords: Probabilidade, Adrien-Marie Legendre, Blaise Pascal, Christiaan Huygens, Fuzzy logic, George Boole, Jakob Bernoulli, Karl Pearson, Kolmogorov
{"url":"http://encyclopedie-pt.snyke.com/articles/probabilidade.html","timestamp":"2014-04-17T00:57:52Z","content_type":null,"content_length":"29047","record_id":"<urn:uuid:b760b2ae-d507-4f88-a794-407bef779fef>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Distance vs. Time

1. To use the calculator to make a distance vs. time graph, and
2. To determine the relationship between a distance-time graph and speed

TI-82 or TI-83 calculator

Jill walks forward while Bob records her distance from where she started at one-second intervals. The data recorded are listed below. Using the recorded data, Jill's average speed during any time interval can be determined using the definition: average speed = distance traveled divided by elapsed time. Another way to examine the data is to construct a distance vs. time graph. The steepness of the graph gives you information about Jill's speed. The following are Bob's recorded data.

Total Time (s)   Total Distance (m)   Total Time (s)   Total Distance (m)
     0.0               0.0                 8.0               8.5
     1.0               1.0                 9.0              13.0
     2.0               2.0                10.0              17.5
     3.0               3.0                11.0              22.0
     4.0               4.0                12.0              24.0
     5.0               4.0                13.0              26.0
     6.0               4.0                14.0              28.0
     7.0               4.0                15.0              30.0

To create a list using the TI-82 or TI-83:
1. Press the [STAT] key. (Make sure Edit is highlighted in the EDIT menu.)
2. Press [ENTER] or [1]. (This will select the Edit menu; the first entry under list L1 should be highlighted.) If there are data in L1 or L2, use the arrow keys to move the cursor to the list heading, L1, press [CLEAR] then [ENTER]. Move the cursor to the list heading, L2, press [CLEAR] then [ENTER].
3. Enter 0 for the first time data value. (Note that the value that you are entering appears on the bottom line.)
4. Press [ENTER].
5. Enter the remaining time data in the same manner.
6. Press [>] (right arrow key; this moves the cursor to the first entry of list L2).
7. Enter the position data in L2. (Check to see that both L1 and L2 have the same number of entries.)

To make corrections to a list: Press [STAT] [ENTER], use the arrow keys to position the cursor over the entry that you wish to modify; type in the new entry and press [ENTER].

To graph one data list vs. another:
1. Press [2nd][STATPLOT]. (This command displays the STAT PLOTS screen; note that there are three possible plots available.)
2. Press [1]. (This command displays the Plot1 screen.)
3. The 5 options you may select are:
   a) On vs. Off. On indicates that the plot information selected will be activated when [GRAPH] is pressed; to turn On, position the cursor over On and press [ENTER]. Off indicates that this screen will not be activated when [GRAPH] is pressed.
   b) Type choices are (in order): scatter plot, line/curve connecting data points, box plot, and histogram (select the second choice - connecting data points).
   c) Xlist determines which list will be plotted on the X (horizontal) axis; to select list L1 on the TI-82, use the arrow keys to move the flashing cursor over the desired list and press [ENTER] (the highlighted non-flashing entry is selected); on the TI-83 simply enter L1.
   d) Ylist determines which list will be plotted on the Y (vertical) axis; to select list L2, use the arrow keys to move the flashing cursor over the desired list and press [ENTER] (the highlighted non-flashing entry is selected); on the TI-83 simply enter L2.
   e) Mark determines the shape/size of the data point markers on the graph.
4. After selecting the desired options (remember to turn the plot ON!), press [GRAPH].
5. Your graph may not display all of your data points (because the graphing window is not set properly); press [ZOOM] [9] to automatically select a window that displays the full range of your data.

Graphing Distance vs. Time Lab Report

1. What quantity is plotted on the y-axis?___________________________________________
   On the x-axis?_______________________________________________________________
2.
On the given axes sketch the graph as seen on your screen. Label the axes.

3. Sometimes dividing a graph into regions makes it easier to analyze. Look for patterns in the data. Examine your graph and draw vertical lines that divide it into regions. How many different patterns did you mark? __________________

4. Using the TRACE function and arrow keys to look at your graph, what was Jill doing during the time interval 4 s - 7 s? What was Jill's average speed during that time? Describe the shape of the graph for this time interval.
__________________________________________________________________________

5. What was Jill's average speed for:
   (A) the first four seconds?___________________________________________
   (B) the time interval 7 seconds to 11 seconds?____________________________
   (C) the last four seconds?___________________________________________
   (D) the entire trip? _________________________________________________

6. Suppose the distance for 16 s was 29 m. Describe the motion of Jill during that time.

7. Look at your answers in #4 and #5 above. What conclusion can you draw about the relationship between the shape of the distance-time graph and Jill's speed?

8. Why might you prefer to show the data on a graph instead of in a table?

copyright 2009 The North Carolina School of Science and Mathematics
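For readers working through the lab away from a calculator, the interval speeds can also be checked programmatically. This small sketch is not part of the lab; it simply encodes Bob's table and applies the lab's definition (average speed = change in distance divided by change in time), using the 4 s - 7 s interval from question 4 as the example:

  #include <iostream>

  // Bob's recorded distances (m) at whole seconds t = 0 .. 15.
  const double dist[16] = {0.0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0, 4.0,
                           8.5, 13.0, 17.5, 22.0, 24.0, 26.0, 28.0, 30.0};

  // Average speed between two sample times, in m/s.
  double avgSpeed(int t1, int t2) {
      return (dist[t2] - dist[t1]) / (t2 - t1);
  }

  int main() {
      std::cout << "Average speed, 4 s to 7 s: "
                << avgSpeed(4, 7) << " m/s\n";  // the flat region of the graph
  }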
{"url":"http://courses.ncssm.edu/physics/Labs/PH355/CBL/CBL_DVST.htm","timestamp":"2014-04-21T09:35:43Z","content_type":null,"content_length":"11804","record_id":"<urn:uuid:a991cd55-a75d-452d-8bb8-dc8c5eb08ebe>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Could power generated from swimming laps create the next wave in sustainable energy? Wake Forest sophomore Yinger "Eagle" Jin ('16) demonstrates his wave-powered electric generator in the pool in Reynolds Gym. The system harnesses the wave action as it compresses air inside the tube, which turns a small turbine that generates electricity. With the help of an oscillating water column and a summer undergraduate research grant, sophomore Yinger 'Eagle' Jin discovered waves made by swimmers in the campus pool produce enough electricity to power 10 100-watt lightbulbs for a day. Jin's research, inspired by Assistant Professor of Mathematics Sarah Mason's first-year seminar class on the mathematics of sustainable energy and funded by the URECA Center, lends new insights into how wave energy is captured and used. Jin constructed an oscillating water column, one of the most productive wave energy converters available, to test the amount of electricity that could feasibly be produced by the pool's waves. It uses a large volume of moving water (in the case of Wake Forest's Reynolds Gym pool, the waves generated by daily swimmers) as a piston in a cylinder. Air is forced out of the column as a wave rises and fresh air is drawn in as the wave falls. This movement of air turns a turbine at the top of the column, which ultimately converts the wave energy to electricity. "During the class we looked at the amount of energy that can be produced from sustainable energy sources on Wake Forest's campus," Mason said. "Wave energy was something we talked about but obviously we don't have an ocean here and lakes don't typically generate many waves." Jin, an avid swimmer, thought there might be enough waves in the campus pool to generate a small amount of electricity. "We are talking a very small scale, but recreational swimmers produce a decent amount of waves," Jin said. "The concept is similar to the idea that at a regular gym you have exercise bikes that are powered by someone spinning the pedals." Jin calculated that on an average day during the school year, the swimming pool is open 10 hours and 10 people swim each hour. He said if each person swims butterfly stroke, collectively they will generate enough waves to produce 10 kilowatt-hours of electricity. Jin used his water column to produce a small amount of electricity and to measure the period and height of waves in the pool over the course of a day. He then used this data to build a mathematical model for determining electrical energy output from waves. Rob Erhardt, a statistician in Wake Forest's math department, helped Jin and Mason with their calculations and applying the math to the specific case of the pool. Mason said the plan is to follow up on the research she and Jin conducted over the summer with a trip to North Carolina's coast. "There is certainly room for continuation in Eagle's project; in particular one publishable goal is to calculate how much energy could be produced through wave energy off the coast of North Carolina," she said. "We have computed rough estimates but would need to factor in more details and be more precise if we wanted to get an accurate prediction." Nevertheless, she said their initial estimates show North Carolina waves have tremendous energy potential.

Reader comment (Dec 16, 2013):
"...enough to power 10 100-watt light bulbs for a day..."
"...collectively they will generate enough waves to produce 10 kilowatt-hours of electricity."
10 × 100 W = 1 kW
1 kW × 24 h = 24 kWh
24 kWh ≠ 10 kWh

I guess in the first paragraph, they are assuming the bulb is only on for 10 hours of the day...
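The commenter's arithmetic is right as far as it goes; one hedged reading (mine, not stated in the article) is that the two figures are consistent only if the bulbs run for the pool's 10 open hours rather than a full 24-hour day:

  10 bulbs x 100 W x 10 h = 10 kWh   (matches the article's 10 kWh estimate)
  10 bulbs x 100 W x 24 h = 24 kWh   (the mismatch the commenter flags)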
{"url":"http://phys.org/news/2013-12-power-laps-sustainable-energy.html","timestamp":"2014-04-17T22:51:02Z","content_type":null,"content_length":"70859","record_id":"<urn:uuid:d1a17ba6-6c8f-41d2-8af9-f20a1bf06d1d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
integrate -x(1+2x)^5 dx by parts
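The thread shows no posted answer; here is a hedged worked solution (mine, not from the page). Take $u=-x$ and $dv=(1+2x)^5\,dx$, so $du=-dx$ and $v=\frac{(1+2x)^6}{12}$. Then
\[ \int -x(1+2x)^5\,dx = -\frac{x(1+2x)^6}{12} + \frac{1}{12}\int (1+2x)^6\,dx = -\frac{x(1+2x)^6}{12} + \frac{(1+2x)^7}{168} + C, \]
which can be verified by differentiation.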
{"url":"http://openstudy.com/updates/5132e1e2e4b093a1d948e3d8","timestamp":"2014-04-18T20:53:17Z","content_type":null,"content_length":"64471","record_id":"<urn:uuid:d05b92f0-aef9-4bc0-a7c4-fa7fc15e387e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple C++ program for the 21 game

The game "21" is played as a misère game with any number of players who take turns saying a number. The first player says "1" and each player in turn increases the number by 1, 2, or 3, but may not exceed 21; the player forced to say "21" loses. This can be modeled as a subtraction game with a heap of 21–n objects. The winning strategy for this game is to say a multiple of 4, and after that it is guaranteed that the other player will have to say 21, barring a mistake from the first player.

This game has a Sprague-Grundy value of zero, i.e., it is biased in favor of the 2nd player, as s/he can get to 4 first and then control the game from there: no matter what, the 1st player will never be able to say a multiple of 4, as s/he is only allowed increments of either 1, 2 or 3.

Proof (via a sample game of 21):

Player   Number
1        1, 2 or 3
2        4
1        5, 6 or 7
2        8
1        9, 10 or 11
2        12
1        13, 14 or 15
2        16
1        17, 18 or 19
2        20
1        21

I would like to write a program in Turbo C++ for this. Would anyone please help me with the algorithm of this program?? 10x in advance
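No replies survive in the thread, so as a starting point here is a minimal sketch of my own (written against standard C++ rather than Turbo C++, so the headers may need adjusting). The computer plays second and follows the multiple-of-4 strategy described above:

  #include <iostream>

  int main() {
      int total = 0;  // the last number said; the game ends at 21

      while (total < 21) {
          // Human moves first, raising the total by 1, 2 or 3.
          int add = 0;
          std::cout << "Total is " << total << ". Add 1, 2 or 3: ";
          std::cin >> add;
          if (!std::cin || add < 1 || add > 3 || total + add > 21) {
              std::cout << "Invalid move.\n";
              std::cin.clear();
              std::cin.ignore(1000, '\n');
              continue;
          }
          total += add;
          if (total == 21) {
              std::cout << "You said 21 - you lose!\n";
              break;
          }

          // Computer strategy: move the total to the next multiple of 4.
          // The human just left total % 4 != 0, so the required
          // increment 4 - total % 4 is always 1, 2 or 3.
          total += 4 - total % 4;
          std::cout << "Computer says " << total << ".\n";
          // The computer only ever lands on 4, 8, 12, 16, 20 - never 21.
      }
  }

Because the computer always answers onto the next multiple of 4, it visits 4, 8, 12, 16, 20 in turn and the human is eventually forced to say 21, exactly as in the table above.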
{"url":"http://www.cplusplus.com/forum/beginner/71167/","timestamp":"2014-04-19T22:20:37Z","content_type":null,"content_length":"6891","record_id":"<urn:uuid:8eb31e4c-dc57-4367-991f-4e4189566e14>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
On the unpredictability of bits of the elliptic curve Diffie–Hellman scheme - In ASIACRYPT 2001, volume 2248 of LNCS, 2001

"Abstract. We study a class of problems called Modular Inverse Hidden Number Problems (MIHNPs). The basic problem in this class is the following: given many pairs (x_i, msb_k((α + x_i)^{-1} mod p)) for random x_i ∈ Z_p, the problem is to find α ∈ Z_p (here msb_k(x) refers to the k most significant bits of x). We describe an algorithm for this problem when k > (log_2 p)/3 and conjecture that the problem is hard whenever k < (log_2 p)/3. We show that assuming hardness of some variants of this MIHNP problem leads to very efficient algebraic PRNGs and MACs." Cited by 12 (1 self).

- EUROCRYPT 2009, volume 5479 of LNCS, 2009
"Abstract. In this paper, we study a quite simple deterministic randomness extractor from random Diffie-Hellman elements defined over a prime order multiplicative subgroup G of a finite field Z_p (the truncation), and over a group of points of an elliptic curve (the truncation of the abscissa). Informally speaking, we show that the least significant bits of a random element in G ⊂ Z_p^* or of the abscissa of a random point in E(F_p) are indistinguishable from a uniform bit-string. Such an operation is quite efficient, and is a good randomness extractor, since we show that it can extract nearly the same number of bits as the Leftover Hash Lemma can do for most elliptic curve parameters and for large subgroups of finite fields. To this aim, we develop a new technique to bound exponential sums that allows us to double the number of extracted bits compared with previous known results proposed at ICALP'06 by Fouque et al. It can also be used to improve previous bounds proposed by Canetti et al. One of the main applications of this extractor is to mathematically prove an assumption proposed at Crypto '07 and used in the security proof of the Elliptic Curve Pseudo Random Generator proposed by the NIST. The second most obvious application is to perform efficient key derivation given Diffie-Hellman elements." Cited by 6 (0 self).

"Abstract. We show that in certain natural computational models every bit of a message encrypted with the NtruEncrypt cryptosystem is as secure as the whole message." Cited by 2 (0 self).

- In Appl. Algebra in Engin., Commun. and Computing, 2006
"Let F_p be a finite field of p elements, where p is prime. The bit security of the Diffie-Hellman function over subgroups of F_p^* and of an elliptic curve over F_p is considered. It is shown that if the Decision Diffie-Hellman problem is hard in these groups, then the two most significant bits of the Diffie-Hellman function are secure. Under the weaker assumption of the computational (rather than decisional) hardness of the Diffie-Hellman problems, only about (log p)^{1/2} bits are known to be secure. Keywords: Diffie-Hellman protocol, bit security, exponential sums." Cited by 1 (0 self).

"Abstract. We prove that if one can predict any of the bits of the input to an elliptic curve based one-way function over a finite field, then we can invert the function. In particular, our result implies that if one can predict any of the bits of the input to a classical pairing-based one-way function with non-negligible advantage over a random guess then one can efficiently invert this function and thus, solve the Fixed Argument Pairing Inversion problem (FAPI-1/FAPI-2). The latter has implications on the security of various pairing-based schemes such as the identity-based encryption scheme of Boneh-Franklin, Hess' identity-based signature scheme, as well as Joux's three-party one-round key agreement protocol. Moreover, if one can solve FAPI-1 and FAPI-2 in polynomial time then one can solve the Computational Diffie-Hellman problem (CDH) in polynomial time. Our result implies that all the bits of the functions defined above are hard-to-compute assuming these functions are one-way. The argument is based on a list-decoding technique via discrete Fourier transforms due to Akavia-Goldwasser-Safra as well as an idea due to Boneh-Shparlinski. Keywords: One-way function, hard-to-compute bits, bilinear pairings, elliptic curves, fixed argument pairing inversion problem, Fourier transform, list decoding." Cited by 1 (0 self).

- 2003
"The theory of elliptic curves is a classical topic in many branches of algebra and number theory, but recently it is receiving more attention in cryptography. An elliptic curve is a two-dimensional (planar) curve defined by an equation involving a cubic power of coordinate x and a square power of coordinate y. One class of these curves is ..."

"In this paper, we introduce the intermediate hashed Diffie-Hellman (IHDH) assumption, which is weaker than the hashed DH (HDH) assumption (and thus the decisional DH assumption), and is stronger than the computational DH assumption. We then present two public key encryption schemes with short ciphertexts which are both chosen-ciphertext secure under this assumption. The short-message scheme has smaller ciphertexts than the Kurosawa-Desmedt (KD) scheme, and the long-message scheme is a KD-size scheme (with arbitrary plaintext length) which is based on a weaker assumption than the HDH assumption. Key words: public key encryption, chosen-ciphertext security, Diffie-Hellman assumption."

- 2013
"A long-standing open problem in cryptography is proving the existence of (deterministic) hardcore predicates for the Diffie-Hellman problem defined over finite fields. In this paper we make progress on this problem by defining a very natural variation of the Diffie-Hellman problem over F_{p^2} and proving the unpredictability of every single bit of one of the coordinates of the secret DH value. To achieve our result we modify an idea presented at CRYPTO'01 by Boneh and Shparlinski [4] originally developed to prove that the LSB of the Elliptic Curve Diffie-Hellman problem is hard. We extend this idea in two novel ways: 1. We generalize it to the case of finite fields F_{p^2}; 2. We prove that any bit, not just the LSB, is hard using the list decoding techniques of Akavia et al. [1] (FOCS'03) as generalized at CRYPTO'12 by Duc and Jetchev [6]. In the process we prove several other interesting results: • Our result holds also for a larger class of predicates, called segment predicates in [1]; • We extend the result of Boneh and Shparlinski to prove that every bit (and every segment predicate) of the Elliptic Curve Diffie-Hellman problem is hard-core; • We define the notion of partial one-way function over finite fields F_{p^2} and prove that every bit (and every segment predicate) of one of the input coordinates for these functions is hard-core."

"Abstract. Many techniques for randomness extraction over finite fields were proposed by various authors such as Fouque et al. and Canetti et al. At Eurocrypt'09, these previous works were improved by Chevalier et al. over a finite field F_p, where p is a prime. But their papers don't study the case where the field is not prime, such as binary fields. In this paper, we present a deterministic extractor for a multiplicative subgroup of F_{p^n}^*, where p is a prime. In particular, we show that the k first F_2-coefficients of a random element in a subgroup of F_{2^n}^* are indistinguishable from a random bit-string of the same length. Hence, under the Decisional Diffie-Hellman assumption over binary fields, one can deterministically derive a uniformly random bit-string from a Diffie-Hellman key exchange in the standard model. Over F_p, Chevalier et al. use the "Polya-Vinogradov inequality" to bound incomplete character sums, but over F_{p^n}^* we use the "Winterhof inequality" to bound incomplete character sums. Our proposition is a good deterministic extractor even if the length of its output is less than what one can have with the leftover hash lemma and universal hash functions. Our extractor can be used in any cryptographic protocol or encryption scheme."

"Abstract. We generalize and extend results obtained by Boneh and Venkatesan in 1996 and by González Vasco and Shparlinski in 2000 on the hardness of computing bits of the Diffie-Hellman key, given the public values. Specifically, while these results could only exclude (essentially) error-free predictions, we here exclude any non-negligible advantage, though for larger fractions of the bits. We can also demonstrate a trade-off between the tolerated error rate and the number of unpredictable bits. Moreover, by changing the computational model, we show that even a very small proportion of the most significant bits of the Diffie-Hellman secret key cannot be retrieved from the public information by means of a Las Vegas type algorithm, unless the corresponding scheme is weak itself."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=316500","timestamp":"2014-04-17T02:01:52Z","content_type":null,"content_length":"37412","record_id":"<urn:uuid:5c8a4f2f-29eb-4a0c-a010-5eb5d5fdb886>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculate Call Center Staffing with Excel (Erlang formula)

By Joannès Vermorel, May 2008

This guide explains how to optimize the number of agents to reach the desired service level. This guide applies to call centers and contact centers. The theory is illustrated with Microsoft Excel. Advanced notes are available for software developers who would like to reproduce the theory in a custom application.

Download: erlang-by-lokad.xls (Microsoft Excel Spreadsheet)

When opening the spreadsheet, Excel will warn you that this document contains macros. Those macros correspond to the Erlang-C formula (see explanation below). You need to activate the macros in order to reproduce the calculations.

Modeling the inbound call activity

The inbound call activity can be modeled with a few variables:
• The average call duration, noted t, is known. t is located on B7.
• The number of agents, noted m, is known. m is located on B8.
• The call arrival rate, noted λ, is known. The arrival rate is the number of incoming calls per second. In the spreadsheet, λ is located on B9.

In the following, based on those 3 variables, plus a couple of statistical assumptions, we will be able to compute
• the average agent occupancy.
• the probability that a call has to wait.
• the probability that a call waits for more than a specified time.

The most important statistical assumption is that the incoming calls behave statistically like a Poisson process. Without entering too much into the details, this assumption is reasonable if the call events are independent. For example, if we consider the case of a call center that receives calls from viewers trying to answer a question on a TV show game, then clearly the Poisson assumption is not going to hold, because all the calls get simultaneously triggered by the same event (the TV show).

Computing indicators with Erlang

Based on the assumptions introduced in the previous section, we will now calculate a couple of insightful indicators that reflect the call center activity.

The period length represents the duration of the time window being considered for the analysis. In the illustration here above, it's 900 s, which is to say 15 min, a very frequently used aggregation level among call centers.

The traffic intensity is a number that represents the minimal number of agents required to address all the incoming calls. If there are fewer agents than the traffic intensity, then mechanically, calls will be dropped. The traffic intensity is computed as the product of the call arrival rate λ multiplied by the average call duration t.

The average agent occupancy (or utilization) is a ratio that expresses the amount of time spent by the agents actually answering calls compared to the total time (which might include idle periods for the agents). The agent occupancy can be simply computed by dividing the traffic intensity by the number of agents.

The probability to wait (from the caller's viewpoint) expresses the probability that no agent will be readily available (i.e. idle) to answer an incoming call. This value is obtained through the Erlang-C formula. The terms of the Erlang-C formula are beyond the scope of this guide, but you can refer to Wikipedia for the details. In the sample spreadsheet, the probability to wait is computed using the macro function implemented in Visual Basic. The function takes two arguments: first the number of agents, and second the traffic intensity.
The average speed of answer (ASA) represents the average wait time for a call. The ASA computation is based on the Erlang-C formula. In the sample spreadsheet, the ASA is computed using the macro function implemented in Visual Basic. The function takes 3 arguments: first the number of agents, second the traffic intensity, and third the average call duration.

The probability to wait less than a target time is self-explanatory. Like for the probability to wait, the detail of the actual formula is beyond the scope of this guide. In the sample spreadsheet, the probability is computed from the desired wait time (i.e. the target time), named tt, provided as an input. The computation uses the macro function which takes 4 arguments: first the number of agents, second the traffic intensity, third the average call duration, and fourth the target time.

Practical staffing with Excel

In the previous sections, we have seen how to compute useful indicators to analyze the call center activity. Yet, the Excel layout (see screenshot here above) was chosen for the sake of clarity, and is not suited for practical call center staffing. In this section we propose to use a much more compact layout, illustrated in the screenshot below. Within the sample spreadsheet, the upper-left corner of the illustration here above is an empty cell. The computations performed in this table are just the straightforward application of the formulas introduced in the previous section.

A couple of remarks:
• we assume constant average call duration t and constant target time tt.
• we use static Excel cell references, i.e. $A$1 instead of A1, for the variables (which facilitates cut-and-pasting the formulas).
• agent counts can be freely optimized to adjust the expected service levels.
• cell format properties are adjusted to avoid displaying too many decimals.
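The guide leaves the Erlang-C formula itself to the spreadsheet's VBA macros, which are not reproduced on this page. For readers who want the computation outside Excel, here is a sketch in C++ of the standard Erlang-C expressions (my implementation, not Lokad's actual macro code; the inputs in main are made-up examples):

  #include <cmath>
  #include <iostream>

  // Probability that an incoming call has to wait (Erlang-C).
  // m = number of agents, a = traffic intensity (lambda * t), requires a < m.
  double erlangC(int m, double a) {
      // Accumulate a^k / k! incrementally; a naive factorial would overflow.
      double term = 1.0, sum = 1.0;                 // k = 0 term
      for (int k = 1; k < m; ++k) {
          term *= a / k;                            // term = a^k / k!
          sum  += term;
      }
      double top = term * (a / m) * (m / (m - a));  // (a^m / m!) * m / (m - a)
      return top / (sum + top);
  }

  // Average speed of answer: expected wait in seconds.
  double asa(int m, double a, double t) {
      return erlangC(m, a) * t / (m - a);
  }

  // Probability that a call waits no longer than the target time tt.
  double serviceLevel(int m, double a, double t, double tt) {
      return 1.0 - erlangC(m, a) * std::exp(-(m - a) * tt / t);
  }

  int main() {
      double t = 180.0;       // average call duration in seconds (assumed)
      double lambda = 0.1;    // calls per second (assumed)
      double a = lambda * t;  // traffic intensity: 18 Erlangs
      int m = 20;             // number of agents
      std::cout << "P(wait)      = " << erlangC(m, a) << "\n"
                << "ASA (s)      = " << asa(m, a, t) << "\n"
                << "P(wait<=20s) = " << serviceLevel(m, a, t, 20.0) << "\n";
  }

The incremental product in the loop is the design point worth noting: it keeps the terms of the Erlang-C sum in floating-point range even for large agent counts, where a direct a^m / m! would overflow.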
{"url":"http://www.lokad.com/calculate-call-center-staffing-with-excel","timestamp":"2014-04-18T08:02:21Z","content_type":null,"content_length":"18260","record_id":"<urn:uuid:fec1fabb-1300-4415-908d-3f92bd204601>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
throw :: (Monad m, Exception e) => e -> Pipe a b m r

Throw an exception within the Pipe monad. An exception thrown with throw can be caught by catch with any base monad. If the exception is not caught in the Pipeline at all, it will be rethrown as a normal Haskell exception when using runPipe. Note that runPurePipe returns the exception in an Either value, instead.

catch :: (Monad m, Exception e)
      => Pipe a b m r           -- Pipe to run
      -> (e -> Pipe a b m r)    -- handler function
      -> Pipe a b m r

Catch an exception within the Pipe monad. This function takes a Pipe, runs it, and if an exception is raised it executes the handler, passing it the value of the exception. Otherwise, the result is returned as normal. For example, given a Pipe:

 reader :: Pipe () String IO ()

we can use catch to resume after an exception. For example:

 safeReader :: Pipe () (Either SomeException String) IO ()
 safeReader = catch (reader >+> pipe Right) $ \e -> do
   yield $ Left e

Note that only the initial monadic actions contained in a handler are guaranteed to be executed. Anything else is subject to the usual termination rule of Pipes: if a Pipe at either side terminates, the whole pipeline terminates.

bracket :: Monad m
        => m r                  -- action to acquire resource
        -> (r -> m y)           -- action to release resource
        -> (r -> Pipe a b m x)  -- Pipe to run in between
        -> Pipe a b m x

Allocate a resource within the base monad, run a Pipe, then ensure the resource is released. The typical example is reading from a file:

 bracket
   (openFile "filename" ReadMode)
   hClose
   (\handle -> do
      line <- lift $ hGetLine handle
      yield line)

bracket_ :: Monad m
         => m r          -- action to run first
         -> m y          -- action to run last
         -> Pipe a b m x -- Pipe to run in between
         -> Pipe a b m x

A variant of bracket where the return value from the allocation action is not required.

bracketOnError :: Monad m
               => m r                  -- action to acquire resource
               -> (r -> m y)           -- action to release resource
               -> (r -> Pipe a b m x)  -- Pipe to run in between
               -> Pipe a b m x

Like bracket, but only performs the "release" action if there was an exception raised by the Pipe.

finally :: Monad m
        => Pipe a b m r -- Pipe to run first
        -> m s          -- finalizer action
        -> Pipe a b m r

A specialized variant of bracket with just a computation to run afterwards.

onException :: Monad m
            => Pipe a b m r -- Pipe to run first
            -> Pipe a b m s -- Pipe to run if an exception happens
            -> Pipe a b m r

Like finally, but only performs the final action if there was an exception raised by the Pipe.
{"url":"http://hackage.haskell.org/package/pipes-core-0.1.0/docs/Control-Pipe-Exception.html","timestamp":"2014-04-20T07:43:35Z","content_type":null,"content_length":"13250","record_id":"<urn:uuid:179f3c30-6f11-4438-b018-2cb51517e778>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
A last-minute extra talk was scheduled on the Wednesday morning at the Banff meeting – Assaf Naor on Superexpanders. I would have loved to attend that talk, but I had a prior commitment to Aftenroe, a beautiful eight-pitch limestone climb above the road to Lake Louise. Fortunately, thanks to the video system at BIRS, this doesn't mean that I missed the talk. One can find the excellent video of the talk here. It's a great presentation, as gratifying and elegant as Aftenroe's third pitch. Below, I will try to describe some of the ideas.

Assaf begins by recalling the classical definition of an expander sequence of graphs (for simplicity, all his expanders are sequences of 3-regular graphs). It's a sequence of finite graphs \(X_n\), of size tending to infinity, such that for each subset \(A\subseteq X_n\) comprising half the vertices or fewer, the size of the boundary of \(A\), that is, the number of edges connecting \(A\) to its complement, is at least some uniform constant times the size of \(A\) itself.

Then he points out that this definition is equivalent to the following: for any map \(f\colon X_n\to {\mathbb R}^k \), and any \(p\in [1,\infty)\), one has
\[ \frac{1}{n^2} \sum_{(v,v')\in V\times V} d(f(v),f(v'))^p \approx \frac{1}{n} \sum_{(v,v')\in E} d(f(v),f(v'))^p. \]
Here \(V,E\) are the sets of vertices and edges respectively, and the \(\approx\) symbol means that each side is bounded (independent of \(n\)) by a constant multiple of the other. The statements for various \(p\) are all equivalent. Considering \(\{0,1\}\)-valued functions with \(k=p=1\) shows they imply the classical definition of expander; the \(k=1,\ p=2\) statement is just the "spectral gap" version of expansion; for \(p=2\), the \(k=1\) case trivially implies the arbitrary \(k\) case. Notice that the force of the result is this: the average distance between the points of a configuration of \(n\) points in Euclidean space can be effectively computed just by sampling the distances along a suitable collection of \(O(n)\) edges.

Anyhow, from this perspective there is an obvious question: what is so special about Euclidean space? In fact, let \(X\) be any metric space. Call a sequence of 3-regular graphs an expander with respect to \(X\) if
\[ \frac{1}{n^2} \sum_{(v,v')\in V\times V} d(f(v),f(v'))^2 \approx \frac{1}{n} \sum_{(v,v')\in E} d(f(v),f(v'))^2 \]
where now \(f\) runs over maps from \(X_n\) to \(X\). It is easy to see that:

• Whenever \(X\) has at least two points, a sequence that is an expander wrt \(X\) is also an expander in the classical sense.
• No sequence of graphs (of size tending to infinity) is an expander wrt itself, and (as a consequence) no sequence is an expander wrt \(X=\ell^\infty\).

The whole machinery can be generalized by introducing a nonlinear spectral theory. Let \(K\) be a symmetric kernel on some set \(X\) (think of the metric, or some power thereof, on a metric space). Let \(A\) be an \(n\times n\) symmetric stochastic matrix (think of the normalized adjacency matrix of a regular graph). Then define the nonlinear Poincare constant of \(A\) with respect to \(K\), denoted \(\gamma(A,K)\), to be the best (infimal) constant \(\gamma\) for which the following inequality holds for all \(n\)-tuples \(x_1,\ldots,x_n\in X\):
\[ \frac{1}{n^2} \sum_{i,j=1}^n K(x_i,x_j) \le \frac{\gamma}{n} \sum_{i,j=1}^n a_{ij}K(x_i,x_j). \]
Then the graphs \(X_n\) are expanders wrt \(X\) if the Poincare constants of their normalized adjacency matrices, with respect to the kernel \(d^2\) on \(X\), are uniformly bounded above.
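As a sanity check (my addition, not from the talk or the post): for the real line with the kernel \(K(x,y)=(x-y)^2\), the nonlinear Poincare constant collapses to the familiar spectral gap. Both sides of the inequality are translation-invariant, so we may assume the tuple is centered, \(\sum_i x_i = 0\); then
\[ \frac{1}{n^2}\sum_{i,j}(x_i-x_j)^2 = \frac{2}{n}\|x\|^2, \qquad \frac{1}{n}\sum_{i,j}a_{ij}(x_i-x_j)^2 = \frac{2}{n}\langle x,(I-A)x\rangle \ge \frac{2}{n}\bigl(1-\lambda_2(A)\bigr)\|x\|^2, \]
with equality at a second eigenvector, so \(\gamma(A,(x-y)^2) = 1/(1-\lambda_2(A))\). Uniform boundedness of these constants is thus exactly the classical spectral gap condition, matching the \(k=1,\ p=2\) statement above.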
I’ll continue the discussion in the next post. Mendel, Manor, and Assaf Naor. “Nonlinear Spectral Calculus and Super-expanders.” Publications Mathématiques de l’IHÉS (n.d.): 1–95. Accessed August 8, 2013. doi:10.1007/s10240-013-0053-2.
{"url":"http://sites.psu.edu/johnroe/2013/08/22/superexpanders/","timestamp":"2014-04-17T15:38:21Z","content_type":null,"content_length":"38438","record_id":"<urn:uuid:a023bb4b-5c28-4e42-8fda-565466e9ce76>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
Janine wants to paint just the sides of a cylindrical pottery vase that has a height of 45 cm and a diameter of 14 cm. To the nearest whole number, find the number of square centimeters she will need to paint. Explain the method you would use to find the lateral area.

- \(\pi d h\)
- Is 1979.2 right?
- Lateral Area = 2 * pi * r * h (circumference of the base times its height). LA = 2 * pi * 7 * 45. LA = ? ---------------> I think it is 1979 ish. Check @OpenSessame
- I did my test! but i used the formula pi*d*h which is basically the same thing.
- It is the same thing. Are you saying that you have finished the Geometry final?
- Ill get my results in a week!
- I hope you did well. I feel that you did.
- i do too:)
- When do you begin Algebra 2 ?
- I already did algebra 2:)
- What math course comes next? Or, are you finished?
- Next is college math, but that isnt for another year. So yes im done!
- You might want to work a few assorted math problems during that year so that you won't begin college math stone cold.
- I got a book of 1000 math problems so yea!
- And, a 1000 solutions also?
- Yes, with 1000 explanations!
- That is good. Is it one of the Schaum outline type books?
- Idk, it was like 2 dollars and seemed pretty helpful. Nice to keep myself busy!
- The price is right.
- You could come to Open Study, post ten problems a day for 100 days and get discussions of the problems. That might be fun, or my idea of fun.
- Lol, pretty good idea if i have the time! ill try that out!
- Or ill just stick to helping others so i can sharpen my skills too.
- Now, that is a better idea. It will be good to see you around.
- Thanks, i got one question though...Is it possible to get a 100 score on openstudy lol?
- Yes. One person has that score. I think it was arbitrarily assigned. That is, not computer calculated. Nobody knows how the Smart Score stuff works. I was grandfathered in. A year or so ago there was a different scoring method based solely on medal count. I'll see if I can find the icon of the 100 person.
- Lol okay i was just wondering cause seems like most people are stuck at 99.
- And whats a ambassador on here? so many things that arent explained.
- Here's the S Score of 100 person.
- Pretty coool to be them, i though it was like an openstudy makers account or something. Ever need help seems like hes the guy to ask!
- Thats it? Okay...Thanks man!
- Be sure and read the closed questions, too. The Feedback section is very interesting reading. All sorts of stuff there. http://openstudy.com/study#/groups/OpenStudy%20Feedback
{"url":"http://openstudy.com/updates/5215a67be4b0450ed75e5fc5","timestamp":"2014-04-19T13:07:21Z","content_type":null,"content_length":"113387","record_id":"<urn:uuid:b3edea39-1c1f-4f07-9bf9-bf1b99f3afd0>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] The semantics of set theory

Kanovei kanovei at wmwap1.math.uni-wuppertal.de
Sun Oct 6 12:38:58 EDT 2002

From: Ralf Schindler <rds at logic.univie.ac.at>
On Thu, 3 Oct 2002, Kanovei wrote:
> Generally, there is no way to define ZFC-truth other than to
> extend the language of ZFC.
> Three typical methods are known.
> Third, consider a second-order impredicative theory of classes.
One doesn't need an *im*predicative theory of classes here, one can
do with predicative classes. (A class is predicative iff it can be
defined by a fmla of set theory + parameters for sets.) --Ralf

To define that a set theoretic formula A (with parameters or even
without parameters) "is true", one has to claim the existence of a
class satisfying certain known properties and containing A.
Such a class itself cannot be definable, e.g. predicative,
if we want to treat A as a free variable.
Therefore, in this case, predicative classes do not suffice.

If, on the contrary, we are going to consider A(x_1,...,x_n)
as a fixed, metamathematically given, formula, then
predicative classes suffice, but the whole problem results in the
tautology: "A is true" is replaced by A.
{"url":"http://www.cs.nyu.edu/pipermail/fom/2002-October/005961.html","timestamp":"2014-04-19T17:07:50Z","content_type":null,"content_length":"3610","record_id":"<urn:uuid:30f86773-ce7a-4955-af22-33488386ef33>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Fifth Annual Alice T. Schafer Prize - AWM

Association for Women in Mathematics
July 1994, San Diego CA

In 1990, the Executive Committee of the Association for Women in Mathematics (AWM) established the annual Alice T. Schafer Prize for excellence in mathematics by an undergraduate woman. The prize is named for former AWM president and one of its founding members, Alice T. Schafer (Professor Emerita from Wellesley College), who has contributed a great deal to women in mathematics throughout her career. The criteria for selection include, but are not limited to, the quality of the nominees' performance in mathematics courses and special programs, an exhibition of real interest in mathematics, the ability to do independent work, and, if applicable, performance in mathematical competitions.

Jing Rebecca Li, a junior at the University of Michigan, is the winner of the fifth annual Alice T. Schafer Mathematics Prize. In addition to the winner, Patricia Hersh, a junior at Harvard University, Julia J. Rehmeyer, a senior at Wellesley College, and Nina Zipser, a senior at Columbia University, were declared runners-up. Two honorable mention citations were awarded to Jennifer M. Switkes, Harvey Mudd College, and Yi Wang, Bryn Mawr College.

Schafer Prize Winner: Jing Rebecca Li

Our winner Jing Rebecca Li, a junior at the University of Michigan, is a relative newcomer to mathematics. An outstanding mechanical engineering student, with a published paper on the deformation of bicrystals, Li switched to mathematics only last fall. Since then, she has excelled in demanding undergraduate and graduate courses, performing at the level of the best graduate students. The summer before she entered the mathematics Honors Program, she participated in the National Science Foundation's (NSF) Research Experiences for Undergraduates (REU) at the Geometry Center, University of Minnesota, where she studied computer music. In his letter of nomination for the Schafer Prize, one of her professors writes, "I have taught some very bright undergraduates, but I would rank her in the upper one-half percent of the undergraduates (male and female) I have known." In addition to praising Li for her remarkable achievement in mathematics in so short a time, Li's nominators commented on her impressive record in such diverse disciplines as physics, computer science, philosophy, Russian literature, and Asian history! Her letters of recommendation for the Prize stressed her determination, stemming from her "burning desire to learn," her love of mathematics, and her energy.

Runner-Up: Patricia Hersh

Runner-up Patricia Hersh, a junior at Harvard University, has already written two research papers on graph theory, which have been submitted for publication. One of her nominators writes, "She is comparable to the best students I have seen in my classes." Last summer, she participated in an REU program at the University of Minnesota, Duluth. The director writes, "In my 17 years running summer research programs it has been my experience that each year only one or two of the participants seem to have the ideal blend of talent, work ethic and personality. Patricia Hersh is one of these people." In previous summers, she served as a counselor at an NSF mathematics program at Boston University for talented high school students, of which she herself was an alumna.

Runner-Up: Julia J. Rehmeyer

Runner-up Julia J. Rehmeyer is a senior at Wellesley College. In a letter of recommendation, one of her professors writes, "Ms.
Rehmeyer is certainly the strongest student I have known in my 14 years at Wellesley, but that doesn't describe how different she is from any other student I have known here. She is extraordinarily bright, self-motivated, and thorough, with an intellectual maturity that would suit a mature mathematician." Rehmeyer's work at Wellesley and in undergraduate and graduate courses at MIT is outstanding. She has been awarded a National Science Foundation Graduate Fellowship.

Runner-Up: Nina Zipser

Runner-up Nina Zipser has been awarded Columbia University's prestigious Kellet Fellowship, for study at Cambridge University. A senior at Barnard College, she also won the competition for the mathematics department's Van Buren Prize. Referred to in a letter of nomination as "the overall best student I have taught," Zipser not only earned A's and A+'s in graduate mathematics courses, but is now working on two research projects: "the universality of lengths of closed geodesics in hyperbolic manifolds" and an experimental search for "degenerate groups."

Honorable Mention: Jennifer M. Switkes

Honorable mention awardee Jennifer M. Switkes is a senior at Harvey Mudd College majoring in both mathematics and physics who is commended by her nominators both for her outstanding work in courses and for her "original and ambitious research." She has won numerous awards and scholarships for her work in both physics and mathematics.

Honorable Mention: Yi Wang

Honorable mention awardee Yi Wang is a senior at Bryn Mawr College, where she is completing a double major in mathematics and economics. She is described as a "truly extraordinary student." She has participated in several research programs, including the Bryn Mawr-Spelman Summer Program, an REU at Mt. Holyoke, and a senior research program on wavelets.
{"url":"https://sites.google.com/site/awmmath/programs/schafer-prize/schafer-prize-awardees/schafer-prize-awardee-announcements/fifthannualalicetschaferprize","timestamp":"2014-04-16T13:37:54Z","content_type":null,"content_length":"45425","record_id":"<urn:uuid:6a288b24-53ca-4b97-a518-cedcf3c23d34>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining the field from the topology of the affine space over it.

The inspiration is from another question in which it is remarked that the passage to Zariski topology is a very interesting functor from Commutative Rings to Topological spaces. I am trying to understand how "faithful" this functor is. Any two fields have the same topology. But, with polynomial rings, we have more hope. So:

Let $K$, $L$ be two fields. Then if $Spec\ K[X]$ and $Spec\ L[X]$ are homeomorphic, does it follow that $K$ and $L$ are isomorphic?

Let $K$, $L$ be two fields. Then if $Spec\ K[X_1, \ldots , X_n]$ and $Spec\ L[X_1, \ldots , X_n]$ are homeomorphic, does it follow that $K$ and $L$ are isomorphic?

Tags: ag.algebraic-geometry, model-theory

Comments:
- The affine line over a field carries the cofinite topology, so its homeomorphism type depends only on its cardinality. – Qiaochu Yuan May 21 '10 at 17:52
- Ahh, posted as an answer as you posted your comment! – Charles Siegel May 21 '10 at 17:53
- Surely the answer is no (by which I mean I do not have a proof). At the very least, if $K$ is the algebraic closure of $\mathbb{Q}$ and $L$ is the algebraic closure of $\mathbb{Q}(t)$, then $\operatorname{Spec} K[t_1,\ldots,t_n]$ and $\operatorname{Spec} L[t_1,\ldots,t_n]$ should be homeomorphic for all $n$. It seems reasonable to conjecture that the only invariant of $K$ that these spectra see is the cardinality of $K$. – Pete L. Clark May 21 '10 at 17:59

2 Answers

The following paper of Hrushovski-Zilber shows that if we restrict our attention to algebraically closed fields $F$, then $F$ is uniquely determined up to isomorphism by its "Zariski geometry". Presumably an examination of the proof will show that an algebraically closed field $F$ is determined by $Spec(F[x_1, \ldots, x_n])$ for sufficiently large $n$.

Hrushovski, Ehud; Zilber, Boris (1996). "Zariski Geometries". Journal of the American Mathematical Society 9: 1–56.

Another "reference": http://en.wikipedia.org/wiki/Zariski_geometry

Comments:
- Interesting. The relevant result seems to be Proposition 1.1. If you take $C$ to be the affine line over an algebraically closed field $F$ and $C'$ to be the affine line over an algebraically closed field $F'$, then the result says that an isomorphism of Zariski geometries from $C$ to $C'$ induces an isomorphism from $F$ to $F'$. It also says that a morphism of Zariski geometries is an isomorphism if it induces a homeomorphism on $C^n$ for all $n$.... – Pete L. Clark May 21 '10 at 20:17
- ...But what I wasn't immediately able to see was whether a morphism of Z.G.'s from $C$ to $C'$ involves more data than just a collection of such homeomorphisms -- e.g. whether various geometric compatibility conditions need to be satisfied. I would be very interested to see these details worked out. – Pete L. Clark May 21 '10 at 20:19
- The experts in this area (such as Dave Marker) would know all about this. Hopefully one of them will stumble across this question and save us the trouble of working out the details ourselves! – Simon Thomas May 21 '10 at 20:32
- In this type of argument you usually reconstruct the field from a 2-dimensional family of curves on $C\times C$. So probably you can get by knowing that the homeomorphism between $C$ and $C_1$ lifts to homeomorphisms of the Zariski topologies of $C^n$ and $C_1^n$ for $n=2,3,4$. – Dave Marker May 24 '10 at 6:56
- A simple version of this idea would be if $K$ and $L$ are algebraically closed fields and we had a homeomorphism between $P_K^2\times P_K^2$ and $P_L^2\times P_L^2$. Let $V\subset P^2_K \times P^2_K$ be the incidence variety for lines in $P^2$. Desargues' ideas allow us to reconstruct $K$ from $V$. A homeomorphism would give us an isomorphic variety in $P_L^2\times P_L^2$. This would allow us to interpret $K$ in $L$. But by a result of Poizat then $K$ and $L$ would be isomorphic. – Dave Marker May 24 '10 at 12:53

This is false. Let $K=\bar{\mathbb{Q}}$ and $L=\bar{\mathbb{F}}_2$. These are clearly both algebraically closed, of different characteristics, so $K\not\cong L$. However, if we ONLY look at the topology, $\mathrm{Spec}(K[x])$ and $\mathrm{Spec}(L[x])$ will be countable sets with the finite complement topology on the closed points, with a single generic point, so they're homeomorphic. For algebraically closed fields, the TOPOLOGY on the affine line over the field is determined by the cardinality.

For higher dimensions, it's less clear to me, because you might be able to recover characteristic (I'm a char 0 kind of person, so I don't know) from how the various curves/hypersurfaces sit inside it.

Comments:
- @CS: I don't know for sure, but I am skeptical that you can recover the characteristic of an infinite field from the topologies on affine spaces over it. – Pete L. Clark May 21 '10
- I'm also skeptical, but my positive characteristic intuition is dismal enough that I'm not willing to come out and say it. – Charles Siegel May 21 '10 at 18:11
- @Charles: um, well, in that case, maybe you should...never mind, I'm sure we'll turn out to be right. :) – Pete L. Clark May 21 '10 at 18:56
{"url":"http://mathoverflow.net/questions/25512/determining-the-field-from-the-topology-of-the-affine-space-over-it","timestamp":"2014-04-18T16:10:49Z","content_type":null,"content_length":"69590","record_id":"<urn:uuid:db72f916-6985-4489-8f65-dedd8633807b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
simple differential equation
I want to solve the simple differential equation $2x(1+\sqrt{x^2-y}) = y'\sqrt{x^2-y}$. Thanks in advance.

Well, there's probably some obviously simple approach. In the meantime: Make the substitution $w^2 = x^2 - y \Rightarrow 2w \frac{dw}{dx} = 2x - \frac{dy}{dx} \Rightarrow \frac{dy}{dx} = 2x - 2w \frac{dw}{dx}$. Then after a little algebra the DE becomes $-w^2 \frac{dw}{dx} = x$, which is separable. Solve for $w$ and then substitute back $w^2 = x^2 - y$.
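To finish the sketch (with constants absorbed along the way):

\[
-w^2\,dw = x\,dx
\;\Longrightarrow\;
-\frac{w^3}{3} = \frac{x^2}{2} + C
\;\Longrightarrow\;
\left(x^2-y\right)^{3/2} = -\frac{3}{2}\,x^2 + C',
\]

which gives the solution implicitly; solving for $y$ is straightforward if an explicit form is wanted.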
{"url":"http://mathhelpforum.com/calculus/34856-simple-differential-equation.html","timestamp":"2014-04-18T18:34:22Z","content_type":null,"content_length":"34142","record_id":"<urn:uuid:417725d9-1932-46bf-b138-c7a2dcc27624>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method.
PMID: 20727135

Abstract:
BACKGROUND: A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks. Some of these methods are based on distances among taxa; in practice, the distance-based methods run faster than the others. The Neighbor-Net (N-Net) is a distance-based method. The N-Net produces a circular ordering from a distance matrix, then constructs a collection of weighted splits using the circular ordering. The SplitsTree program uses these weighted splits to draw a phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem. The N-Net is a heuristic algorithm for finding the optimal circular ordering which is based on the neighbor-joining algorithm.
RESULTS: In this paper, we present a heuristic algorithm to find an optimal circular ordering based on the Monte-Carlo method, called the MC-Net algorithm. In order to show that MC-Net performs better than N-Net, we apply both algorithms to different data sets. Then we draw the phylogenetic networks corresponding to the outputs of these algorithms using SplitsTree and compare the results.
CONCLUSIONS: We find that the circular ordering produced by the MC-Net is closer to the optimal circular ordering than that of the N-Net. Furthermore, the networks corresponding to outputs of MC-Net drawn by SplitsTree are simpler than those of N-Net.

Authors: Changiz Eslahchi; Mahnaz Habibi; Reza Hassanzadeh; Ehsan Mottaghi
Publication Type: Journal Article; Research Support, Non-U.S. Gov't
Journal: BMC Evolutionary Biology 2010, 10:254 (published 2010-08-20)
Affiliation: Faculty of Mathematics, Shahid Beheshti University, GC, Tehran, Iran. ch-eslahchi@sbu.ac.ir

Full Text
BMC Evol Biol. Copyright ©2010 Eslahchi et al; licensee BioMed Central Ltd.
ISSN: 1471-2148 (open access). Publisher: BioMed Central. Received: 9 September 2009. Accepted: 20 August 2010. Published electronically: 20 August 2010. Volume 10, page 254. Publisher Id: 1471-2148-10-254. PubMed Id: 20727135. DOI: 10.1186/1471-2148-10-254

MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
Changiz Eslahchi (1), Email: ch-eslahchi@sbu.ac.ir
Mahnaz Habibi (1), Email: mhabibi@ipm.ir
Reza Hassanzadeh (1,2), Email: re.hassanzadeh@mail.sbu.ac.ir
Ehsan Mottaghi (1), Email: mottaghi.ehsan@mail.sbu.ac.ir
(1) Faculty of Mathematics, Shahid Beheshti University, G.C., Tehran, Iran
(2) School of Computer Science, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran

Background
Phylogenetics is concerned with the construction and analysis of phylogenetic trees or networks to understand the evolution of species, populations, and individuals. Evolutionary processes such as hybridization between species, lateral transfer of genes, recombination within a population, and convergent evolution can all lead to evolutionary histories that are distinctly non-treelike. Moreover, even when the underlying evolution is treelike, the presence of conflicting or ambiguous signals can make a single tree representation inappropriate. In these situations, phylogenetic network methods can be particularly useful. A phylogenetic network is a generalization of phylogenetic trees that can represent several trees simultaneously. For any network construction method, the conflicting signals should be represented in the network, but it is vital that the network does not depict more conflict than is found in the data. At the same time, when the data fit a tree well, the method should return a network that is close to a tree. Recently, in addition to biology, phylogenetic network methods have been widely used for classifying other types of data, such as those found in linguistics, music, etc. There are many different methods for constructing phylogenetic trees or networks based on a distance matrix, such as ME (minimum evolution) [^1], LS (least squares) [^2,^3], NJ (neighbor-joining) [^4], AddTree [^5], N-Net (neighbor-net) [^6] and Q-Net [^7]. All these methods are called distance-based methods. ME is one of the most well-known methods. It was first introduced by Kidd and Sgaramella-Zonta [^1]. Given a distance matrix, the ME principle consists of selecting the tree whose length (the sum of its branch lengths) is minimal among all tree topologies for the taxa. Comparative studies of tree-building methods show that ME generally is an accurate criterion for selecting a true tree. Rzhetsky and Nei have shown that the ME principle is statistically consistent when branch lengths are assigned by ordinary least-squares (OLS) fitting [^8]. In the OLS framework, we simply minimize
\[
\sum_{\{i,j\}\subseteq X} (\delta_{ij} - d_{ij})^2,
\]
where $\delta_{ij}$ is an estimation of the input $d_{ij}$ and $X$ is the set of taxa. In fact, the main goal is to find a tree whose induced metric is as close as possible to $d_{ij}$. The LS was first introduced in [^2] and [^3]. Nearly 20 years have passed since the landmark paper in Molecular Biology and Evolution introducing NJ [^4]. The method has become the most widely used method for building phylogenetic trees from distances. Gascuel and Steel showed that NJ is a greedy algorithm for the ME principle [^9]. The N-Net is a hybrid of NJ and split decomposition [^10]. It is applicable to data sets containing hundreds of taxa.
The N-Net is an algorithm for constructing phylogenetic networks. Split decomposition, implemented in SplitsTree [^11], decomposes the distance matrix into simple components based on weighted splits. These splits are then represented using a special type of phylogenetic network called a split network. The N-Net works in a similar way: it first produces a circular ordering from the distance matrix and then constructs a collection of weighted splits. Dan Levy and Lior Pachter showed that the N-Net is a greedy algorithm for the traveling salesman problem that minimizes the balanced length of the split system at every step, and that it is optimal for circular distance matrices [^12]. Balanced minimum evolution (BME) is designed under the ME principle [^13]; BME is a special version of the ME principle in which the tree length is estimated by weighted least squares [^13]. In this work, we introduce the MC-Net algorithm (Monte-Carlo Network algorithm), which works in a similar way: first, it finds a circular ordering for the taxa based on the Monte-Carlo method with simulated annealing; it then extracts splits from the circular ordering and uses non-negative least squares for weighting the splits. We compare the results of the N-Net and the MC-Net on several data sets.

Methods
A split of a given set X of taxa is a bipartition of the set X into two non-empty subsets of X. A split is called trivial if one of the two subsets contains only one taxon. Let T be a non-empty tree whose leaves are labeled by the set of taxa $X = \{x_1,\ldots,x_n\}$. Every edge e of T defines a split S = A|B, where A and B are the two sets of taxa contained in the two components of T - e. For example, Figure 1 shows an eight-leaf tree. Removing the edge e from the tree produces the two sets of leaves $A=\{t_3,t_4,t_5\}$ and $B=\{t_1,t_2,t_6,t_7,t_8\}$. In an edge-weighted tree, the weight of each edge is assigned to its corresponding split. The phyletic distance between any two taxa x and y in an edge-weighted tree is the sum of the weights of the edges along the path from x to y. Hence, the phyletic distance between x and y equals the sum of the split weights for all those splits in which x and y belong to separate components. Two different splits $S_1 = A_1|B_1$ and $S_2 = A_2|B_2$ are compatible if one of the following conditions holds:
\[
A_1\subseteq A_2,\qquad A_1\subseteq B_2,\qquad B_1\subseteq A_2 \qquad\text{or}\qquad B_1\subseteq B_2.
\]
A collection of splits is called compatible if all possible pairings of splits are compatible. A compatible collection of splits is represented by a phylogenetic tree [^14,^15]. Dress and Huson introduced SplitsTree to display more complex evolutionary patterns [^16]. For a set of incompatible splits, SplitsTree outputs a split network using bands of parallel edges. A circular collection of splits is a mathematical generalization of a compatible collection of splits. Formally, a collection of splits of X is circular if there exists an ordering $x_1,\ldots,x_n$ of X such that every split is of the form $\{x_i, x_{i+1},\ldots,x_j\}\,|\,X - \{x_i,\ldots,x_j\}$ for some i and j, $1 \le i \le j \le n$. A compatible collection of splits is always circular [^10]. On the other hand, the class of circular collections of splits contains the class of collections of splits corresponding to a tree. Andreas Dress and Daniel Huson proved that circular collections of splits always have a planar splits graph representation [^16]. A distance matrix is circular (also called Kalmanson) if it gives the phyletic distances for a circular collection of splits with positive weights. Because compatible splits are circular, treelike distances are circular too [^6].
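As a concrete illustration of the gap between the two notions (an example added here, not taken from the paper): for four taxa with the circular ordering $x_1,x_2,x_3,x_4$, consider the splits

\[
S_1=\{x_1,x_2\}\,|\,\{x_3,x_4\},
\qquad
S_2=\{x_2,x_3\}\,|\,\{x_1,x_4\}.
\]

Both are circular (each side is a block of consecutive taxa in the ordering), yet none of the four inclusions required for compatibility holds, so no single tree displays both splits; a split network does.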
As mentioned above, the ME principle consists of selecting a tree whose length is minimal. In fact, the ME principle is equivalent to finding a circular ordering $\sigma = \{x_{\sigma(1)},\ldots,x_{\sigma(n)}\}$ that minimizes the function
\[
\eta(\sigma) = \frac{1}{2}\sum_{i=1}^{n} d\bigl(x_{\sigma(i)}, x_{\sigma(i+1)}\bigr), \qquad x_{\sigma(n+1)} := x_{\sigma(1)},
\]
where $\Sigma$ is the set of all circular orderings of the taxa $x_1,\ldots,x_n$. We call the function $\eta$ the energy function, and any circular ordering that minimizes $\eta$ is called an optimal circular ordering. There are a number of different methods for constructing various kinds of phylogenetic networks. A phylogenetic network can be constructed from a collection of weighted splits, and the N-Net uses a circular ordering to construct such a collection. Since finding an optimal circular ordering is an NP-hard problem, we introduce a heuristic algorithm based on the Monte-Carlo method to find an optimal circular ordering. The MC-Net seeks an optimal circular ordering for the distance matrix and then extracts a collection of weighted splits based on that ordering.

Algorithm
In this section, a new algorithm called the MC-Net is presented to construct a set of weighted splits for a taxa set $X = \{x_1,\ldots,x_n\}$ with a given distance matrix. The MC-Net consists of two steps. In the first step, we find a circular ordering. In the second step, the splits obtained from the circular ordering are weighted. The core of the first step consists of two procedures, INITIAL and Monte-Carlo. INITIAL is a greedy algorithm that produces a circular ordering (the initial circular ordering) in the following way: Suppose $x_{\sigma(1)},\ldots,x_{\sigma(k)}$ are already ordered, and let $\bar x$ be an element of $S = X - \{x_{\sigma(1)},\ldots,x_{\sigma(k)}\}$ such that
\[
d(\bar x, r) = \min_{x\in S}\,\min\bigl\{ d(x, x_{\sigma(1)}),\; d(x, x_{\sigma(k)}) \bigr\}
\quad\text{for some } r\in\{x_{\sigma(1)}, x_{\sigma(k)}\}.
\]
Let b be the weight vector of splits, then the phyletic distance vector is p = Ab. Now, the ordinary least squares(OLS) is used to estimate b by the following standard formula If we discard splits with negative weights and leave the remaining splits unchanged, the weight of the remaining splits are often grossly overestimated. Similar to the N-Net algorithm, we compute the optimal least square estimates with a non-negative constraint. In this paper, we use the FNNLS algorithm [^18]. Results and Discussion In this section, we compare the results of the MC-Net and the N-Net on some data sets. We use SplitsTree4 program [^19] for drawing phylogenetic networks. Due to the limitation of space, we insert only six figures in this article. Data sets One of the data sets, a collection of 110 Salmonella MLST Data, was obtained from authors of the N-Net. The other data sets presented as the examples in SplitsTree4 program (version 4.10): Its(46 taxa), Jsa (46 taxa), Mammals (30 taxa), Primates (12 taxa), Rubber (23 taxa), Dolphins (36 taxa) and Myosin (143 taxa). Optimal threshold for cooling coefficient and T[low] There are two parameters, T[low ]and cooling coefficient, in the Monte-Carlo procedure. We first adjust T[low ]between 10^5 and 0.2 to obtain the best cooling coefficient. The value of energy function and running time of algorithm for each T[low ]for JSA data are given in Figure 2 (for the other data sets, the figures are the same as JSA). According to Figure 2, when cooling coefficient is 0.95, running time of the algorithm compared to other coefficients increases considerably. On the other hand, the value of energy function for 0.95 or 0.9 as a cooling coefficient is significantly better than the other cooling coefficients. Hence, we conclude that the best value of energy function with respect to running time of the algorithm is achieved when cooling coefficient is 0.9 and T[ low ]< 10^-3. The initial test for performance of our method is done by calculating the value of energy function for circular orderings obtained by the MC-Net and the N-Net (Table 2). The first two rows of Table 2 show that in all data sets except Salmonella, the value of energy function for the MC-Net is less than those obtained from the N-Net. The interesting feature of the MC-Net algorithm is in finding different circular orderings by changing initial ordering. So, the MC-Net algorithm could take the circular ordering obtained by the N-Net as initial ordering. The third row of Table 2 shows the values of energy function for circular orderings achieved by the MC-Net with the circular ordering obtained by the N-Net as an initial ordering. For four data sets, Its, Rubber, Salmonella, Myosin, the third row indicates better results than the first row. But for the other data sets, the conclusions mentioned above are the vice versa. Another test for the performance of our method is comparing the number of splits obtained by both the algorithms. In Table 3, the number of splits of circular orderings obtained by the MC-Net and the N-Net on different data sets are shown. In all data sets the number of splits obtained by the MC-Net is less than the N-Net except Primates. In this case, these two numbers are equal. Let d be the input distance vector and P and P' are the phyletic distance vector of weighted splits obtained by the MC-Net and the N-Net, respectively. In Table 4, the value of norm of P - d and P' - d for each data set are shown. The norm of P - d is less than P' - d in all data sets even in Primates. 
It means that the results of the MC-Net algorithm give a better approximation to the input distances. To illustrate the difference between the two algorithms, we present some examples of networks obtained by both the MC-Net and the N-Net using SplitsTree4 (Figures 3, 4, 5, 6, 7 and 8). It is obvious that both algorithms give the same classification of taxa and exhibit the same major splits. For example, in Figures 5 and 6 we highlight some edges such that by removing the same-colored edges, the same clustering of taxa is obtained. But according to what we see in Tables 3 and 4, the split networks obtained by the MC-Net are less complicated than the split networks obtained by the N-Net; that is, the networks obtained by the MC-Net have less noise than the networks obtained by the N-Net. According to Corollary 1 (see the Appendix), as $T \to 0^+$ and $t \to \infty$, the MC-Net finds an optimal circular ordering with probability 1. We examined our algorithm on several treelike distance matrices and it returned the corresponding trees quickly. The MC-Net has been implemented in Matlab and is available for download at http://bioinf.cs.ipm.ac.ir/softwares/mc.net.

Conclusions
In this work, we propose an algorithm, MC-Net, which is a distance-based method for constructing phylogenetic networks. The MC-Net scales well and can quickly produce detailed and informative networks for a large number of taxa. We compare the performance of the MC-Net with the N-Net on eight different data sets. We have shown (Tables 2, 3 and 4) that the MC-Net performs better than the N-Net for almost all test cases, and that the networks obtained by the MC-Net are simpler than those of the N-Net while showing the same major splits. The N-Net is a part of the SplitsTree program, so the results of the MC-Net could be used in the SplitsTree program too.

Authors' contributions
CE, RH and EM performed initial studies. MH designed the algorithm. RH and EM analyzed the data sets. All authors participated in the writing of the manuscript. All authors read and approved the final manuscript.

Appendix
Let $S = \{E_1,\ldots,E_s\}$ be a finite set of states, and consider a physical process having these discrete states at time t. A Markov chain is a stochastic model of this system such that the state of the system at time t + 1 depends only on the state of the system at time t. Let $X_0, X_1,\ldots$ be a collection of Markov random variables such that $X_n$ is the state of the system at time n. Let $p_{ij}$ be the probability that the system enters the state $E_j$ from the state $E_i$, where $i, j \in \{1,\ldots,s\}$. The matrix $P = (p_{ij})_{1\le i,j\le s}$ is called the transition matrix. A probability distribution $q = (q_1,\ldots,q_s)$, where $q_i$ is the probability that the system starts its movement from the state $E_i$, is called the initial probability distribution. The Markov chain is irreducible if for all $i, j \in \{1,\ldots,s\}$ there exists $n > 0$ such that $p_{ij}^{(n)} > 0$, where
\[
p_{ij}^{(n)} = \mathrm{prob}\bigl(X_{n+\alpha}=E_j \mid X_\alpha=E_i\bigr)\quad\text{for every }\alpha.
\]
In other words, the Markov chain is irreducible if there exists n such that the probability that the system enters the state $E_j$ from the state $E_i$ after n steps is positive. The irreducible Markov chain is called aperiodic if for some state $E_j$,
\[
\gcd\{\, n \ge 1 : p_{jj}^{(n)} > 0 \,\} = 1.
\]
Theorem 1 (Convergence to the stationary distribution of a Markov chain, [^20]). If the Markov chain is irreducible and aperiodic, then
\[
\lim_{t\to\infty}\mathrm{prob}(X_t=E_j)=\pi_j, \qquad j=1,\ldots,s,
\]
where $\pi = (\pi_1,\ldots,\pi_s)$ is a unique probability distribution and $\pi_j=\sum_{i=1}^{s}\pi_i p_{ij}$.
The probability distribution $\pi$ is called the stationary probability of the Markov chain. It means that if $P$ is the transition matrix and $P^{(t)}$ is the $t$-th power of $P$, then as $t \to \infty$ the $j$-th column of the transition matrix becomes approximately equal to $\pi_j$. In the Monte-Carlo algorithm, a special kind of Markov chain is used. Let $\Sigma$ be the finite set of states and let $q=(1/|\Sigma|,\ldots,1/|\Sigma|)$ be the initial probability distribution. For each state $i$ the neighborhood of $i$, $N(i)$, is defined as the set of all states that are reachable from $i$ by one movement. In this system the set of neighborhoods has to satisfy the following properties:
1. $i \notin N(i)$.
2. $i \in N(j) \Leftrightarrow j \in N(i)$.
3. If $i \ne j$, then there exist $i_1,i_2,\ldots,i_l \in \Sigma$ such that $i_1 \in N(i)$, $i_{k+1} \in N(i_k)$ for $k=1,\ldots,l-1$, and $j \in N(i_l)$.
The matrix $P^T=(p^T_{ij})_{i,j\in\Sigma}$ is defined as the transition matrix by
\[
p^T_{ij}=\begin{cases}
\dfrac{1}{|N(i)|} & \text{if } j\in N(i) \text{ and } \eta(j)\le\eta(i),\\[4pt]
\dfrac{e^{-(\eta(j)-\eta(i))/T}}{|N(i)|} & \text{if } j\in N(i) \text{ and } \eta(j)>\eta(i),\\[4pt]
1-\sum_{k\in\Sigma,\,k\ne i}p^T_{ik} & \text{if } i=j,\\[4pt]
0 & \text{otherwise,}
\end{cases}
\]
where $T$ is a positive constant (the constant temperature). The third property of the neighborhoods shows that this Markov chain is irreducible. Also, if $p^T_{ii}>0$, then since $P^T$ contains non-negative entries, $(P^T)^{(t)}_{ii}>0$ for all $t \ge 0$. So it is a finite, aperiodic and irreducible Markov chain. Theorem 1 shows that for each constant temperature $T$ and $i \in \Sigma$ there exists a stationary probability distribution $\pi^T_i$ such that
\[
\lim_{t\to\infty}\mathrm{prob}(X_t=E_i)=\pi^T_i,
\qquad\text{where}\qquad
\pi^T_i=\frac{e^{-\eta(i)/T}}{\sum_{j\in\Sigma}e^{-\eta(j)/T}}
\]
(see page 45 in [^20]).
Proposition 1. Let $(\pi^T_i)_{i\in\Sigma}$ be the probability distribution
\[
\pi^T_i=\frac{e^{-\eta(i)/T}}{\sum_{j\in\Sigma}e^{-\eta(j)/T}},
\]
and suppose that $m_0 = \min\{\eta(i) \mid i \in \Sigma\}$ and $\eta_0 = \{i \in \Sigma \mid \eta(i) = m_0\}$. Then for each $i\in\Sigma$, $\lim_{T\to 0^+}\pi^T_i=\pi^0_i$, where
\[
\pi^0_i=\begin{cases}\dfrac{1}{|\eta_0|} & \text{if } i\in\eta_0;\\[4pt] 0 & \text{otherwise.}\end{cases}
\]
Proof: The proof is presented in [^20] (Claim 2.8 and Claim 2.9).
Corollary 1. Let $\Sigma$ be the finite set of states. Then for each $i \in \Sigma$ we have
\[
\lim_{T\to 0^+}\lim_{t\to\infty}\mathrm{prob}(X_t=E_i)=\pi^0_i.
\]
Corollary 1 illustrates that by cooling the temperature ($T \to 0^+$), the system enters one of the states of $\eta_0$ with probability 1 as $t \to \infty$. In this article, we define the set of all circular orderings of taxa as the finite set of states. Our definition of the neighborhood in the MC-Net satisfies the three properties above, and every element of $\eta_0$ is an optimal circular ordering. Therefore, the MC-Net yields a circular ordering with approximately minimal energy function.

Acknowledgements
We are grateful to the Faculty of Mathematics of Shahid Beheshti University. This work is supported in part by IPM (cs-1385-02). The authors would like to thank Prof. Hamid Pezeshk for many useful comments.

References
1. Kidd KK, Sgaramella-Zonta LA: Phylogenetic analysis: concepts and methods. Am J Human Genetics 1971, 23:235-252.
2. Cavalli-Sforza LL, Edwards AWF: Phylogenetic analysis: models and estimating procedures. Am J Hum Genet 1967, 19:233-257.
3. Fitch WM, Margoliash E: Construction of phylogenetic trees. Science 1967, 155:279-284.
4. Saitou N, Nei M: The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular Biology and Evolution 1987, 4:406-425.
5. Sattath S, Tversky A: Phylogenetic similarity trees. Psychometrika 1977, 42:319-345.
6. Bryant D, Moulton V: Neighbor-Net: an agglomerative method for the construction of planar phylogenetic networks. Molecular Biology and Evolution 2004, 21:255-265.
7. Grunewald S, Forslund K, Dress A, Moulton V: QNet: an agglomerative method for the construction of phylogenetic networks from weighted quartets. Molecular Biology and Evolution 2007, 24:532-538.
8. Rzhetsky A, Nei M: Theoretical foundation of the minimum-evolution method of phylogenetic inference. Mol Biol Evol 1993, 10:1073-1095.
9. Gascuel O, Steel M: Neighbor-joining revealed. Molecular Biology and Evolution 2006, 23:1997-2000.
10. Bandelt HJ, Dress AWM: Split decomposition: a new and useful approach to phylogenetic analysis of distance data. Mol Phyl Evol 1992, 1:242-252.
11. Huson DH: SplitsTree: a program for analyzing and visualizing evolutionary data. Bioinformatics 1998, 14:68-73.
12. Levy D, Pachter L: The Neighbor-Net algorithm. Advances in Applied Mathematics, in press.
13. Desper R, Gascuel O: The minimum-evolution distance-based approach to phylogenetic inference. In Mathematics of Evolution and Phylogeny. Edited by Gascuel O. Oxford Univ. Press; 2005.
14. Semple C, Steel M: Cyclic permutations and evolutionary trees. Adv Appl Math 2004, 32(4):669-680.
15. Semple C, Steel M: Phylogenetics. Oxford, UK: Oxford University Press; 2003.
16. Dress A, Huson DH: Constructing splits graphs. IEEE/ACM Transactions in Computational Biology and Bioinformatics 2004, 1:109-115.
17. Bandelt H-J, Dress A: A canonical decomposition theory for metrics on a finite set. Adv Math 1992, 92:47-105.
18. Bro R, de Jong S: A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics 1997, 11(5):393-401.
19. Huson D, Bryant D: Application of phylogenetic networks in evolutionary studies. Molecular Biology and Evolution 2006, 23:254-267.
20. Clote P, Backofen R: Computational Molecular Biology. New York: Wiley; 2000.

Figure 1. The split S = A|B is obtained by removing the edge e of T.
Figure 2. The value of the energy function (b) and the running time of the algorithm (a) for each T_low on the Jsa data.
Figure 3. The N-Net network for the Rubber data set.
Figure 4. The MC-Net network for the Rubber data set.
Figure 5. The N-Net network for the Mammal data set.
Figure 6. The MC-Net network for the Mammal data set.
Figure 7. The N-Net network for the Salmonella data set. Group A includes the isolates Sty54, Sty54*, Sty2, She9, Sty87, Snp40*, Sty13, Snp41*, Sen5, Sha160, Sha141, Sty20*, Sha58, Sse18, Sha71, Sty31. Group B includes the isolates Sty61, Sha148, Smb-17, Sag75, Sha124. Group C includes the isolates UND3, Sha150, Sha173, Sen23*, Sha153, Sha140, San96, Sen30*, Sen24*, Sha138, Sha176, Sha130, Sha164, Sha157, Sen29*, Sca93, Sha122, Sht20, Sha186. Group D includes the isolates She3, Sha50, Sse95, Sha56, Sen24, Sen34, Sha177, Sty13*, Swo44, Sty86, Ste41, Sha77, UND80.
Group E includes the isolates Ssc40, Sse28, Sty89, Sty15*, Ske69, UND110, Sha49, Sen4, Sha48, Sha165, Sty92, Snp33*, Sty52, UND109, Sha131, Sha102, Sty6, Sha175.
Figure 8. The MC-Net network for the Salmonella data set. Groups A-E are the same as in Figure 7.

Table 1. Pseudo code of the Monte-Carlo algorithm with simulated annealing.
Input: T (initial temperature), σ0 (initial ordering), T_low (low temperature), t (a constant number)
σ = σ0
While T > T_low
    Repeat t times
        choose a random σ~ ∈ N(σ)
        If η(σ~) ≤ η(σ)
            σ = σ~
        Else
            x = random(0, 1)
            If x < e^((−η(σ~)+η(σ))/T)
                σ = σ~
    T = T * 0.9
Return σ and η(σ)

Table 2. Values of the energy function for the circular orderings obtained by the N-Net, the MC-Net, and the MC-Net started from the N-Net's ordering.
Data set     Its      Jsa      Mammals  Primates
N-Net        0.4096   0.2808   4.4275   2.1465
MC-Net       0.4079   0.2728   4.4172   2.1410
start N-Net  0.3979   0.2767   4.4202   2.1410

Data set     Rubber   Dolphins  Salmonella  Myosin
N-Net        0.7723   2.2       0.2546      43.8199
MC-Net       0.7596   2.1667    0.2575      43.8019
start N-Net  0.7547   2.2       0.2515      43.6935

Table 3. The number of splits obtained by the MC-Net and the N-Net for all data sets.
Data set  Its  Jsa  Mammals  Primates
N-Net     110  83   103      34
MC-Net    105  78   99       34

Data set  Rubber  Dolphins  Salmonella  Myosin
N-Net     55      67        107         520
MC-Net    53      62        90          507

Table 4. The values of the norms ‖P − d‖ (MC-Net) and ‖P′ − d‖ (N-Net) for all data sets.
Data set  Its     Jsa     Mammals  Primates
N-Net     0.0444  0.0329  0.0717   0.0385
MC-Net    0.0358  0.0292  0.0648   0.0358

Data set  Rubber  Dolphins  Salmonella  Myosin
N-Net     0.0362  0.1068    0.0487      0.0291
MC-Net    0.0316  0.1019    0.0405      0.0207
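For readers who want Table 1 in executable form, here is a minimal Python sketch (an illustration only: it assumes the cyclic tour-length form of the energy function given in the Methods section, and the authors' reference implementation is in Matlab):

```python
import math
import random

def energy(order, D):
    """Energy of a circular ordering: half the length of the closed tour
    through the taxa (the tour-length form of the energy function)."""
    n = len(order)
    return 0.5 * sum(D[order[i]][order[(i + 1) % n]] for i in range(n))

def neighbor(order):
    """Move one interior element to the end, matching the paper's N(sigma)."""
    k = random.randint(1, len(order) - 2)   # 2 <= k <= n-1 in 1-based indexing
    return order[:k] + order[k + 1:] + [order[k]]

def mc_net_ordering(D, T=1.0, T_low=1e-3, cooling=0.9, t=100):
    """Simulated annealing over circular orderings, following Table 1."""
    order = list(range(len(D)))              # any initial ordering may be used
    while T > T_low:
        for _ in range(t):
            cand = neighbor(order)
            dE = energy(cand, D) - energy(order, D)
            # Accept downhill moves always, uphill moves with prob e^(-dE/T).
            if dE <= 0 or random.random() < math.exp(-dE / T):
                order = cand
        T *= cooling                          # cooling coefficient 0.9
    return order, energy(order, D)
```

In the paper, the resulting ordering is then turned into splits and weighted with FNNLS; in Python, scipy.optimize.nnls would be the natural drop-in for that step.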
{"url":"http://www.biomedsearch.com/nih/MC-Net-method-construction-phylogenetic/20727135.html","timestamp":"2014-04-18T06:22:38Z","content_type":null,"content_length":"63102","record_id":"<urn:uuid:4a77f7c6-fe3d-4111-b424-68803b72ed04>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Six Thought Experiments Concerning the Nature of Computation (Rudy Rucker) These are six very short stories, a few of which have mathematical themes. In the first story, Lucky Number, a game programmer spots some "lucky numbers" spray painted on a train. On a whim, he uses them as the rule for a new cellular automaton. Cellular automata are a subject of much mathematical research; informally you could say that each one is a finite-dimensional discrete universe in which we set the "laws of physics" and see what happens. As it turns out, there are many interesting possible "universes" that can be constructed in this way. Perhaps the most famous is Conway's "Game of Life". The complexity of some of these systems has led some to suggest that the real universe (the one in which we live) might itself be a cellular automaton. (IMHO, this is not a ridiculous suggestion. But, it is ridiculous to take it too far. Since cellular automata do display great complexity, it is possible that something like the real universe could be a result of the right set of rules and the right initial condition. However, there are lots of different sorts of math that lead to complex results. For instance, differential equations also describe rather complex situations. There is no reason -- at this point -- to assume that the universe is best described as a cellular automaton as opposed to any other kind of math.) The story rolls with this idea, so that after the character uses the lucky numbers as a set of rules, the cellular automaton quickly runs through the history of the universe (including dinosaurs and Jesus). Then, as if that wasn't enough, there is one final twist. [One final note on cellular automata: IMHO, although Steve Wolfram is one of many researchers who has made contributions to this field of mathematics, he does have a tendency to overemphasize his own role and the significance of this topic. If you read his book or other writings on the subject, approach such claims with skepticism.] The second story mixes Buddhism with quantum physics to allow a woman to afford an apartment in California despite high rents...but is not very mathematical. The third story is only mathematical in the sense that it introduces the idea of a paint that uses activator-inhibitor processes like the Belousov-Zhabotinsky reaction (unfortunately misspelled in the story) to generate interesting patterns. (That is chemistry as much as it is mathematics, but such reactions can be studied mathematically as well.) There's no math at all in the fourth story (about a man who builds a machine to hold conversations for him, like the programs often displayed by researchers in artificial intelligence). Similarly, the fifth story -- about a woman who can see patterns in the rain -- is not really mathematical either. Finally, Hello Infinity is quite mathematical, and reminiscent of the ideas in Rucker's first novel, White Light. An accountant on his day off discovers that he has the ability to count to infinity (by visualizing each number in half the time it takes for the one before). Coincidentally, his scientist wife comes up with a new idea for a microscope (using octopus skin for the display!) on the same day. Putting the two together, they can see the infinitesimal...and consider quitting their jobs. This story was first published in Lifebox, the Seashell and the Soul and was reprinted just a year later in Mad Professor. As of October 2012, the story can also be freely downloaded from the author's Website.
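Since the review leans on Conway's Game of Life as the canonical example of a cellular automaton, a minimal implementation of one update step makes the "we set the laws of physics and see what happens" point concrete (this sketch is my illustration, not something from the review or the stories):

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal (wrap-around) grid."""
    # Count the eight neighbors of every cell by summing shifted copies.
    nbrs = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

# A "glider" that travels across the grid: simple rules, complex behavior.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
```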
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf596","timestamp":"2014-04-16T13:30:39Z","content_type":null,"content_length":"13202","record_id":"<urn:uuid:320f979b-548b-4145-9717-96cf60845558>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Genetic optimization for Trading Strategies using Rapidminer and R
December 5, 2010 By a Physicist

This is the second tutorial on RapidMiner and the R extension for trading, and the first in video. In the last example, the ROC obtained was not good enough to make money in this business, so we will try to optimize the trading strategy. Different objective functions for trading can be found in the literature; here we will use a (non-multiobjective) genetic algorithm to optimize our simple strategy.

The simple strategy defined is the following:
• The symbol used is "IBM" (you can use any other symbol)
• An SVM (Support Vector Machine) predicts the next day's close value; when the predicted value is greater than the previous day's, we obtain a buy signal, and otherwise a sell signal.
• The training data used are historical prices (close, high, volume) from 2006 to 2009
• The validation is done with historical information from 2010
• The following indicators are calculated: RSI, EMA 7, EMA 50, EMA 200, MACD and ADX.
• A two-day temporal window (lag) is created for all historical values.

A genetic algorithm is used to optimize the strategy: it modifies the input data by removing entries (for example, indicators) in order to maximize the ROC of the strategy. You can watch the generated model in the video:

The results are:
Initial ROC of the past tutorial
The trading % win in the past strategy:
Evolving the feature selection over 40 generations improves the final ROC performance. The improved ROC function is the following:
The % of winning trades is also improved.

It is possible to select other kinds of optimization algorithms and to maximize or minimize other values, such as drawdown or ratios like the Kelly criterion or the Sharpe ratio. In the next tutorial, I will improve the trading operation to make it as realistic as possible and will move the symbols into XML configuration files.

DOWNLOAD FILES
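To make the feature-selection idea concrete, here is a toy Python sketch of the genetic loop (illustrative only: the post's actual optimization runs inside RapidMiner, and the stand-in fitness below would be replaced by the SVM backtest returning the strategy's ROC):

```python
import random

def evolve_feature_mask(n_features, fitness, pop=20, gens=40, p_mut=0.1):
    """Toy genetic search over binary feature masks, maximizing `fitness`.
    Mirrors the post's idea: the GA decides which inputs (indicators,
    lagged prices) the learner is allowed to use."""
    population = [[random.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop // 2]                  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Stand-in fitness: in the real pipeline this would run the SVM backtest
# on the masked inputs and return the resulting strategy's ROC.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = evolve_feature_mask(8, lambda m: -sum(abs(a - b) for a, b in zip(m, target)))
```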
{"url":"http://www.r-bloggers.com/genetic-optimization-for-trading-strategies-using-rapidminer-and-r-2/","timestamp":"2014-04-17T04:09:15Z","content_type":null,"content_length":"43285","record_id":"<urn:uuid:04823439-dc20-40ab-a796-5e9a11153001>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Evolving portrait of the electron

About 150 years ago, people began to do the experiments that would lead to the discovery of the electron. They would study the electrical conductivity of rarefied gases. Finally, in 1896, J.J. Thomson and his collaborators proved that the colorful fog coming from the cathodes is composed of individual discrete corpuscles, the electrons. It just happened that within 10 years, electrons were seen by Henri Becquerel in a completely different context – as radioactive beta-radiation of some nuclei. In 1909, Robert Millikan and Harvey Fletcher performed their oil-drop experiment. The charge of an oil drop may be as small as one electron's elementary charge, which translates to an elementary force acting on the drops in an external electric field. Another force acting on the oil drops is the friction force and the equilibrium between these two forces determines the asymptotic speed of the oil drops. The 1910s and 1920s were all about the "quantum motion" of the electrons: people started to understand the structure of the atoms. First, Bohr offered his classical planetary model of the atom equipped with the extra ad hoc quantization rules for the orbits. Finally, in the mid 1920s, the correct laws of quantum mechanics were found and replaced Bohr's model that wasn't quite correct even though it had some desirable properties that had partially captured the "spirit" of the coming quantum revolution.

Motion of the electron vs structure of the electron
I want to spend the bulk of this article on discussions about the internal structure of the electron – and how it was evolving over the years. That's why I find it very important to clarify a widespread misconception. What is it? People tend to confuse the wave function of the electron in space with the internal structure of the electron. They're completely different things. The wave function of the electron in a hydrogen atom is a "cloud" of radius 0.1 nanometers or so. This distance scale doesn't determine the order-of-magnitude size of the electron; instead, it determines the size of the atom. Imagine that an electron is a ball of radius \(r_e\) for a while. The center of this ball may be located at various points in space. In the hydrogen atom, the location of the center of this electron ball is undetermined and the uncertainty is approximately 0.1 nanometers (Bohr radius times two). But the size of the ball – the electron – is a completely different, independent length that is much shorter.

Classical radius of the electron
Hendrik Antoon Lorentz began to discuss the radius of the electron as early as in 1892, four years before Thomson's discovery of the electron. He would offer a clever idea that the electrostatic potential energy of the electron – a distribution of the electric charge – is equal to the latent energy \(E=mc^2\) stored in the electron's rest mass. However, it was 13 years before Einstein found his special theory of relativity so all these "early glimpses" of relativity always had some bugs in them. For example, Max Abraham would claim that the aether theory implied \(E_0=(3/4)m_e c^2\) as the relationship between the interaction energy and the mass. Many of the dimensionless numerical coefficients of order one were simply wrong. By the classical electron radius, we usually mean \[ r_e = \frac{1}{4\pi\varepsilon_0} \frac{e^2}{m_ec^2}\approx 2.818 \times 10^{-15}\,{\rm m}.
\] If you place two charges \({\mathcal O}(e)\) at the distance \(r_e\) from one another, the electrostatic potential energy between them will be \(m_e c^2\). It's not shocking that the electrostatic potential energy of a "diluted electron" represented as a sphere of radius \(r_e\) or a ball of this radius is of the same order, up to coefficients such as \(3/5\) or \(1/2\). You should note that the classical electron radius is some 10,000-100,000 times shorter than the Bohr radius. The electron is much much smaller than the atoms. For the sake of completeness, let me also mention that there exists another length, the Compton wavelength of the electron \(h/m_e c\), which is – up to a factor of \(2\pi\) – the geometric average of the classical radius of the electron and the Bohr radius. Up to numbers of order one, it is exactly in between, about \(2.4\times 10^{-12}\,{\rm m}\). The longer Bohr radius and the shorter classical electron radius are the fine-structure-constant times longer and shorter, respectively. But let's return to the main story. If the electron is visualized as a sphere of radius \(r_e\), the electrostatic potential energy is of order \(E_0=m_e c^2\) which allows us to say that all of the electron's mass actually arises from the electrostatic energy. Of course, for this "explanation" to be a meaningful one rather than an example of circular reasoning, we should also find a theory explaining why the electron wants to keep its radius at \(r_e\), why it doesn't want to blow up. Recall that like-sign charges repel so "halves" or other "pieces" of the electron want to repel from the rest, too. If you want to succeed in this task (a task that is misguided, however, as we will mention in a moment), linear electrodynamics is no good. People proposed various non-linear modifications of classical electrodynamics, most famously the Born-Infeld model in the 1930s. The Lagrangian\[ \mathcal{L}=-b^2\sqrt{-\det\left(\eta+{F\over b}\right)}+b^2 \] reduces to \(-F_{\mu\nu}F^{\mu\nu}\) if \(F_{\mu\nu}\to 0\) but for larger values of the electromagnetic field strength, it develops some non-linear terms that prevent the electron from shrinking to \(r_e\to 0\) or exploding to \(r_e\to\infty\). There are infinitely many nonlinear deformations of classical electrodynamics one could think of. But remarkably enough, the Born-Infeld action is exactly what was derived in the 1990s as the only right answer for the electromagnetic fields on D-branes in string theory. This fact showed that Born and Infeld had a remarkably good intuition for the "right equations". Everyone who finds an equation that later turns out to be "unique" by insights of string theory may count as a visionary of a sort.

Renormalization: how the attempts to regularize the electron became obsolete
If the electron radius were exactly zero, its electrostatic interaction energy would be infinite. This is true even in classical (non-quantum) physics. However, these infinities arising from short distances – and \(r_e\to 0\) is an example of short distances – reappear in quantum physics all the time. While the Born-Infeld action was an attempt to get rid of the infinities in classical physics, many attempts to remove analogous divergences have been made in quantum physics in general – and quantum field theory in particular. The first image, (a), is the leading correction to the electron's self-energy. It's a Feynman diagram most directly corresponding to the classical electron's self-interaction, electrostatic energy.
Because of this diagram, the electron mass (and other quantities) is modified by an infinite amount. The infinity arises from the part of integrals in which the two interaction points (events in spacetime from the Feynman diagram) are very close to each other, i.e. \(x\to 0\) or, equivalently, in which the loop momentum \(p\to\infty\). This region of the integration variables is known as the ultraviolet (UV, short-distance) region. The other two diagrams – and many others – contribute their own UV divergences to the dynamics, too. However, starting from the 1940s, people learned how to subtract these UV divergences. Even though the individual terms may be infinite, when all of the terms are properly summed and a finite number of "genuinely physical" parameters is set to their measured values, all the infinities cancel and the predictions for all "genuinely physical" quantities will be finite. This is the magic of the renormalization.

Renormalization has been emotionally frustrating to various people – including giants such as Paul Dirac – but this dissatisfaction was always irrational. What's important in science is that one has a well-defined procedure to extract the physical predictions and these predictions agree with the observations. QED and other quantum field theories supplemented with the renormalization technique can't get a different grade than A, at least in the subject of science. They could get a D or worse from philosophy or emotions but a bad grade from philosophy may often be a reason to boast. Ken Wilson's concept of the renormalization group from the 1970s gave us a new way to understand why renormalization worked. Before Wilson, people would think it was necessary to imagine that the electron had to be exactly point-like and the subtraction of the infinities – and they had to be strict infinities – was an essential part of the game. However, after Wilson, people would interpret the renormalization differently. They would say that they work with an "effective theory" that is able to predict all sufficiently long-distance, low-energy processes. This effective theory doesn't force you to believe that the electron is exactly point-like. Instead, you may imagine that it has a nonzero size and its inner architecture may be "pretty much anything". The effective theory allows you to be agnostic about the inner architecture of the electron. It allows you to prove that whatever the internal structure of particles is, the predictions for all the long-distance phenomena will only depend on a finite number of constants such as \(m_e,m_\gamma,e\) – which may be calculated as functions of the internal architecture of the particles. But one may prove that all the other details about the internal structure will make no impact on the long-distance, low-energy observables! We say that the long-distance, low-energy predictions only depend on the internal architecture of particles through a finite number of constants such as \(m_e,m_\gamma,e\). To say the least, it's a way of thinking about the divergences that makes the whole process of renormalization more acceptable, more philosophically pleasing. The divergent terms may be finite, after all. And while the calculations become most beautiful if the electron is strictly point-like, you're allowed to imagine it is not exactly point-like but you may prove that almost all the dependence on the messiness of a "finite-size electron" evaporates if you study long-distance, low-energy processes only.
According to the renormalization group's philosophy, infinities in renormalization that cancel are shortcuts for unknown large yet finite numbers whose detailed value is mostly irrelevant.

Going to high energies
Well, while it's perfectly enough for all questions that affect atomic physics, you may still want to know what is hiding inside the electron; you may want to go to high energies and short distances, either theoretically, or experimentally. When you look at the electron, it seems obvious that its size has to be much smaller than the classical electron radius, probably at least 2-3 orders of magnitude shorter than that. The Standard Model allows you to "create" electrons by the Dirac field \(\Psi(x,y,z,t)\) which depends on the point in spacetime – so it's apparently created at a single point only. And the interactions are perfectly local, too. In this sense, the electron is exactly point-like in the Standard Model – although the electron is obviously acting on objects in its vicinity in various ways as well. And we know that the Standard Model is a good theory for all distances longer than \(10^{-19}\,{\rm m}\) or so which implies that the internal structure of the electron can't be longer than that. So what is inside the electron according to the cutting-edge theories? In the 1970s, people proposed preons. Quarks and leptons could be composite particles much like protons and neutrons. It wouldn't be the first time that the "indivisible" particles of our time were divided into smaller pieces. In other words, it wouldn't be the most original idea about how to make further progress in physics. However, when one looks at the preon models, they don't seem to work well, they predict lots of new particles that don't exist according to the experiments, and they don't seem to be helpful to solve any open puzzles in physics. In other words, the evidence is now pretty strong that if you want to stay at the level of point-like quantum field theories, electrons are strictly point-like particles. It doesn't mean that electrons are strictly point-like in general, however. If you upgrade your physics toolkit to string theory, the only known (and quite possibly, the only mathematically possible) framework that goes beyond that of point-like-particle-based quantum field theory, the effective field theories derived from all the convincing and viable vacua of string theory will look at the electron as an exactly point-like particle. However, if you look at the electron with the string accuracy, it's still a string. In most vacua, the electron is a closed string although models where the electron is an open string exist, too. An electron as a compact brane is in principle possible as well but it is much more exotic and maybe impossible when all the known empirical constraints are imposed. The typical size of the string is of order \(10^{-34}\) meters although models where it's a few orders of magnitude longer also exist. At any rate, the size of the string hiding in the electron is incomparably shorter than the classical electron radius. You may interpret a string as a chain of "string bits". In this sense, a string is a composite system that has many internal degrees of freedom, much like atoms and molecules. However, the stringy compositeness has some advantages that allow us to circumvent problems of the preon models. I discussed them in the article Preons probably can't exist three months ago.
Because string theory suggests that the internal size of the "things inside the electron" is much shorter than the classical electron radius, you may rightfully conclude that the most modern research has led to the verdict that the classical electron radius isn't such an important length scale. You may calculate it from the electron mass and the elementary charge; however, nothing too special is happening at the distance scale comparable to the classical electron radius. Instead, the size of "things inside the electron" may be much shorter than the classical electron radius and the electron emerges as a rather light particle because its interaction with the mass-giving Higgs field is rather weak – and because the Higgs condensate is rather small, too (the latter fact seems to be "unlikely" from a generic short-distance viewpoint: this mystery is known as the hierarchy problem).

Long distances: everything is clear
I think it's appropriate to emphasize once again that all these ambitious questions about the internal structure of the electron make pretty much no impact on the behavior of the electron in atoms and other long-distance situations. If we know that there is one electron in a state, this electron is fully described by its momentum (probabilistically, by a wave function) or position (by another wave function) and by its spin (one qubit of quantum information). And yes, the electron is spinning, after all. The transformation of the spin degree of freedom to another basis – a basis connected with a different axis than the \(z\)-axis – is completely understood and dictated by the group theory applied to the group of rotations. Many people tend to be misguided about this point. They think that if they "deform" the electron by equipping it with some non-linear terms or by seeing its internal stringy or preon-based structure or by acknowledging the fields around the electron, the states of the electron will deviate from the simple Hilbert space whose basis is given by the states \(\ket{\vec p,\lambda}\) where \(\lambda\) is "up" or "down". But this isn't possible. Even if you incorporate all the facts about the electron's structure, its interactions with all other fields, preons or (more likely) strings that may be hiding inside, as well as higher-order interaction terms that we neglect in the Standard Model, it's still exactly true that what I wrote is the basis of the electron's Hilbert space. This claim follows from the spacetime symmetries – and the basic, totally well-established facts about the electron such as \(J=1/2\). The symmetries are not only beautiful but, as the experiments show, they hold in Nature. Your full theory – including all the corrections and subtleties – must conform to them. The behavior of the electron at long enough (atomic and longer) distances is described by the Dirac equation. When the speed of the electron is much smaller than the speed of light, you may simplify the Dirac equation to the Pauli equation which is nothing else than the non-relativistic Schrödinger's equation with an extra qubit, two-fold degeneracy for the spin (but the operator \(\vec S\) doesn't enter the Hamiltonian, at least not in the leading approximation in which the spin-orbit and other relativistic interactions are neglected). Well, the electron also acts as a tiny magnet whose magnetic moment is a particular multiple of the spin, \(\vec\mu\sim \vec S\): we have to add \(-\vec\mu\cdot \vec B\) to the Hamiltonian.
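For concreteness, the low-energy description in the last paragraph is the standard Pauli Hamiltonian (sketched here with the usual conventions; \(q=-e\) for the electron and \(g\) is the gyromagnetic factor):

\[
H=\frac{(\vec p - q\vec A)^2}{2m_e}+V(\vec r)-\vec\mu\cdot\vec B,
\qquad
\vec\mu = g\,\frac{q}{2m_e}\,\vec S,\quad g\approx 2,
\]

with \(g\approx 2\) being exactly the "twice the classical expectation" coefficient discussed next, shifted by roughly 0.1% by QED loop corrections.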
The coefficient \(\vec \mu/\vec S\) may be calculated from the Dirac equation, up to the 0.1% corrections of loop processes in Quantum Electrodynamics. The magnitude of the Dirac-equation-calculable magnetic moment is twice as large as what we would expect from a classical "spinning charge/current" of the same magnitude. The electron may hide lots of wonderful new structure inside. However, the particle's behavior in the atoms is independent of these not-yet-settled mysteries. It's both good news and bad news. It's good news because our understanding of atoms and similar, relatively long-distance physical situations may be rather complete despite the incompleteness of our understanding of the internal structure. It's bad news for the same reason: the observations of the atomic and other phenomena can't tell us anything about the very short-distance physics even though we would love to learn about it.

snail feedback (21):

nice post

Nice! BTW - once in a high school exam we were asked to list the properties of electrons. One of my classmates wrote that they were purple.

How big is an electron in E8?

Dear jitter, there can't be any "electron in E8". Groups like E8 refer to grand unification but E8 itself isn't one of the groups ready for grand unification because it admits no complex representations. Models that start with E8 anyway but lead to viable physical predictions are solutions of string theory, e.g. the heterotic string theory, where the electron is a closed string slightly larger than the Planck scale.

LOL. It reminds me of the math jokes here:

I have a vague memory of the first article I ever read on E8 referring to a string that end

Thanks, Lubos, for this short tutorial on my old friend, the electron. My formal physics education ended fifty years ago but this blog merges perfectly with my own experiences of the electron. I love it!

Dear Lubos, If the fine structure of the electron looks like a string, you will agree that the LHC is a kind of sophisticated hammer; then, speaking figuratively, don't you think young physicists should also learn about piano tuning and not only play with 12-knot ropes to "measure" the electron's size or dimensions ;-) ? To balance this point of view I would add the following safeguard mantra: "No brainwashing by any Pythagoras!" To end with, I hope you will appreciate the following quote from the French mathematician Gilles Godefroy: "An electron is more difficult to understand than the diagonal of a square."

"What's important in science is that one has a well-defined procedure to extract the physical predictions and these predictions agree with the observations." I've heard it said the Ptolemaic model of the solar system, using epicycles, while less elegant (parsimonious) than the Copernican model, made similar predictions. I gather that is not entirely

Protons are not far from "purple" :-)

At least 3 of the jokes there are excellent, and I hadn't seen them before. :-)

electrons are lilac

This is another very nice post. I can just about understand the renormalization problem/solution that occurs here but I haven't really got to grips with 't Hooft's famous solution to renormalization in non-abelian gauge theories: Renormalizable Lagrangians for Massive Yang-Mills Fields (1971). Also, I'm slightly confused about recent 'quasi-particle' models of the electron which claim it can be 'split'.

Dear Lubos, what about http://xxx.lanl.gov/pdf/1301.6971.pdf instead of the Higgs mechanism to generate electron's masses?
Fred - A friend of mine, answering the same question, said that the electron "isn't really all that negative." :-)

Thanks for this very nice article Lumo, this is exactly what I need to make me happily relax after a long day :-) Ha ha, the story about the Born-Infeld model is fun, that it pops up again in string theory :-D

A similar, but longer joke: A hydrogen atom says to the bartender, "Hey buddy, have you seen an electron around here? I seem to have lost mine." "Are you sure you lost it?" the bartender asks. And the hydrogen atom answers, "I'm positive!"

So what is charge exactly? A charge is a quantized scalar quantity of both allowed signs associated with internal transformations of fields.

Dear Lubos, I think that renormalisation is one of the most interesting concepts in theoretical physics. And a really good example to introduce this concept is to consider the Ising model on a triangular lattice.
{"url":"http://motls.blogspot.com/2013/01/evolving-portrait-of-electron.html","timestamp":"2014-04-20T16:18:11Z","content_type":null,"content_length":"237303","record_id":"<urn:uuid:d8274b4d-1b2d-4725-9fce-d7a1cdbcc57d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
E(x) of exponential random variable, joint density, and elevators +cov of multinomial
April 28th 2009, 08:28 PM

So, I've been working on a bonus review for a few hours now and have hit places where I'm not sure and places where I don't know where to start. I don't need help on all of these but it's simpler if I just post the whole pdf: Beef Supreme - Now with cheese!

#3b I can't figure out... it's a classic problem that I've done before, only with n people, not n-separated-by-gender. If it was just n people, I'd do $1 - \frac{(365)(364)\cdots(365-n+1)}{365^n}$ but I don't know how to separate that population by gender.

#5 I have no idea where to start. I don't know how I'll end up with a factorial.

#6a I used a double integral, the outer between 0 and 1, the inner 0 and x, f(x,y)dydx... with first-time attempted LaTeX reproduction: $\int_0^1 \int_0^x f(x,y) \, dy\,dx$ and ended up with $\frac{15}{56}$. Are those the correct bounds?

#6b I figured I had to find the marginal density of y first, so $\int_0^2 \frac{6}{14} (2x^2 + xy) \, dx$ gave me $\frac{16}{7} + \frac{6}{7} y$ for $0<y<2$. Now I think I have to take a (double?) integral of this to find the expected value, but I don't know what formula or bounds to start with.

#7 I can't wrap my brain around it at all. I was thinking along the lines of E(x) would simply be trials*probability, $10 \cdot \frac{1}{5}$, but there's no way it could be that simple, and E(X) = 2 doesn't sound right.

#8 I should note there's a typo here: the cov(X1, X2) should be equal to -np1p2 and not -mp1p2. I've studied my 'Lecture 18' notes, and can't get any clarity out of it.
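(A quick symbolic check of #6b — a sketch; it assumes the joint density implied by the integrand above, f(x,y) = (6/14)(2x² + xy). Note that integrating x over (0, 2) as in the post gives a "marginal" that does not integrate to 1 over 0 < y < 2, whereas x over (0, 1) does, so the support of x in the original problem is worth re-checking. The expected value then needs only a single integral.)

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Rational(6, 14) * (2*x**2 + x*y)   # joint density as used in #6b

fY_02 = sp.integrate(f, (x, 0, 2))        # 6*y/7 + 16/7, as in the post
fY_01 = sp.integrate(f, (x, 0, 1))        # 3*y/14 + 2/7

print(sp.integrate(fY_02, (y, 0, 2)))     # 44/7 -> not a valid density
print(sp.integrate(fY_01, (y, 0, 2)))     # 1    -> x in (0, 1) normalizes correctly

# E[Y] is a single integral of y times the marginal:
print(sp.integrate(y * fY_01, (y, 0, 2))) # 8/7
```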
{"url":"http://mathhelpforum.com/advanced-statistics/86351-e-x-exponential-random-variable-joint-density-elevators-cov-multinomial-print.html","timestamp":"2014-04-17T09:44:04Z","content_type":null,"content_length":"6285","record_id":"<urn:uuid:ce22fb53-3d65-494a-af7e-7b6af0047009>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
December 4th 2007, 04:43 PM #1

If 30% of commuters ride to work on a bus, find the probability that if 8 workers are selected at random, 3 will ride the bus. There are several questions like this on my assignment and I have had no luck in figuring out how to work them after several hours of looking through class notes and on the internet. If someone could solve this and show how they came to the answer, I would appreciate it very much and hopefully be able to complete the rest of the assignment on my own.

December 4th 2007, 04:57 PM #2

Quote: If 30% of commuters ride to work on a bus, find the probability that if 8 workers are selected at random, 3 will ride the bus.

this is a binomial distribution problem. we have independent trials, each can be classified as a success or failure. let success be that the person does ride the bus. so the probability of success is 0.3, which means the probability of failure is 0.7. we have 8 trials here and we want 3 successes, so

$P(k) = {n \choose k} p^kq^{n - k}$

where $n$ is the number of trials, $p$ is the probability of success, $q$ is the probability of failure, $k$ is the number of successes, and $P(k)$ is the probability of $k$ successes. so you want $P(3)$
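(Plugging the numbers into that formula — a two-line check, not part of the thread:)

```python
from math import comb

n, k, p = 8, 3, 0.3
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(prob)  # C(8,3) * 0.3^3 * 0.7^5 ~= 0.2541
```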
{"url":"http://mathhelpforum.com/statistics/24161-probability.html","timestamp":"2014-04-23T18:06:34Z","content_type":null,"content_length":"34656","record_id":"<urn:uuid:63a85e46-c2db-4273-912c-be8f3d378476>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Sugar Land Algebra 2 Tutor Find a Sugar Land Algebra 2 Tutor ...I like to try different approaches to solving problems, because not everyone learns in the same way. I try to find the best approach to help each student understand the concepts, while having the patience and understanding to get the struggling students through the class. I worked for many years as an Engineer for a top company in Semiconductor Design. 18 Subjects: including algebra 2, calculus, geometry, ASVAB ...I am also currently working on completing my Master's Degree in Educational Leadership thru Lamar University. One of the best parts of teaching was tutoring the students before and after school because I could work one-on-one with them and really help them understand the material. If a student is willing to learn and work hard, then I can teach them. 15 Subjects: including algebra 2, Spanish, reading, geometry ...I studied and worked in Japan for 15 years and have a great appreciation for the challenges that native English speakers face when learning Japanese! I started studying Japanese as an undergraduate at Fort Lewis College in Durango, Colorado. Upon graduation, I moved to Osaka to study at Kansai Gaidai. 7 Subjects: including algebra 2, statistics, algebra 1, ACT Math ...If not, I show him/her what was done wrong, and have the student work another example. Mathematics: I have studied mathematics through calculus, differential equations, and partial differential equations in obtaining my BA and BS degrees in chemical engineering from Rice University. I have assist... 11 Subjects: including algebra 2, English, chemistry, geometry I have taught calculus and analytic geometry at the U. S. Air Force Academy for 7 years. 11 Subjects: including algebra 2, calculus, geometry, statistics
{"url":"http://www.purplemath.com/Sugar_Land_Algebra_2_tutors.php","timestamp":"2014-04-18T11:41:49Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:c4ef787e-bbc0-480c-9477-9c759fa0a826>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Yarr Maties Welcome to techInterview, a site for technical interview questions, brain teasers, puzzles, quizzles (whatever the heck those are) and other things that make you think!

Five pirates discover a chest full of 100 gold coins. The pirates are ranked by their years of service, Pirate 5 having five years of service, Pirate 4 four years, and so on down to Pirate 1 with only one year of deck scrubbing under his belt. To divide up the loot, they agree on the following: The most senior pirate will propose a distribution of the booty. All pirates will then vote, including the most senior pirate, and if at least 50% of the pirates on board accept the proposal, the gold is divided as proposed. If not, the most senior pirate is forced to walk the plank and sink to Davy Jones' locker. Then the process starts over with the next most senior pirate until a plan is approved.

These pirates are not your ordinary swashbucklers. Besides their democratic leanings, they are also perfectly rational and know exactly how the others will vote in every situation. Emotions play no part in their decisions. Their preference is first to remain alive, and next to get as much gold as possible and finally, if given a choice between otherwise equal outcomes, to have fewer pirates on the boat. The most senior pirate thinks for a moment and then proposes a plan that maximizes his gold, and which he knows the others will accept. How does he divide up the coins? What plan would the most senior pirate propose on a boat full of 15 pirates?

Start with two pirates: Pirate 2's own vote already makes up 50% of the two votes, so he proposes keeping all 100 coins for himself and Pirate 1 receives nothing.

If there were three pirates, Pirate 3 needs one other person to vote for his plan. The trick to this puzzle is understanding that if Pirate 3's plan is voted down, he would die and then there would be only two pirates on the boat. We already figured out what happens when there are only two pirates on the boat. In the case of two pirates, Pirate 1 receives nothing. So Pirate 3 can simply offer Pirate 1 a single gold coin and ensure his vote. As a perfectly rational pirate knows, one coin is better than no coins at all!

If there were four pirates, Pirate 4 needs to convince one other person to guarantee 50% of the vote. He could give Pirate 1 two gold coins, but his greed makes him realize that if his plan is scuttled, there will only be three pirates on the boat. When there are three pirates left, Pirate 2 knows he will get nothing; so Pirate 4 buys Pirate 2's vote with one gold coin. Finally when there are five pirates, Pirate 5 needs two other cohorts. He realizes that if he dies, Pirate 1 and Pirate 3 will get zero gold. So he offers each of them one doubloon and makes off with the other 98 pieces o' eight.

The pattern should be evident now. When 15 pirates are on board, Pirate 15 needs 7 other people to vote for him. He recruits pirates 13, 11, 9, 7, 5, 3 and 1 with one coin each, leaving 93 coins for Pirate 15. Those pirates will all vote for Pirate 15's plan because if they don't, they'll be stuck with Pirate 14's plan in which they all get nothing.
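(The backward induction above is mechanical enough to automate. A short sketch — ours, not from the site — that reproduces both answers; it relies on the fact that, for these sizes, there are always enough zero-coin pirates in the previous round to bribe:)

```python
import math

def pirate_split(n, coins=100):
    """outcome[j] = coins pirate j receives (index 0 = most junior)."""
    outcome = [coins]                       # a lone pirate keeps the chest
    for m in range(2, n + 1):
        votes_needed = math.ceil(m / 2)     # the proposer's own vote counts
        bribes = votes_needed - 1
        new = [0] * m
        # buy the votes of pirates who would get nothing if the proposer dies
        zeros = [j for j, c in enumerate(outcome) if c == 0]
        for j in zeros[:bribes]:
            new[j] = 1
        new[m - 1] = coins - bribes         # senior pirate keeps the rest
        outcome = new
    return outcome

print(pirate_split(5))    # [1, 0, 1, 0, 98]
print(pirate_split(15))   # pirates 1,3,...,13 get one coin; Pirate 15 keeps 93
```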
{"url":"http://www.techinterview.org/post/465673557/yarr-maties","timestamp":"2014-04-16T16:00:41Z","content_type":null,"content_length":"43532","record_id":"<urn:uuid:b7960e68-8721-4772-b72d-3a198d958917>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
John Edward Aloysius Steggall
Born: 19 November 1855 in London, England
Died: 26 November 1935 in Dundee, Scotland

John Steggall's father was J W B Steggall M.R.C.S., a Member of the Royal College of Surgeons of England. John was educated in London and attended the City of London School. He entered Trinity College, Cambridge, in 1874 and there he studied the Mathematical Tripos. He graduated in 1878 as second Wrangler (meaning that he was ranked second in the list of those obtaining First Class degrees). The Senior Wrangler in that year was Ernest Hobson but in the competition for the Smith's Prize it was Steggall who came out as the top candidate ahead of Hobson. For someone with this high level of attainment at Cambridge the natural next step would have been to have applied for a fellowship. Hobson won a fellowship but these were only open to those who did not marry and Steggall was about to marry Isabella Katherine Fraser of Rowmore House, Gareloch, near Helensburgh. They married in 1878. In 1890 The College, a Cambridge student magazine, contained the comment:-

[Steggall] did not proceed to the examination for the fellowship, having gone in for a fellowship of a more lasting kind. It is to be regretted for his sake that the regulations allowing married men to hold fellowships did not come into force until after his time.

John and Isabella had three children, one son and two daughters. The son was killed in action in World War I. After graduating Steggall became an Assistant Master at Clifton College, Bristol. He held this position during 1878-1879 and then from 1880 he was Fielden lecturer at Owens College, Manchester (this later became the University of Manchester). In 1881 University College, Dundee, was set up with Mary Ann Baxter as its principal benefactor. She required it to be for:-

... promoting the education of persons of both sexes and the study of Science, Literature and the Fine Arts.

In many ways it was modelled on Owens College, Manchester and advice had been sought from Owens College. Steggall applied to become the first Principal of University College, Dundee. He failed to secure this position, but instead was offered the chair of Mathematics and Natural Philosophy. He accepted, and went to Dundee in 1883 to become the first professor of mathematics at the new College. In 1895 University College decided that a separate department of physics was required and J P Keunen was appointed. Steggall then became Professor of Pure and Applied Mathematics, a title that he held until he chose to retire in 1933 having been a professor for 50 years. Two years after he became Professor of Pure and Applied Mathematics, University College Dundee became part of the University of St Andrews and so from 1897 on Steggall became an employee of the University of St Andrews. It was while he was at Owens College, Manchester, that Steggall published London University Pure Mathematics Questions and Solutions which gave the University of London examination questions from 1877 to 1881 together with Steggall's solutions. It was a useful work, for when he went to University College, Dundee, for some years his students took the University of London external examinations since University College had no power to award degrees.
In the Preface to the book he explained his reasons for writing it:- The main object of this book is to afford help to those students who have to read mathematics for the BA and BSc degrees of the University of London without the aid of a private tutor. Steggall had little time for research when he arrived in Dundee. He taught his first class at 8 a.m. every morning Monday to Saturday (inclusive). On many days he taught his final lecture from 9 p.m. to 10 p.m. He had to give 16 hours of lectures per week, two tutorials, and (with the help of one laboratory assistant) had to supervise the laboratory which was open 45 hours per week. Of course on top of all this teaching he was responsible for setting and marking examinations, and for procuring equipment for the laboratory. It was due to the very heavy workload that eventually physics was split off to become a separate subject in 1895. William Peddie said of him:- The generation of British mathematics to which Steggall belonged delighted in proposing and working out problems whose solutions might require the aid of any branch of pure or applied He was an enthusiastic member of the Edinburgh Mathematical Society which, by coincidence, was founded in the year that Steggall arrived in Scotland to take up his professorship in Dundee. Comments by Muir in his presidential address to the Society in 1884 suggest that he resented the Englishman Steggall being appointed to a Scottish professorship but Steggall fitted in well with the work of the new Society and was elected its tenth president in 1891-92. He was also president in 1924 and 1929. Turnbull was the only other who was President three times. Steggall ordered mathematical models from a German company in the early 1900s to use in his teaching. His models are still owned by the University of Dundee. They include models of: a surface of rotation of the tractrix; a surface of rotation of Steiner's Roman surface, an ellipsoid; a helicoid; a hyperbolic paraboloid; a catenoid; a Riemann surface with branch point of order two; a triply-connected Riemann surface; a hyperbolic paraboloid; a torus; and a hyperboloid of revolution with ellipsoid cross-section. Although his teaching load was heavy, he did find time to write a few research papers. His research interests were in the theory of numbers and in kinematical geometry, particularly the geometry of the triangle. He also published articles on teaching mathematics such as the following examples which all appeared in the Mathematical Gazette: On practical mathematics in schools (1914); Voting in theory and practice (1929); and The neglect of arithmetic in schools (1935). He gave lectures such as Teaching of Mathematics and Physics in 1898 in Glasgow; Education and Machinery in 1905 to the Ruskin Society; A Pioneer in Hydraulics: Mark Beaufoy in 1908 to the Dundee Society of Engineers; and Lectures on Astronomy. Steggall was an important influence on university life:- He remained an exceptional examiner who maintained an alertness and freshness of outlook to the end. He was a central figure at the college in his time, participating in all aspects of University life. The college magazine was in great praise of his attendance at Student Society meetings and he was a popular Honorary President of the Society for some years. His position and sense of duty made him an important figure to the students. 
Hilary Mason writes [3]:-

Steggall in many ways is an outstanding example of the men of his time who believed passionately in reform and who sought to improve the lives of the poor by educational reform and by provision of better housing and medical care. Steggall was active in many areas: Dundee Social Union, Dundee School Board, and the Episcopal Church. He along with many others deserves to be remembered for his contribution to mathematics, not principally through any contribution of original work but through educational reform. Men like him made it possible for many of us to study when in the past universities would have been closed to us. They also made mathematics easier and more enjoyable to study.

Steggall's many other interests included photography, woodwork and cycling. His enthusiasm as a cyclist can be realised by the fact that at the age of 65 he rode his bicycle 500 miles to a British Association meeting in Cardiff. He was a member of the Scottish Arts Club and the Scottish Mountaineering Club. His collection of more than 2000 photographs can be seen at the University of St Andrews.

Article by: J J O'Connor and E F Robertson (JOC/EFR © November 2006, School of Mathematics and Statistics, University of St Andrews, Scotland)
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Steggall.html","timestamp":"2014-04-18T03:12:06Z","content_type":null,"content_length":"17922","record_id":"<urn:uuid:d899d0c6-2538-4b39-95a5-22b8fa3ce82a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Best book for learning sensor fusion, specifically regarding IMU and GPS integration.

I have posted this in MathOverflow because the subject is primarily Math related. I have a requirement of building an Inertial Measurement Unit (IMU) from the following sensors:
• Accelerometer
• Gyroscope
• Magnetometer
I must integrate this data to derive the attitude of the sensor platform and the external forces involved (e.g. subtract tilt from linear acceleration). I must then use this information to complement a standard GPS unit to provide measurements of higher consistency than can be provided by GPS alone. I do understand the basic requirements of this problem:
• Integrate sensors. (To cancel noise, subtract acceleration.)
• Remove noise. (Kalman filter)
• Integrate IMU measurement into GPS.
Whilst there are various libraries currently around that would do this for me (http://code.google.com/p/sf9domahrs/) I need to understand the mechanisms involved to a level where I am able to explain the techniques to other individuals after I have implemented the solution. I have been looking at the following resources, but I am unsure which I should go for... I need something covering Sensor Fusion, Filtering, IMU, Integration. I hope someone experienced in this area can provide any recommendations. Many thanks.

3 Answers:

I don't know if it's the best book, but T&DA by Bar-Shalom is considered a standard reference for problems of this kind, AFAIK. http://www.amazon.com/Tracking-data-association-Yaakov-Bar-Shalom/dp/B0006YT0HY

I'd recommend "Applied Optimal Estimation", edited by Arthur Gelb. It won't answer all your questions, but I think it will help quite a bit.

If you want to learn the mathematical theory of sensor fusion I strongly recommend you invent it. There are good engineering books, including the ones cited in the answers, on several different aspects of sensor fusion. A mathematical theory doesn't exist yet.
Comment: Really, right on! And I'll help! My email address is in my profile! – drbobmeister Apr 5 '12 at 3:57
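(For readers unfamiliar with the Kalman step the question mentions, here is the simplest possible version — a generic one-dimensional measurement update, our sketch rather than anything from the recommended books:)

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: estimate x with variance P,
    measurement z with noise variance R (direct observation of the state)."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x = x + K * (z - x)      # blend prediction with measurement
    P = (1 - K) * P          # uncertainty shrinks after the update
    return x, P

# e.g. fuse an IMU-propagated position estimate with a (noisier) GPS fix:
x, P = kalman_update(x=10.2, P=4.0, z=9.5, R=25.0)
```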
{"url":"http://mathoverflow.net/questions/93003/best-book-for-learning-sensor-fusion-specifically-regarding-imu-and-gps-integra/93011","timestamp":"2014-04-19T07:59:44Z","content_type":null,"content_length":"54616","record_id":"<urn:uuid:ee280bad-49cd-4d14-907c-19a9083fa673>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Home Libraries People FAQ More This concept describes how to define a ODE that can be solved by an implicit routine. Implicit routines need not only the function f(x,t) but also the Jacobian df/dx = A(x,t). A is a matrix and implicit routines need to solve the linear problem Ax = b. In odeint this is implemented with use of Boost.uBLAS, therefore, the state_type implicit routines is ublas::vector and the matrix is defined as ublas::matrix. A type that is a model of Implicit System A type representing the time of the ODE An object of type System Object of type ublas::vector Object of type ublas::vector Object of type ublas::matrix Object of type Time Name Expression Type Semantics Calculate dx/dt := f(x,t) sys.first( x , dxdt , t ) void Calculates f(x,t), the result is stored into dxdt Calculate A := df/dx (x,t) sys.second( x , jacobi , t ) void Calculates the Jacobian of f at x,t, the result is stored into jacobi
{"url":"http://www.boost.org/doc/libs/1_55_0/libs/numeric/odeint/doc/html/boost_numeric_odeint/concepts/implicit_system.html","timestamp":"2014-04-20T22:01:00Z","content_type":null,"content_length":"11007","record_id":"<urn:uuid:944ba985-39c1-449f-8bec-938655cd001c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Differential Equation Spring System!!! June 9th 2009, 04:00 AM #1 Junior Member Mar 2009 Differential Equation Spring System!!! Hi, Please help me out with the following problem: Consider the double mass spring system shown in the figure below. The positions I have to find x_1 and x_2 solution... First I found the eigen values which are 16 and 24. Then found the respective eigenvectors: for lambda = -9 v1 = $\begin{pmatrix}<br /> {1}\\ <br /> {1}<br /> \end{pmatrix}$ for lambda = -25 v2 = $\begin{pmatrix}<br /> {-1}\\ <br /> {1}<br /> \end{pmatrix}$ After that I am stuck... Please help... Many thanks... Last edited by althaemenes; June 9th 2009 at 12:32 PM. Reason: miscalculation Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/differential-equations/92292-differential-equation-spring-system.html","timestamp":"2014-04-23T18:20:49Z","content_type":null,"content_length":"32659","record_id":"<urn:uuid:03ae833a-244e-49ae-bb25-1fe37440d9a1>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find the area bounded by the given curves using integration, x^2 =2ay , y=2a Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f370f6fe4b0fc0c1a0d2c1a","timestamp":"2014-04-19T19:38:06Z","content_type":null,"content_length":"312726","record_id":"<urn:uuid:94065dfa-8ee5-4ac8-99eb-595753fc20c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
Computing the hopf invariant (without integration or homology, as in Milnor) of the hopf map up vote 2 down vote favorite In exercise 15 of Milnor's Topology from a Differentiable Viewpoint, one is asked to compute the Hopf invariant of the Hopf map. The way one is supposed to do this is to compute the linking number of two of the fibres, but Milnor doesn't define the linking number in terms of an integral. He says to compute it as the degree of the map $\frac{x-y}{||x-y||}$ from the product of two compact oriented boundaryless manifolds embedded in $\mathbf{R}^{k+1}$ to the sphere of dimension $k$ where the sum of the dimension of the manifolds is $k$. I'm aware of other ways to compute the Hopf invariant by using deRham cohomology (see Bott and Tu, for instance), but I'm curious how one is actually supposed to do it by hand. Is there a particularly concrete way to compute the linking number without using this other machinery? Most of the other exercises in the book have cute little solutions, but is that true of this problem? (Not homework!!) The degree of a map $f : M \to N$ provided $M$ and $N$ are compact, orientable and of the same dimension is given by an integral. Think about $\int_M f^* \omega$, provided $\int_N \omega = 1$. – Ryan Budney Oct 17 '10 at 23:45 Sure, but integration is not covered in the book, and all of the other exercises only use material covered in the book. – Harry Gindi Oct 17 '10 at 23:59 1 This is not research level. I voted to close. – Andy Putman Oct 18 '10 at 0:35 2 People have asked problems from Atiyah-MacDonald here before, and this is certainly more research-level than those. – Harry Gindi Oct 18 '10 at 1:08 Atiyah-MacDonald and Milnor's "Topology from the Differentiable Viewpoint" are at similar levels (1st year grad), though Milnor is maybe a little easier. However, AM contains a couple of exercises that are notoriously difficult (even for experts) and thus are borderline appropriate. Milnor does not, and what you asked is absolutely standard 1st year graduate topology. – Andy Putman Oct 18 '10 at 3:26 show 4 more comments 1 Answer active oldest votes If you have the Hopf link embedded in some standard way in $\mathbb{R}^3$, you can see the linking number as given by the degree of a map $S^1 \times S^1 \to S^2$ in a number of ways. For instance, the pre-image of the north pole in $S^2$ consists of pairs of points stacked vertically above each other, i.e., crossings between the two components in the knot diagram given by up vote 3 projection to the $xy$ plane. (Crossings will correspond to preimages of the north pole or south pole, depending on your conventions.) For the standard diagram for the Hopf link, there's down vote only one crossing that counts. The hard part from this point of view is getting the orientation right (is the Hopf invariant $-1$ or $+1$?), but that can be done with care and attention. what's a standard way to embed the hopf link in $\mathbf{R}^3$? – Harry Gindi Oct 18 '10 at 0:06 2 Two circles in orthogonal planes, the center of one circle being a point of the other. – Ryan Budney Oct 18 '10 at 0:08 add comment Not the answer you're looking for? Browse other questions tagged differential-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/42557/computing-the-hopf-invariant-without-integration-or-homology-as-in-milnor-of/42559","timestamp":"2014-04-20T06:36:07Z","content_type":null,"content_length":"60052","record_id":"<urn:uuid:4e9f4eee-19f9-4362-babc-7f299023587c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient Processing of Vague Queries using a Data Stream Approach - ACM Transactions on Information Systems , 1994 "... We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. Here tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression ..." Cited by 173 (30 self) Add to MetaCart We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. Here tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression always confirm to the underlying probabilistic model. We also show for which expressions extensional semantics yields the same results. Furthermore, we discuss complexity issues and indicate possibilities for optimization. With regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modelled. As an important extension, we introduce the concept of vague predicates which yields a probabilistic weight instead of a Boolean value, thus allowing for queries with vague selection conditions. So PRA implements uncertainty and vagueness in combination with the... - Journal of the American Society for Information Science , 1999 "... In the logical approach to information retrieval (IR), retrieval is considered as uncertain inference. ..." , 1995 "... The task of an information retrieval system is to identify documents that will satisfy a user's information need. Effective fulfillment of this task has long been an active area of research, leading to sophisticated retrieval models for representing information content in documents and queries and m ..." Cited by 19 (0 self) Add to MetaCart The task of an information retrieval system is to identify documents that will satisfy a user's information need. Effective fulfillment of this task has long been an active area of research, leading to sophisticated retrieval models for representing information content in documents and queries and measuring similarity between the two. The maturity and proven effectiveness of these systems has resulted in demand for increased capacity, performance, scalability, and functionality, especially as information retrieval is integrated into more traditional database management environments. In this dissertation we explore a number of functionality and performance issues in information retrieval. First, we consider creation and modification of the document collection, concentrating on management of the inverted file index. An inverted file architecture based on a persistent object store is described and experimental results are presented for inverted file creation and modification. Our architecture provides performance that scales well with document collection size and the database features supported by the persistent object store provide many solutions to issues that arise during integration of information retrieval into more general database environments. We then turn to query evaluation speed and introduce a new optimization technique for statistical ranking retrieval systems that support structured queries. Experimental results from a variety of query sets show that execution time can be reduced by more than 50% wit... 
- In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval , 1998 "... We describe the design and implementation of a system for logic-based multimedia retrieval. As highlevel logic for retrieval of hypermedia documents, we have developed a probabilistic object-oriented logic (POOL) which supports aggregated objects, different kinds of propositions (terms, classificati ..." Cited by 15 (8 self) Add to MetaCart We describe the design and implementation of a system for logic-based multimedia retrieval. As highlevel logic for retrieval of hypermedia documents, we have developed a probabilistic object-oriented logic (POOL) which supports aggregated objects, different kinds of propositions (terms, classifications and attributes) and even rules as being contained in objects. Based on a probabilistic four-valued logic, POOL uses an implicit open world assumption, allows for closed world assumptions and is able to deal with inconsistent knowledge. POOL programs and queries are translated into probabilistic Datalog programs which can be interpreted by the HySpirit inference engine. For storing the multimedia data, we have developed a new basic IR engine which yields physical data abstraction. The overall architecture and the flexibility of each layer supports logic-based methods for multimedia information retrieval. - In Proceedings of the 2nd World Conference on Integrated Design and Process Technology , 1996 "... The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in nonfirst ..." Cited by 10 (1 self) Add to MetaCart The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in nonfirst -normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for ... , 1996 "... We present a probabilistic data model which is based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. This way, imprecise attribute values are modelled as a probabilistic subrelation. ..." Cited by 10 (2 self) Add to MetaCart We present a probabilistic data model which is based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. This way, imprecise attribute values are modelled as a probabilistic subrelation.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1091718","timestamp":"2014-04-23T08:50:47Z","content_type":null,"content_length":"27427","record_id":"<urn:uuid:ae20428e-6189-417b-877b-77fbf02993f6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Noether charge in multisymplectic geometry P: n/a On Jun 7, 3:49 pm, ygor.geu...@gmail.com wrote: > Hi, > I'm currently looking for the mathematical foundation behind the > claim, often found in field theory/string theory books that the > noether charge associated to a symmetry of the lagrangian is the > generator of that symmetry, ie. its poisson bracket with a field from > the lagrangian, generates the change in the field. This theorem goes back to Hamiltonian mechanics. It generalizes to field theory in a straightforward way. Unfortunately, I haven't found an explicit demonstration of it in any of my classical mechanics The proof is not particularly complicated, so here it is. Consider a Lagrangian function L(v,x) defined on a configuration space with coordinates x^i. Let X^i be the coordinate components of the vector field that generates a symmetry of the Lagrangian. Then, there exists a conserved quantity I(v,x) = L,i X^i. The notation L,ij,klm will represent derivatives, with this example representing derivatives of L with respect to v^i, v^j, x^k, x^l and x^m. The velocity varies under the symmetry transformation as (' is time (x^i + eps X^i + ...)' = v^i + eps X^i,k v^k + ... . Invariance of the Lagrangian dictates L,,i X^i + L,i X^i,k v^k = 0 . (*) The Euler-Lagrange equations are (L,i)' = L,,i . It is easy to show that I(v,x) is a constant of motion: I' = (L,i)' X^i + L,i X^i,k v^k (**) = L,,i X^i + L,i X^i,k v^k = 0 by (*) . That's the first half of the theorem you wanted. The other half of the theorem states that the conserved quantity I, as a function on the phase space, will generate the phase space extension of the configuration space vector field X^i. The phase space is coordinatized by x^i and p_j = L,j. In terms of the the x^i, v^j coordinates, the phase space extension of X^i is X = X^i @x^i + X^i,k v^k @v^i , where @x^i and @v^i are the coordinate basis for vector fields. Using the chain rule, we express this basis in terms of the @x^i, @p_j [ @v^i ] [ L,ji 0 ] [ @p_j ] [ ] = [ ] [ ] . [ @x^i ] [ L,j,i 1 ] [ @x^i ] X = X^i ( L,j,i @p_j + @x^i) + X^i,k v^k L,ji @p_j = X^i @x^i + ( L,j,i X^i + L,ji X^i,k v^k ) @p_j = X^i @x^i + [ (L,,i X^i v^k + L,i X^i,k v^k),j - L,i X^i,j ] @p_j = X^i @x^i + [ 0 - L,i X^i,j ] @p_j by (**) = X^i @x^i - p_i X^i,j @p_j . Using the symplectic form, we can transform this vector field into a w = X^i dp_i - p_i X^i,j (-dx^j) = X^i dp_i + d(X^i) p_i = d(p_i X^i) = d(L,i X^i) = dI . In other words, the above demonstrates that X is precisely the Hamiltonian vector field generated by the phase space function I(p,x) = p_i X^i, which is presicely the same quantity as the quantity I(v,x) obtained in the first half of the calculation. Hope this helps.
{"url":"http://www.physicsforums.com/showthread.php?t=239264","timestamp":"2014-04-21T05:30:55Z","content_type":null,"content_length":"29464","record_id":"<urn:uuid:725a4038-dfaf-492a-8dba-e93fe5f2d9e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 22 - Acta Informatica , 1991 "... this paper we generalize the well-known Schonhage-Strassen algorithm for multiplying large integers to an algorithm for multiplying polynomials with coefficients from an arbitrary, not necessarily commutative, not necessarily associative, algebra A. Our main result is an algorithm to multiply polyno ..." Cited by 156 (6 self) Add to MetaCart this paper we generalize the well-known Schonhage-Strassen algorithm for multiplying large integers to an algorithm for multiplying polynomials with coefficients from an arbitrary, not necessarily commutative, not necessarily associative, algebra A. Our main result is an algorithm to multiply polynomials of degree ! n in - In Proceedings of the 39th Symposium on Foundations of Computer Science , 1998 "... In this paper we give a randomized O(n log n)-time algorithm for the string matching with don't cares problem. This improves the Fischer-Paterson bound [10] from 1974 and answers the open problem posed (among others) by Weiner [30] and Galil [11]. Using the same technique, we give an O(n log n)-t ..." Cited by 30 (5 self) Add to MetaCart In this paper we give a randomized O(n log n)-time algorithm for the string matching with don't cares problem. This improves the Fischer-Paterson bound [10] from 1974 and answers the open problem posed (among others) by Weiner [30] and Galil [11]. Using the same technique, we give an O(n log n)-time algorithm for other problems, including subset matching and tree pattern matching [15, 21, 9, 7, 17] and (general) approximate threshold matching [28, 17]. As this bound essentially matches the complexity of computing of the Fast Fourier Transform which is the only known technique for solving problems of this type, it is likely that the algorithms are in fact optimal. Additionally, the technique used for the threshold matching problem can be applied to the on-line version of this problem, in which we are allowed to preprocess the text and require to process the pattern in time sublinear in the text length. This result involves an interesting variant of the Karp-Rabin fingerprint m... - in Communications, Information and Network Security, V.Bhargava, H.V.Poor, V.Tarokh, and S.Yoon , 2002 "... A new algorithm is developed for decoding Reed-Solomon codes. It uses fast Fourier transforms and computes the message symbols directly without explicitly finding error locations or error magnitudes. In the decoding radius (up to half of the minimum distance), the new method is easily adapted for er ..." Cited by 16 (1 self) Add to MetaCart A new algorithm is developed for decoding Reed-Solomon codes. It uses fast Fourier transforms and computes the message symbols directly without explicitly finding error locations or error magnitudes. In the decoding radius (up to half of the minimum distance), the new method is easily adapted for error and erasure decoding. It can also detect all errors outside the decoding radius. Compared with the Berlekamp-Massey algorithm, discovered in the late 1960's, the new method seems simpler and more natural yet it has a similar time complexity. - J. SYMB. COMP , 2000 "... ..." , 1993 "... Interest in normal bases over finite fields stems both from mathematical theory and practical applications. There has been a lot of literature dealing with various properties of normal bases (for finite fields and for Galois extension of arbitrary fields). The advantage of using normal bases to repr ..." 
Cited by 9 (0 self) Add to MetaCart Interest in normal bases over finite fields stems both from mathematical theory and practical applications. There has been a lot of literature dealing with various properties of normal bases (for finite fields and for Galois extension of arbitrary fields). The advantage of using normal bases to represent finite fields was noted by Hensel in 1888. With the introduction of optimal normal bases, large finite fields, that can be used in secure and e#cient implementation of several cryptosystems, have recently been realized in hardware. The present thesis studies various theoretical and practical aspects of normal bases in finite fields. We first give some characterizations of normal bases. Then by using linear algebra, we prove that F q n has a basis over F q such that any element in F q represented in this basis generates a normal basis if and only if some groups of coordinates are not simultaneously zero. We show how to construct an irreducible polynomial of degree 2 n with linearly i... - In Security and Cryptography for Networks – SCN’2008, volume 5229 of Lecture Notes in Computer Science , 2008 "... Abstract. We improve our proposal of a new variant of the McEliece cryptosystem based on QC-LDPC codes. The original McEliece cryptosystem, based on Goppa codes, is still unbroken up to now, but has two major drawbacks: long key and low transmission rate. Our variant is based on QC-LDPC codes and is ..." Cited by 8 (2 self) Add to MetaCart Abstract. We improve our proposal of a new variant of the McEliece cryptosystem based on QC-LDPC codes. The original McEliece cryptosystem, based on Goppa codes, is still unbroken up to now, but has two major drawbacks: long key and low transmission rate. Our variant is based on QC-LDPC codes and is able to overcome such drawbacks, while avoiding the known attacks. Recently, however, a new attack has been discovered that can recover the private key with limited complexity. We show that such attack can be avoided by changing the form of some constituent matrices, without altering the remaining system parameters. We also propose another variant that exhibits an overall increased security level. We analyze the complexity of the encryption and decryption stages by adopting efficient algorithms for processing large circulant matrices. The Toom-Cook algorithm and the short Winograd convolution are considered, that give a significant speed-up in the cryptosystem operations. - the IBM SP2. Mathematics of Computation 68 , 1999 "... Abstract. A C implementation of Niederreiter’s algorithm for factoring polynomials over F2 is described. The most time-consuming part of this algorithm, which consists of setting up and solving a certain system of linear equations, is performed in parallel. Once a basis for the solution space is fou ..." Cited by 7 (1 self) Add to MetaCart Abstract. A C implementation of Niederreiter’s algorithm for factoring polynomials over F2 is described. The most time-consuming part of this algorithm, which consists of setting up and solving a certain system of linear equations, is performed in parallel. Once a basis for the solution space is found, all irreducible factors of the polynomial can be extracted by suitable gcdcomputations. For this purpose, asymptotically fast polynomial arithmetic algorithms are implemented. These include Karatsuba & Ofman multiplication, Cantor multiplication and Newton inversion. In addition, a new efficient version of the half-gcd algorithm is presented. 
Sequential run times for the polynomial arithmetic and parallel run times for the factorization are given. A new “world record ” for polynomial factorization over the binary field is set by showing that a pseudo-randomly selected polynomial of degree 300000 can be factored in about 10 hours on 256 nodes of the IBM SP2 at the Cornell Theory Center. 1. "... Abstract. In this paper, we discuss an implementation of various algorithms for multiplying polynomials in GF(2)[x]: variants of the window methods, Karatsuba’s, Toom-Cook’s, Schönhage’s and Cantor’s algorithms. For most of them, we propose improvements that lead to practical speedups. ..." Cited by 5 (2 self) Add to MetaCart Abstract. In this paper, we discuss an implementation of various algorithms for multiplying polynomials in GF(2)[x]: variants of the window methods, Karatsuba’s, Toom-Cook’s, Schönhage’s and Cantor’s algorithms. For most of them, we propose improvements that lead to practical speedups. - PROC. ISSAC 96 , 1996 "... We describe algorithms for polynomial multiplication and polynomial factorization over the binary field F2 and their implementation. They allow polynomials of degree up to 100,000 to be factored in about one day of CPU time. ..." Cited by 3 (0 self) Add to MetaCart We describe algorithms for polynomial multiplication and polynomial factorization over the binary field F2 and their implementation. They allow polynomials of degree up to 100,000 to be factored in about one day of CPU time. - the Binary Field”, Math. Comp "... Abstract. The most time-consuming part of the Niederreiter algorithm for factoring univariate polynomials over finite fields is the computation of elements of the nullspace of a certain matrix. This paper describes the so-called “black-box ” Niederreiter algorithm, in which these elements are found ..." Cited by 2 (0 self) Add to MetaCart Abstract. The most time-consuming part of the Niederreiter algorithm for factoring univariate polynomials over finite fields is the computation of elements of the nullspace of a certain matrix. This paper describes the so-called “black-box ” Niederreiter algorithm, in which these elements are found by using a method developed by Wiedemann. The main advantages over an approach based on Gaussian elimination are that the matrix does not have to be stored in memory and that the computational complexity of this approach is lower. The black-box Niederreiter algorithm for factoring polynomials over the binary field was implemented in the C programming language, and benchmarks for factoring high-degree polynomials over this field are presented. These benchmarks include timings for both a sequential implementation and a parallel implementation running on a small cluster of workstations. In addition, the Wan algorithm, which was recently introduced, is described, and connections between (implementation aspects of) Wan’s and Niederreiter’s algorithm are given. 1.
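(Several of the papers above implement Karatsuba-style multiplication in GF(2)[x]. As a minimal illustration of the idea — ours, not from any cited implementation — polynomials over GF(2) are encoded as integer bit masks and addition is XOR, so the three-multiplication recursion needs no sign bookkeeping:)

```python
def gf2_karatsuba(a, b):
    """Multiply polynomials over GF(2), encoded as integer bit masks."""
    n = max(a.bit_length(), b.bit_length())
    if n <= 8:                              # base case: carryless schoolbook
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        return r
    m = n // 2
    a0, a1 = a & ((1 << m) - 1), a >> m     # split into low/high halves
    b0, b1 = b & ((1 << m) - 1), b >> m
    lo  = gf2_karatsuba(a0, b0)
    hi  = gf2_karatsuba(a1, b1)
    mid = gf2_karatsuba(a0 ^ a1, b0 ^ b1) ^ lo ^ hi   # three multiplies, not four
    return lo ^ (mid << m) ^ (hi << (2 * m))

assert gf2_karatsuba(0b111, 0b11) == 0b1001  # (x^2+x+1)(x+1) = x^3+1 over GF(2)
```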
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=334453","timestamp":"2014-04-21T07:43:56Z","content_type":null,"content_length":"36740","record_id":"<urn:uuid:c22bb696-208c-45a4-881f-354705c03da7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Signed von Mises Stress vs Principal Stress Range for Fracture/Fatigue

flash3780 (Mechanical) 7 Dec 11 13:14
I'm working with one of our customers to determine the maximum allowable flaw size in our hardware during inspection to meet life requirements. FEA models (and hand calculations) produce complex stress states which must be boiled down to a stress range for fracture/fatigue calculations. I've always used a signed von Mises stress to compute the stress range to be used in fracture/fatigue calculations (i.e. the von Mises stress given the sign of the principal stress with the largest amplitude). However, our customer uses the principal stress range between the principal stresses of the largest amplitude at each load state to determine the stress range. I've heard of both approaches before, but I wonder which turns out to be more accurate in metals. Specifically, I'm working with titanium, but I'd imagine that the failure mode for most ductile materials is similar. I find that fracture calculations in real geometries are often a bit of a crapshoot because you're applying a 3d stress state to a 2d crack growth model. Still, I'd be interested to know which method yields the most accurate results. Any takers? (If it matters, the particular case that I'm looking at is constant-amplitude, proportional loading: I don't imagine that it would make much of a difference when choosing to use a signed von Mises range or a principal stress range, though.)
Christopher K. Hubley, Mechanical Engineer, Sunpower Incorporated, Athens, Ohio

salmon2 (Materials) 7 Dec 11 14:20
A little bit unclear about your customer's approach. Does the stress range equal "sigma_1 - sigma_3"? Both sigma_1 and sigma_3 are from the same one load condition. If yes, then it is the Tresca (maximum shear) yield model. Tresca is more conservative or strict than von Mises, generally.

corus (Mechanical) 7 Dec 11 17:30
Principal stress range is always used. Von Mises stress is for a different failure criterion. Look up FE-Safe on Google for examples and tutorials.

flash3780 (Mechanical) 7 Dec 11 18:10
I apologize for being unclear above; the two criteria are as follows:
Signed von Mises (for each stress state): SVM = IF(ABS(S1)>ABS(S3), SIGN(S1), SIGN(S3)) * VM
Principal stress range (for each stress state): SPP = IF(ABS(S1)>ABS(S3), S1, S3)
where
VM = von Mises stress
S1 = Minimum Principal Stress
S3 = Maximum Principal Stress
SVM = Signed von Mises Stress
SPP = Largest magnitude principal stress
From another thread on Eng-Tips (circa 2005), I found this:
Quote (feajob):
for -1<alpha<0: Signed Tresca is very conservative; Signed von Mises is conservative
for alpha=0 (uniaxial): both are O.K.
for 0<alpha<1: Signed Tresca is O.K.; Signed von Mises is non-conservative
for alpha=1 (equibiaxial): both are O.K.
Ref. MSC.Fatigue
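(A direct transcription of those two spreadsheet-style definitions into Python — a sketch; the principal stresses and the von Mises value are assumed computed already, as in the post:)

```python
def signed_von_mises(vm, s1, s3):
    """SVM: von Mises magnitude, signed by the dominant principal stress."""
    dominant = s1 if abs(s1) > abs(s3) else s3
    return vm if dominant >= 0 else -vm

def abs_max_principal(s1, s3):
    """SPP: the principal stress of largest magnitude."""
    return s1 if abs(s1) > abs(s3) else s3

# stress range between two load states, per either criterion:
# range_svm = signed_von_mises(vm_a, s1_a, s3_a) - signed_von_mises(vm_b, s1_b, s3_b)
# range_spp = abs_max_principal(s1_a, s3_a) - abs_max_principal(s1_b, s3_b)
```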
From a similar thread: feajob's statement agrees with the information provided by MSC.Fatigue. It looks as though the appropriate method is dependent on the stress ratio. I'll have to take that into account. According to MSC.Fatigue, the signed von Mises is okay to use in all situations except when "0 < (SAMP/SMEAN) < 1", when a signed Tresca stress should be used. However, they do suggest that the absolute maximum principal strain (or stress in the linear regime) may be more accurate than the signed von Mises stress for "-1 < (SAMP/SMEAN) < 0". They also note that in cases of pure shear, a critical plane method should be used, where the stresses/strains are determined based on a stress cube rotation to determine stresses in the direction which would cause a crack to open/close.
Christopher K. Hubley, Mechanical Engineer, Sunpower Incorporated, Athens, Ohio

flash3780 (Mechanical) 7 Dec 11 19:06
Ack, egads! Pardon my mistake: alpha is NOT the same as the A-ratio. The biaxiality ratio is defined as "alpha = SP1_surf/SP2_surf", where SP1 and SP2 are the principal stresses in the plane of the surface. So it's necessary to rotate your stress state such that it lies along the surface, and then compute the planar principal stresses in the area of interest. I think that makes sense...
Christopher K. Hubley, Mechanical Engineer, Sunpower Incorporated, Athens, Ohio

corus (Mechanical) 8 Dec 11 4:06
Your definition of range is incorrect as it applies to different load cases. The only time your method is correct is if you're looking at the stress range from a zero stress state. Looking at the MSC Fatigue section, it states:
Stage I is a period of nucleation and crystallographically orientated growth following immediately after initiation and is confined to shear planes. In this phase, both the shear stresses and strains and the normal stresses and strains are the moduli which control the rate of crack extension. Stage II growth is growth which occurs on planes which are orientated perpendicular to the maximum principal stress range. In this phase, the magnitude of the maximum principal stresses and strains dominate the crack growth process.
The way I read this is that you have Stage I, crack initiation, and Stage II, crack growth, which is dependent upon the principal stress range. As you're looking for the critical flaw size and/or fatigue life, then you're looking at crack growth, and as such the principal stress range.

rb1957 (Aerospace) 8 Dec 11 6:52
i'd use max principal. how significant is the thru-thickness term?

flash3780 (Mechanical) 8 Dec 11 16:54
Tara, can you elaborate on where I'm going wrong on the stress range? The particular case that I'm analyzing is a beam with a bolt tying it down at one end (like a diving board). There's a boss at the bolt connection to prevent fretting, and I'm looking at the stress in the fillet in that boss. If you could follow all of that, I've attached a comparison of the signed von Mises and the range in the max amplitude principal stresses for the two stress states (as you travel along the 0.5mm fillet). All that said, I suppose that I was looking for a general approach, rather than a solution to this specific problem; in this case, I'm merely using the most conservative of the two, which happens to be the principal stress range. I'm curious, however, which is the most accurate when compared with reality. Basing the choice on the biaxiality ratio seems like a reasonable approach, though maybe a bit tedious if you don't have fatigue software to do all of the stress rotations along the surfaces for you (s'pose I could write something, but... meh).
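(If one did "write something", the feajob table quoted earlier reduces to a simple screening lookup — a sketch only, not official MSC.Fatigue guidance; alpha is the surface biaxiality ratio SP1_surf/SP2_surf:)

```python
def feajob_guidance(alpha):
    """Feajob's table (quoted above) as a lookup, for screening only."""
    if -1 < alpha < 0:
        return "signed Tresca very conservative; signed von Mises conservative"
    if alpha == 0:
        return "uniaxial: both signed Tresca and signed von Mises are OK"
    if 0 < alpha < 1:
        return "signed Tresca OK; signed von Mises non-conservative"
    if alpha == 1:
        return "equibiaxial: both are OK"
    return "outside the tabulated range -- consider a critical-plane method"
```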
though maybe a bit tedious if you don't have fatigue software to do all of the stress rotations along the surfaces for you (s'pose I could write something, but... meh). rb1957, the through-thickness term is essentially zero at the surface in this particular case (clearly you can't have normal stress pointing out of a surface; subsurface cracks on the other hand give me headaches). Thanks for the help. I still think crack growth is a bit of a voodoo science in all but very simple cases (pressure vessels and pipes, for example), but we always do our best to be as accurate as possible. It seems like both signed VM and the principal stress range are good unless you're working with principal surface stresses which are either both in tension or both in compression (0<alpha<1), in which case MSC Fatigue suggests using a signed Tresca stress to define the state. Pure shear seems to be another tricky state which requires selecting a critical plane, and equal equibiaxial stresses (alpha=1) apparently give nonconservative results when you use the absolute principal stress range. Christopher K. Hubley Mechanical Engineer Sunpower Incorporated Athens, Ohio

rb1957 (Aerospace) 8 Dec 11 17:27 if both principals have the same sign then the combined stresses (like vM) are less than the principals. if thru thickness is negligible then you've got a 2D stress state. for both reasons, i'd go with max principal.

corus (Mechanical) 9 Dec 11 3:58 It is slightly more complicated than simply taking the principal stress range, in that if the stress is compressive then it shouldn't contribute to crack propagation. This doesn't apply at welds due to residual stresses and 'partly so' in plain material. If I recall, fatigue design code standards advise using 60% of the compressive stress to allow for any residual stresses in the plain material from manufacture, unless the stress cycle is wholly compressive, when you'd ignore them. In your case you appear to have only one load case where the principal stresses are wholly compressive, and it presumably cycles from zero stress to the stresses quoted in your graph. How does a crack in plain material propagate in those circumstances? When the principal stress direction changes then take the difference of the stress components (sxx, sxy etc) and calculate the principal stress range from those stress differences. All this is in the design standard BS7608, which states that you use principal stresses. MSC Fatigue also says that, as I said earlier.

flash3780 (Mechanical) 10 Dec 11 23:39 rb1957: Good point. I think the table on the MSC Fatigue page states that both the absolute max principal and the signed von Mises are anti-conservative when the signs of the principal stresses are the same (0<alpha<1): they're suggesting to use the Tresca stress in that case. For grins I plotted out signed von Mises vs absolute max principal for various biaxiality ratios (attached). Clearly, signed von Mises is more conservative for -1<alpha<0. That said, MSC Fatigue suggests that the max principal is more accurate in that range. So, using the absolute max principal range seems to be fine unless you're in pure shear or have principal stresses of the same sign, at which point a max shear criterion is most accurate. corus: I see what you're saying about compressive stresses not contributing to crack growth. I suppose that we only generally consider mode 1 crack growth in analyses. That said, they certainly do have an effect on fatigue lives (according to S-N curves, etc.).
Perhaps lower fatigue lives for reversing stresses are due to higher stresses driving mode 2 and mode 3 crack propagation. All that said, there's a guy at Lockheed who's plugging the stress values that I give him into NASGRO, and I want to make sure that I'm giving him values that make sense. I'm life-ing my parts with S-N curves, as well. From the sound of it, S-N curves and crack growth models are perhaps interested in two different stress ranges. Is that correct? Since crack growth models are interested in crack orientation and a particular stress state, I suppose that things get complicated. In my mind, the two-dimensional models that I'm familiar with are hard to relate to complex geometries. Still, I've not worked with predicting crack growth much, so perhaps I have a lot to learn. Thanks for the insight, it really is helpful. Christopher K. Hubley Mechanical Engineer Sunpower Incorporated Athens, Ohio
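For readers who want to experiment with the two measures discussed in this thread, here is a minimal sketch in Python; the NumPy-based helpers are illustrative assumptions of my own, not code from MSC.Fatigue or FE-Safe:

    import numpy as np

    def principal_stresses(sigma):
        # Principal stresses of a 3x3 Cauchy stress tensor, sorted so S1 >= S2 >= S3.
        return np.sort(np.linalg.eigvalsh(sigma))[::-1]

    def von_mises(sigma):
        s1, s2, s3 = principal_stresses(sigma)
        return np.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

    def signed_von_mises(sigma):
        # von Mises magnitude, signed by the largest-magnitude principal stress.
        s1, _, s3 = principal_stresses(sigma)
        sign = np.sign(s1) if abs(s1) > abs(s3) else np.sign(s3)
        return sign * von_mises(sigma)

    def abs_max_principal(sigma):
        # Largest-magnitude principal stress (keeps its sign).
        s1, _, s3 = principal_stresses(sigma)
        return s1 if abs(s1) > abs(s3) else s3

    # Stress range between two load states, per each criterion (MPa, made-up numbers):
    state_a = np.diag([200.0, 50.0, 0.0])
    state_b = np.diag([-80.0, 10.0, 0.0])
    print(signed_von_mises(state_a) - signed_von_mises(state_b))
    print(abs_max_principal(state_a) - abs_max_principal(state_b))

The two range definitions can disagree substantially for multiaxial states, which is exactly the situation the thread is debating.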
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=312034","timestamp":"2014-04-21T04:56:15Z","content_type":null,"content_length":"41941","record_id":"<urn:uuid:0e290477-f9f5-45c3-a085-390d5af572ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the first 3 terms of the Taylor expansion for the function \(f(x)=e^{2x}\)?
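The question went unanswered on the page; for completeness, the standard Maclaurin expansion gives (my addition, not from the thread):

    e^{2x} = \sum_{n=0}^{\infty} \frac{(2x)^n}{n!} = 1 + 2x + 2x^2 + \frac{4}{3}x^3 + \cdots

so the first three terms are 1 + 2x + 2x^2.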
{"url":"http://openstudy.com/updates/50856acce4b0848fa34f9db8","timestamp":"2014-04-19T13:00:23Z","content_type":null,"content_length":"74855","record_id":"<urn:uuid:0839e95a-965b-4ee5-bb95-58c10008813a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
a few permutation problems August 25th 2008, 08:12 PM #1 Aug 2008 a few permutation problems Hi all!! please help me in solving some perm problems listed below: 1.If in a group of 'n' distinct objects, the number of arrangements of 4 objects is 12 times the number of arrangements of 2 objects, then the number of objects is: a)10, b)8, c)6, d)none of these 2. A 5-digit number divisible by 3 is to be formed using the digits 0,1,2,3,4 and 5 without repetition. The number of ways to do this is: a) 216, b)600, c)240, d)3125 3.The number of different ways in which 8 persons can stand in a row so that between two particular persons A and B there are always 2 persons , is: a) 60*5!, b) 15*4!*5!, c) 4!*5!, d)none of these 4.The number of arrangements of letters of the word 'BHARAT' taking 3 at a time is: a)72, b)120, c)14, d)none of these I will appreciate your time and effort . Hi all!! please help me in solving some perm problems listed below: 1.If in a group of 'n' distinct objects, the number of arrangements of 4 objects is 12 times the number of arrangements of 2 objects, then the number of objects is: a)10, b)8, c)6, d)none of these Mr F says: Solve ${\color{red}^n P _4 = 12 (^n P _2)}$ for n (or just substitute and test each option a, b and c) ..... 2. A 5-digit number divisible by 3 is to be formed using the digits 0,1,2,3,4 and 5 without repetition. The number of ways to do this is: a) 216, b)600, c)240, d)3125 Mr F says: What restrictions are placed on a five digit number to make it divisible by 3 .....? 3.The number of different ways in which 8 persons can stand in a row so that between two particular persons A and B there are always 2 persons , is: a) 60*5!, b) 15*4!*5!, c) 4!*5!, d)none of these Mr F says: Consider A X X B as a single unit where X represents two random people. How many ways can you make the unit A X X B .....? How many different arrangements of five objects are there? 4.The number of arrangements of letters of the word 'BHARAT' taking 3 at a time is: a)72, b)120, c)14, d)none of these Mr F says: What's the formula for the number of arangements of r objects chosen from n when some of the objects are repeated? I will appreciate your time and effort . August 27th 2008, 08:11 AM #2
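Following Mr F's hints, a quick brute-force check of problems 1 and 2 (Python; math.perm requires Python 3.8+):

    from math import perm          # perm(n, k) = n! / (n - k)!
    from itertools import permutations

    # Problem 1: find n with nP4 = 12 * nP2; algebraically (n-2)(n-3) = 12, so n = 6.
    print([n for n in range(4, 20) if perm(n, 4) == 12 * perm(n, 2)])  # [6]

    # Problem 2: 5-digit numbers using digits 0..5 without repetition, divisible by 3.
    count = sum(1 for p in permutations('012345', 5)
                if p[0] != '0' and int(''.join(p)) % 3 == 0)
    print(count)  # 216, answer (a)

The answer to problem 2 can also be reasoned directly: the digits 0..5 sum to 15, so the omitted digit must be 0 or 3, giving 5! + (5! - 4!) = 216 numbers.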
{"url":"http://mathhelpforum.com/statistics/46754-few-permutation-problems.html","timestamp":"2014-04-18T12:51:12Z","content_type":null,"content_length":"35537","record_id":"<urn:uuid:1e08ff20-f222-49dc-ab34-801511106c80>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
On the productivity of recursive definitions. EWD749
Abstract. Most of the standard pleasant properties of term rewriting systems are undecidable; to wit: local confluence, confluence, normalization, termination, and completeness. Mere undecidability is insufficient to rule out a number of possibly useful properties: For instance, if the set of normalizing term rewriting systems were recursively enumerable, there would be a program yielding "yes" in finite time if applied to any normalizing term rewriting system. The contribution of this paper is to show (the uniform version of) each member of the list of properties above (as well as the property of being a productive specification of a stream) complete for the class Pi^0_2. Thus, there is neither a program that can enumerate the set of rewriting systems enjoying any one of the properties, nor is there a program enumerating the set of systems that do not. For normalization and termination we show both the ordinary version and the ground versions (where rules may contain variables, but only
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10427146","timestamp":"2014-04-20T02:56:58Z","content_type":null,"content_length":"12337","record_id":"<urn:uuid:4b5a9cef-23ff-49c6-b553-89d66a202d98>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Predicting the Consequences of a bending stress 250N/mm2
February 8th 2013, 02:53 AM Predicting the Consequences of a bending stress 250N/mm2 A steel beam has an elastic limit at 200N/mm^2 and an ultimate bending stress at 300N/mm^2. Predict the structural consequences of a bending stress of 250N/mm^2. Can somebody show me how to work this out please?
February 8th 2013, 05:19 AM Re: Predicting the Consequences of a bending stress 250N/mm2 This is not a mathematics problem except in the very rudimentary sense of recognizing that "250" lies between 200 and 300. Now what do "elastic limit" and "ultimate bending stress" mean?
February 8th 2013, 08:02 AM Re: Predicting the Consequences of a bending stress 250N/mm2 Stress occurs in a beam as a result of loading it (local deltaF/deltaA). Elastic limit: the maximum stress you can subject the beam to and still have it return to its original shape. Ultimate bending stress: the stress at which rupture occurs. After reaching the elastic limit, plastic deformation starts to take place. After 250, the beam will be permanently deformed.
February 8th 2013, 08:32 AM Re: Predicting the Consequences of a bending stress 250N/mm2 Thank you! That makes the answer to ashicus' question pretty obvious, doesn't it? It's remarkable how much knowing the definitions helps!
February 8th 2013, 10:31 AM Re: Predicting the Consequences of a bending stress 250N/mm2 Actually, the elastic limit and ultimate bending stress are self-explanatory and sufficient information to answer the question. Elastic: returns to original shape after deformation. Ultimate: max (breaks). So if you go beyond the elastic limit but don't break it, you will have permanent deformation. It is a mathematics question in the sense of drawing logical conclusions from given numerically quantified information.
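A trivial sketch of the classification logic, with the thresholds taken from the problem (Python, purely illustrative):

    def bending_response(stress, elastic_limit=200.0, ultimate=300.0):
        # Classify the beam's response to a bending stress (N/mm^2).
        if stress <= elastic_limit:
            return "elastic: returns to original shape"
        if stress < ultimate:
            return "plastic: permanent deformation, no rupture"
        return "rupture"

    print(bending_response(250.0))  # plastic: permanent deformation, no rupture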
{"url":"http://mathhelpforum.com/math-topics/212761-predicting-consequences-bending-stress-250n-mm2-print.html","timestamp":"2014-04-17T08:31:19Z","content_type":null,"content_length":"6938","record_id":"<urn:uuid:245527bf-3036-490f-bc7a-2d9c775c2290>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Axiomatizing Higher Order Set Theory Dmytro Taranovsky dmytro at mit.edu Mon Mar 19 12:54:00 EDT 2012 A key limitation of set theory is that for some properties of sets, there is no set of all sets that satisfy the property. To address this limitation, we consider higher order set theory, but immediately hit a roadblock: If V contains all sets, then we cannot form structures above V. The solution is to build higher-order set theory inside V. That is, we can use a cardinal kappa to represent Ord (the class of all ordinals), elements of V_kappa represent sets, subsets of V_kappa -- classes, elements of P(P(V_kappa)) -- collections of classes, and so on. However, not all cardinals are suitable for this purpose. Definition: A cardinal kappa is reflective if it is correct about higher order set theory with parameters in V_kappa. Note: See below for an alternative equivalent definition that uses reflection properties instead of higher order set theory. This definition cannot be formalized in the language of set theory. Instead, we 1. Extend the language of set theory with a predicate R: R(kappa) <==> kappa is reflective. 2. Axiomatize the resulting extension. 3. Argue that the extension is well-defined, or at least has a solid conceptual basis. Let us start axiomatizing. A1. ZFC A2. Axiom schema of replacement for formulas involving R. A3. R(kappa) ==> kappa is an ordinal Now, while we cannot just formulate correctness for higher order set theory as an axiom, the key observation is that if both kappa and lambda are correct, then they agree with each other. A4. (schema, phi has two free variables and does not use R) R(kappa) and R(lambda) ==> forall s in V_min(kappa,lambda) ( phi(s, kappa) <==> phi(s, lambda) ). Finally, to use reflective cardinals for higher order set theory, we need: A5. There is a proper class of reflective cardinals Theorem: A1-A5 is equiconsistent with ZFC + Ord is subtle. The axioms for reflective cardinals naturally correspond to the large cardinal property of full indescribability: A6. Schema (phi has two free variables and does not use R): If kappa is reflective and A is a set, then phi(kappa, A intersect kappa) ==> there is lambda < kappa with phi(lambda, A intersect lambda). A6 implies that reflective cardinals are strongly unfoldable (==> totally indescribable ==> weakly compact ==> Mahlo ==> inaccessible). Theorem: A1-A5 implies that A6 holds in HOD. A1-A6 is Pi^V_2 conservative over A1-A5. In our presentation so far, there is still incompleteness about how similar elements of R have to be to each other. While one option would be to keep R open-ended and progressively reach higher expressive power through stronger indiscernibility requirements on elements of R, we propose to make R definite by requiring R(kappa) <==> (R union {kappa} satisfies A4). This can be formalized into a single statement, which however is slightly technical: A4a. forall a R(a) ==> Ord(a) (that is, a is an ordinal); forall a,b,c (Ord(a) and R(b) and R(c) and 0<a<b<c ==> (R(a) <==> forall 'phi' forall s in V_a ( phi(a, s) holds in V_b iff phi(b, s) holds in V_c ))), where 'phi' ranges over (codes for) formulas in set theory (without R) with two free variables. Theorem: ZFC + A4a + A5 + "forall s (R intersect s exists)" is finitely axiomatizable and implies A4. A4a slightly increases the consistency strength, which, however, remains below subtle cardinals. A consequence of A4a is that R is definable from every proper class S that is a subclass of R.
Analogously to A4a, we can convert A6 into a single statement (which inherently makes it slightly stronger) by using a reflective cardinal in place of V. It remains to show that R is well-defined, or at least intuitively sound. While V is poorly understood, the constructible universe L is a well-understood model of set theory. Theorem (ZFC + zero sharp): There is a unique R such that (L, in, R) satisfies A4a and R holds for a proper class of cardinals (that is, cardinals in V). (L, in, R) also satisfies A1-A6. Moreover, we get the same theorem for other canonical inner models, which suggests that there is a unique natural way to choose R in V to satisfy the axioms, which we intuitively describe as follows. Key to infinitary set theory is the concept of a reflection property. Examples of reflection properties abound -- "kappa is a cardinal", "kappa is inaccessible", "kappa is a Sigma-2 elementary substructure of V", and so on, and they appear to form a directed system. Convergence Hypothesis: If ordinals a and b have sufficiently strong reflection properties, then they satisfy the same set of statements, even with parameters in V_min(a,b). Definition (assuming convergence hypothesis): kappa is a reflective cardinal, denoted by R(kappa), iff (V, kappa, in) has the same theory with parameters in V_kappa as (V, lambda, in) for every cardinal lambda > kappa with sufficiently strong reflection properties. Although the notion of a reflection property is vague, the convergence hypothesis (combined with the axiomatization) allows us to escape the vagueness, and make the notion of R unambiguous. While our axioms are incomplete, they are not significantly more incomplete than the axioms of set theory. Just like ZFC, the theory can be extended with large cardinal axioms. There are natural ways to incorporate large cardinal notions at the full expressive level of R, and this sometimes leads to stronger large cardinal notions. The results -- and much more -- are in my paper: (also available on arXiv: http://arxiv.org/abs/1203.2270) As always, I am looking for feedback, whether or not you agree with me. Dmytro Taranovsky More information about the FOM mailing list
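For readers who prefer standard notation, the central agreement schema A4 can be transcribed from the ASCII above as follows (my rendering, not part of the original post):

    R(\kappa) \wedge R(\lambda) \;\Longrightarrow\; \forall s \in V_{\min(\kappa,\lambda)} \,\bigl(\varphi(s,\kappa) \leftrightarrow \varphi(s,\lambda)\bigr),
    \qquad \text{for each } R\text{-free formula } \varphi \text{ with two free variables.}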
{"url":"http://www.cs.nyu.edu/pipermail/fom/2012-March/016333.html","timestamp":"2014-04-18T16:10:11Z","content_type":null,"content_length":"8628","record_id":"<urn:uuid:0bd27351-3233-455d-92a1-5f29900e92bb>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Luke Skywalker's air car is moving with a velocity of 28 m/s NW. There is a strong wind blowing 9 m/s SW. What is Luke's velocity relative to the ground?

Given: velocity of the air car, v_car = 28 m/s NW, and velocity of the wind, v_wind = 9 m/s SW. (The NW and SW mean 45° between N and W, and between S and W, since no angle is specified.) Unknown: velocity of the car relative to the ground, v_ground = ? m/s. Physical Principles and/or ideas: vector addition of two vectors at right angles to each other, Pythagorean Theorem.

Solution: The velocity Luke reads in his vehicle is measured relative to the medium he is traveling through, the air. But in this case, the air is moving also, so Luke's velocity relative to the ground is the vector sum of these two velocities. A vector sum means taking into account both the magnitudes and the directions of the vectors. We can see what we need to do in the figure on the right. The dashed line, which is the hypotenuse of the right triangle, is what we need to find; but we also need to find the angle θ since our answer is a vector, which means it must have a magnitude and a direction. We find the magnitude of the resultant in this case by applying the Pythagorean Theorem, since we want the size of the hypotenuse. We get: v_ground = sqrt((28 m/s)² + (9 m/s)²) = sqrt(865) m/s = 29.41 m/s.

Now we know the magnitude of Luke's velocity, but we do not know what his direction is. To find the direction we need to determine the angle θ. We can get that by using the cosine function (or we could use the sine, or tangent function). We find: cos θ = 28/29.41 = 0.952, so θ = 17.8°; sin θ = 9/29.41 = 0.306, so θ = 17.8°; tan θ = 9/28 = 0.321, so θ = 17.8°. So, we see that all three procedures do indeed give us the same answer. We now know the angle the resultant velocity makes with the 28 m/s NW vector. To find the actual direction we can either subtract the 17.8° from 45° to get 27.2°, which is North of West, or we can add 17.8° to 45° to get 62.8°, which is West of North. These two angles, 27.2° N of W and 62.8° W of N, are complements of each other so they do represent the same direction. So Luke's velocity relative to the ground is 29.4 m/s at 27.2° N of W. Another way to write this would be 29.4 m/s at 152.8°; this is if we take 0° as the East (or x) axis and read counterclockwise from there.

All problems involving two vectors that are perpendicular to each other, no matter what quantities the vectors represent (displacement, velocity, acceleration, force, momentum, field or others), can be worked this way. What difference would it make in terms of the procedures used to solve this problem if the: velocity had been directed SE? wind had been directed W? object was an airplane? two velocities were at a 70° angle relative to each other? Why would the following NOT be a legitimate way to solve this problem?
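A compact numeric check of the same computation (Python/NumPy; the compass-to-angle convention is my own, not part of the original example):

    import numpy as np

    def velocity(speed, bearing_deg):
        # Velocity components with 0 deg = East, measured counterclockwise.
        rad = np.radians(bearing_deg)
        return speed * np.array([np.cos(rad), np.sin(rad)])

    v = velocity(28, 135) + velocity(9, 225)   # NW = 135 deg, SW = 225 deg
    speed = np.linalg.norm(v)
    angle = np.degrees(np.arctan2(v[1], v[0]))
    print(round(speed, 2), round(angle, 1))    # 29.41 m/s at 152.8 deg (= 27.2 deg N of W)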
{"url":"http://users.ipfw.edu/maloney/ex2.html","timestamp":"2014-04-20T00:38:03Z","content_type":null,"content_length":"41516","record_id":"<urn:uuid:150fdbe8-d4b3-432e-ac72-868c935456be>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Pacman result for Q7 too good to be true? I played with the Pacman programming assignment given to the Stanford class (NOT to our online class) and found a result for the trickySearch Q7 problem with only 379 search nodes expanded. In the statement of the question, it is said that it is hard to get fewer than 7000 nodes expanded. Thus, the result I get seems almost too good to be true. As far as I can tell, by putting an explicit check to verify that the heuristic cost never decreases by more than 1 at each step in the expansion, my heuristic is consistent. I am also pretty sure that it is admissible. Is there anything else I can do to verify that I use a valid heuristic? I should note that the heuristic I use is particularly suited to maze layouts like the trickySearch one. When I use it for the seemingly simpler tinySearch, I get 568 nodes expanded. asked 16 Oct '11, 02:12

Have you tried running it on mediumSearch? (16 Oct '11, 06:03) Maxim
Yes, and I had to stop it since, after 20 minutes of run time, it had not come up with a solution and was requiring over 2 GB of memory.
can you link to the assignment pages and any other references pls ;)

Those assignments were a blast. I got 6,110 nodes expanded for trickySearch and 414 nodes expanded on tinySearch (using the AStarFoodSearchAgent). I'd say your heuristic function is overfit to that one particular maze ;) answered 31 Oct '11, 10:26

:D Actually, that might be partly true... It would work really well for similar mazes, those for which the manhattan distance is a huge under-measurement due to the presence of long and narrow dead-end corridors...

Here is a graph of the costs along the optimal path in trickySearch, with the estimated remaining cost (heuristic) stacked above the computed cost (which is of course linear). The heuristic is the aforementioned minimum spanning tree (MST) length + cost to the closest food. Things to note are: • the sum is monotonically non-decreasing (consistency) • it never goes above the optimal path cost of 60 (admissibility); this is implied by consistency and the fact that the heuristic is 0 in the goal state • increments of more than one unit (at steps 3 and 9) correspond to the heuristic value increasing despite the fact that we are one step closer to the goal. This is actually a good thing; it means that the heuristic comes closer to the actual remaining cost. Of course the graph doesn't prove anything. To prove consistency (or at least, to convince myself), this is my reasoning: • if in the last move pacman doesn't eat a food, the heuristic can decrease by at most one (the MST stays the same and the distance to the closest food decreases by at most one). • if pacman eats a food that is a leaf of the MST, the heuristic decreases by one (since the length of the removed edge of the MST is equal to the new distance to the closest food, which was 1 in the previous step) • if the eaten food (F) is not a leaf, let n be the number of neighbors of the eaten food in the MST. So you remove n edges from the MST and have to add n-1 new edges back (since the MST is a tree, there is no loop, so you now have to reconnect n components, needing n-1 new edges). For any edge you add, say between components A and B, with length d(A,B), you have the following property: d(A,B) >= d(A,F) and d(A,B) >= d(B,F).
That follows from the property of the Minimum Spanning Tree: if you had d(A,B) < d(A,F), then the AF edge wouldn't have been part of the MST in the first place, since a smaller spanning tree would have been possible by replacing the AF edge by that other edge between A and B. By putting all the inequalities together, we can deduce that the length of the new minimum spanning tree cannot decrease by more than the length of the smallest edge leading to F, which is compensated by the new distance to the closest food like in the previous case. Again, the heuristic cannot decrease by more than 1, which was the distance to the closest food in the previous step. answered 17 Jan '12, 19:13

@gbayliss Stanford course downloads are on the course schedule here but already the lecture 1 link is busted. @aroberge Can you offer any tips for installing cs221 code and Python - was it easy to get working? answered 16 Oct '11, 09:25

Python is extremely easy to install. If you have a Mac or a Linux box, it should be there already. If not, head to and follow the links. As for "installing" the code, you simply have to download it, unzip it somewhere. Simply typing "python a_program.py" will run it - no installation step needed. I should have clarified what I meant about installing the code: I was referring to the code samples for the assignment - not Python itself. Thanks for your help, sorry not replying earlier. Loaded pacman under Ubuntu on VirtualBox. Great stuff but going to find getting to grips with the Python code hard - untyped variables make understanding harder I think.

I have a similar issue with Q4, my implementation of a-star seems to find the correct solution much faster than stated in the problem. With the null heuristic, my a-star finds the best solution in 435 nodes when they say 549. answered 16 Oct '11, 20:11 Benoit Ambry

For me, it's 538 nodes expanded - much closer to theirs. I can see a difference in a few nodes, given a choice of "left" vs "right" branch for choosing successors, but I am puzzled as to how yours is so much lower. In the case I wrote about, it's all due to a "clever" choice of heuristics - where I get at least an order of magnitude difference better than what could be expected... No, I have a bug, that is clear. I think it is missing some of the branches and it finds the proper solution by luck. The null heuristic always returns 0. I found actually, I wrote f(n) = h(n) instead of f(n) = g(n) + h(n). now I have 538!

If you are like me, you may find it useful to include a check like:

    if h_cost - new_h_cost > successor[2]:
        print "consistency problem", h_cost, new_h_cost, successor[0]

in your A-star code. This helped me identify some errors in my heuristic functions. Yeah I have that already, I'm on Q6 and a bad heuristic can really take the a-star algo down. Huh, that's funny that you guys are both getting 538 nodes expanded on this one. I got 549 nodes expanded with the manhattanHeuristic, the same as they predicted. I have had a few seemingly ok heuristics for Q7 - one that finds the optimal solution with 10849 expanded nodes, one that finds an almost optimal solution in 7389 expanded nodes.
(The heuristics are taking the topology of the maze into account) However, on mediumSearch, the only solutions I could find before my computer melted (rather, ran out of memory) were clearly suboptimal solutions (pacmans oscillating left and right in the maze, eating a few dots on the right, turning around, nibbling on the left and back again) that were found in less than 20'000 expanded nodes. The heuristics were not admissible for those. Question: Has anyone found a heuristic that solves mediumSearch optimally and finishes in finite time / memory? I have an idea for a solution, but that would require a totally different solution state. I have already spent way too much time on this heuristic and would like to know if there's hope to actually solve it. answered 31 Oct '11, 05:19

So I have a heuristic that works in all mazes but mediumSearch and bigSearch. (trickySearch with optimal solution in 9615 expanded nodes). Would be interesting to hear from others how they are faring. I suspect that my heuristic would solve mediumSearch, but I haven't been able to run it to completion. I have run mediumSearch with my heuristic and it didn't complete before it ate up 6 GB of memory...

Mine does Q7 with 5893 nodes expanded... What are other peoples' results? (Mine completes in 1.2 seconds) I have Q4 as 547 nodes expanded with a "total cost of 210". @aroberge I'm interested in your check for consistency... What is h_cost and what is new_h_cost? answered 27 Nov '11, 01:34

Actually... I think I see. So, h_cost is the heuristic of the current position, and new_h_cost must be the heuristic for the successor... I am curious if my heuristic for Q7 is inconsistent or not, given how easy it was to get 5893 as an answer. More later. (27 Nov '11, 01:41) bnsh Yep! Sure enough! Consistency problems! Damnit! hehe... (27 Nov '11, 01:43) bnsh

For Q7, I got: 6,110 nodes expanded, total cost 60 (trickySearch); 414 nodes expanded, total cost 27 (tinySearch). For Q4, I got: 549 nodes expanded, total cost 210 (I've got the heuristic consistency check in my aStarSearch).

Q7: (tricky search) 60 cost, 215 expanded in 0.1 seconds. Consistency is guaranteed by heuristic construction. However, I've inserted code to check for consistency on each step. No warnings so far. tinySearch - 27 cost, 52 expanded. smallSearch - 34 cost, 57 expanded. Heuristic is O(n^3), where n is the number of food dots.

In Q7 my best score in trickySearch is: Path found with total cost of 60 in 1.6 seconds, Search nodes expanded: 1130. @red75prim I'm interested in what heuristic you used. Could we contact by email? (It should appear in my profile) @maacruz, I can't see your email. However, I can give a hint: consider the length of the minimum spanning tree for the remaining food dots. BTW, I implemented a slightly different priority queue to get 215 expanded nodes. If the program uses util.PriorityQueue, then it expands 255 nodes. @red75prim: Hey, I did the same thing! Length of the minimum spanning tree (using maze distances) + distance to closest food. 247 nodes expanded with the stock util.PriorityQueue in 0.1 s. I used Dijkstra and a cache to speed up the calculations. @ibatugow. Cache, yes, but I was too lazy to implement Dijkstra's algo... [Staring at source code] Well, yes, it's Dijkstra (BFS + tuned getSuccessors). Thanks! But I think it is an inconsistent heuristic (I can think of one such a tree where you remove a node and the resulting total length then is greater), or I'm not making the right approach (my implementation is inconsistent).
My email is maacruz in gmail. Looking back... With my first approach for Q7, trickySearch I got: 5380 nodes expanded, total path cost 60, in 1.8 seconds. A solution with 215 nodes expanded is pretty impressive. I am also curious about the approach taken here. Finally got it. Needed a small trick (which I had used in my previous heuristic) with pacman to solve the inconsistency. 255 nodes expanded in trickySearch in 1.4 seconds (completely unoptimized although I cache things). maacruz: could you email me? myusername at gmail dot com. Anyone have pointers on how to actually implement the MST based on a maze? answered 16 Jan '12, 18:38
Yes, the heuristic value can increase, but what the consistency condition says is that the (cost up to now + heuristic value) cannot decrease. In other words, the estimated remaining cost cannot decrease faster than the computed cost increases. It doesn't preclude the heuristic function from increasing, as long as it stays admissible (i.e., not larger than the remaining optimal cost).
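Since several posters ask how to actually implement the MST heuristic over a maze, here is a minimal sketch in Python (Prim's algorithm; `dist` stands in for a cached maze-distance function such as a BFS, and all the names are illustrative — this is not the official cs221 solution):

    def mst_length(points, dist):
        # Total edge length of a minimum spanning tree over `points`,
        # built with Prim's algorithm; dist(a, b) returns the maze distance.
        points = list(points)
        if len(points) < 2:
            return 0
        best = {p: dist(points[0], p) for p in points[1:]}  # cheapest edge into the tree
        total = 0
        while best:
            p = min(best, key=best.get)   # closest point not yet in the tree
            total += best.pop(p)
            for q in best:                # relax distances through the new point
                best[q] = min(best[q], dist(p, q))
        return total

    def food_heuristic(position, food_list, dist):
        # MST over the remaining food plus maze distance to the closest food,
        # matching the heuristic described in the thread.
        if not food_list:
            return 0
        return mst_length(food_list, dist) + min(dist(position, f) for f in food_list)

Caching the pairwise maze distances, as suggested in the thread, is what keeps this affordable: the heuristic is re-evaluated at every expanded node.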
{"url":"http://www.aiqus.com/questions/2206/pacman-result-for-q7-too-good-to-be-true","timestamp":"2014-04-17T15:40:05Z","content_type":null,"content_length":"82973","record_id":"<urn:uuid:2aee7c80-baa4-4d2a-acef-a6fd11835f52>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Subspaces of a Sobolev space

For $a \in \mathbb{R}^N\setminus\{0\}, N \ge 2$, and $\lambda \in \mathbb{R}$ let $$ X_{\lambda,a}=\{u(\cdot+\lambda a):\, u(x)=u(|x|) \in W^{1,2}(\mathbb{R}^N)\}. $$ Denote by $X_a$ the closure of the direct sum: $$ \bigoplus_{\lambda \in \mathbb{R}}X_{\lambda,a}. $$ Question: Is $X_a$ a proper subspace of $W^{1,2}(\mathbb{R}^N)$? sobolev-spaces fa.functional-analysis

Won't elements in this subspace have radial symmetry around the line in the direction of $a$? – Christopher A. Wong Jan 17 '12 at 21:24

1 Answer

Yes, all functions in $X_a$ are still symmetric with respect to the line generated by $a$. And all traces of these functions on the affine hyperplanes orthogonal to $a$ are radially symmetric in dimension $N-1$.

edit. Consider any pair $x$ and $x'$ in $\mathbb{R}^N$ with $|x|=|x'|=1$ and $a\cdot x=a\cdot x'$. Then $|x-\lambda a |=|x'-\lambda a|\, ,$ so $u(x)=u(x')$ for all $\lambda$ and all $u\in X_{\lambda,a}$. And this symmetry is preserved taking the linear span and the closure: $u=u\circ R$ holds for any $u\in X_a$ for any orthogonal $R$ that fixes $a\, .$

Thanks! Could you please give an idea of the proof? – Mercy Jan 17 '12 at 21:30
Imagine it geometrically. If you're summing these symmetric functions, they preserve their symmetry around the line parallel to $a$. – Christopher A. Wong Jan 17 '12 at 21:46
{"url":"http://mathoverflow.net/questions/85926/subspaces-of-a-sobolev-space?sort=oldest","timestamp":"2014-04-16T16:17:45Z","content_type":null,"content_length":"53097","record_id":"<urn:uuid:ffdcecc5-e400-45d9-83fe-2be1133f91e5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
derived smooth manifold A derived smooth manifold is the generalization of a smooth manifold in derived differential geometry: the derived geometry over the Lawvere theory for smooth algebras ($C^\infty$-rings): it is a structured (∞,1)-topos whose structure sheaf of functions is a smooth (∞,1)-algebra. Motivation: correction of limits According to the general logic of derived geometry, passing from smooth manifolds to derived smooth manifolds serves to correct certain limits that do exist in Diff but do not have the correct cohomological behaviour. This concerns notably pullbacks along smooth functions that are not transversal maps. Pontrjagin-Thom construction For $X$ a compact smooth manifold, by the Pontrjagin-Thom construction there is a smooth function $f : S^n \to M O$ from an $n$-sphere to the Thom spectrum such that if chosen transversal to the zero-section $B \hookrightarrow M O$ the pullback $f^* B$ $\array{ f^* B &\to& B \\ \downarrow && \downarrow \\ S^n &\stackrel{f}{\to}& M O }$ is a manifold cobordant to $X$, so that $[X] \simeq [f^* B]$ in the cobordism ring $\Omega$. By using derived smooth manifolds instead of ordinary smooth manifolds here, the condition that $f$ be transversal to $B$ could be dropped. String topology Floer homology The following definition characterizes the design criterion for derived smooth manifolds as being objects for which homotopy-intersections $A \cap_X B := A \times_X^h B$ preserve the cup product in the cobordism ring $[A] \smile [B] \simeq [A \cap_X B] \,.$ We say an (∞,1)-category $C$ supports derived cup products for cobordisms if • it is equipped with a full and faithful functor $i : Diff \hookrightarrow C$ embedding the category of smooth manifolds into it; • for any two submanifolds $A \to X \leftarrow B$ (transversal or not) the (∞,1)-pullback $A \cap_X B := i(A) \times_{i(X)} i(B)$ exists in $C$; • if $A \to X \leftarrow B$ happen to be transverse maps then $i(A \times_X B) \simeq i(A) \times_{i(X)} i(B) \,,$ with the image under $i$ of the ordinary pullback in Diff on the left; • $i$ preserves the terminal object; • (…nice interaction with underlying topological spaces…) • for each $X \in Diff$ there is a derived cobordism ring $\Omega(X)$ such that • for any submanifolds $A \to X \leftarrow B$ we have $[A] \smile [B] = [A \cap_X B]$ in $\Omega(X)$ A central statement about derived smooth manifolds will be: The $(\infty,1)$-category of derived smooth manifolds has derived cup products for cobordisms. This is (Spivak, theorem 1.8). The definition of derived smooth manifolds is indicated at the very end of A detailed construction and discussion in terms of the model category presentation by homotopy T-algebras is in Something roughly related is discussed in • Dominic Joyce, D-orbifolds, Kuranishi spaces, and polyfolds talk notes (Jan 2010) (pdf) There is also • Dennis Borisov, Justin Noel, Simplicial approach to derived differential manifolds (arXiv:1112.0033) Seminar notes on derived differential geometry in general and derived smooth manifolds in particular are in
{"url":"http://ncatlab.org/nlab/show/derived+smooth+manifold","timestamp":"2014-04-17T15:32:10Z","content_type":null,"content_length":"58253","record_id":"<urn:uuid:daa6fd1d-6eac-4ea3-83fc-7d05197a2cd1>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphical partitions Discussion Room, Newton Institute In 1962, S. L. Hakimi proved necessary and sufficient conditions for a given sequence of positive integers d_1, d_2, ..., d_n to be the degree sequence of a non-separable graph or that of a connected graph. Our goal in this talk is to utilize Hakimi's results to provide generating functions for the functions d_{ns}(2m) and d_c(2m), the number of degree sequences with degree sum 2m representable by non-separable graphs and connected graphs (respectively). From these generating functions, we prove nice formulas for d_{ns}(2m) and d_c(2m) which are simple linear combinations of the values of p(j), the number of integer partitions of j. The proofs are elementary and the talk will be accessible to a wide audience. This is joint work with Oystein Rodseth, University of Bergen, Norway.
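The formulas advertised above are linear combinations of the partition function p(j); for readers who want to experiment numerically, here is a standard dynamic-programming computation of p(j) (an illustration of the ingredient, not code from the talk):

    def partition_counts(n):
        # p[j] = number of integer partitions of j, for all j <= n.
        p = [1] + [0] * n
        for part in range(1, n + 1):        # allow parts of size `part`
            for j in range(part, n + 1):
                p[j] += p[j - part]
        return p

    print(partition_counts(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]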
{"url":"http://www.newton.ac.uk/programmes/CSM/seminars/2008022511003.html","timestamp":"2014-04-20T16:26:09Z","content_type":null,"content_length":"4511","record_id":"<urn:uuid:4b4df99a-fb84-4b46-b8fa-285aed952780>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
The strength of genetic interactions scales weakly with mutational effects

Genetic interactions pervade every aspect of biology, from evolutionary theory, where they determine the accessibility of evolutionary paths, to medicine, where they can contribute to complex genetic diseases. Until very recently, studies on epistatic interactions have been based on a handful of mutations, providing at best anecdotal evidence about the frequency and the typical strength of genetic interactions. In this study, we analyze a publicly available dataset that contains the growth rates of over five million double knockout mutants of the yeast Saccharomyces cerevisiae. We discuss a geometric definition of epistasis that reveals a simple and surprisingly weak scaling law for the characteristic strength of genetic interactions as a function of the effects of the mutations being combined. We then utilized this scaling to quantify the roughness of naturally occurring fitness landscapes. Finally, we show how the observed roughness differs from what is predicted by Fisher's geometric model of epistasis, and discuss the consequences for evolutionary dynamics. Although epistatic interactions between specific genes remain largely unpredictable, the statistical properties of an ensemble of interactions can display conspicuous regularities and be described by simple mathematical laws. By exploiting the amount of data produced by modern high-throughput techniques, it is now possible to thoroughly test the predictions of theoretical models of genetic interactions and to build informed computational models of evolution on realistic fitness landscapes.

Epistasis; Evolution; Fitness landscapes; Genetic interactions; Yeast

Genetic interactions [1] have shaped the evolutionary history of life on earth. They have been found to limit the accessibility of evolutionary paths [2], to confine populations to suboptimal evolutionary states and, on larger time scales, to control the rate of speciation [3]. Epistatic interactions can also be relevant to the development of complex human diseases such as diabetes [4]. Complex traits and diseases are determined by a multiplicity of genomic loci [5], whose independent effects and interactions [6] are often necessary to understand the phenotype of interest. Despite the broad implications of epistatic interactions, a quantitative characterization of their typical strength is still lacking. In this study, we consider growth rate in yeast as an example of a complex trait modulated by genetic interactions. Previous studies [7-10] on the relation between the growth effects of a mutation and its epistatic interactions have often been based on a handful of mutations, and only in recent years has anecdotal evidence started being replaced by robust statements based on large data sets. Perhaps the most impressive of these datasets is the one made publicly available with the publication of the article entitled 'The genetic landscape of a cell' by Costanzo et al. [11]. The genome of the budding yeast Saccharomyces cerevisiae includes approximately 6,000 genes, about 1,000 of which are essential. Viable mutants can be constructed by knocking out any of the approximately 5,000 non-essential genes, by reducing the expression of the essential genes, or by partially compromising the functionality of the gene products.
The dataset (see Additional file 1, Figure S1) has been compiled with the growth rates of about 5.4 million double knockout mutants, a sizable fraction of all possible double knockout mutants in yeast.

Results and discussion

An unbiased definition of genetic interactions

A basic approach to study genetic interactions is to consider two mutations with known effects on a quantitative trait, and to measure their combined effect in the double mutant [12]. Given [11,13] the growth rates of a wild type S. cerevisiae strain (g[00] = 1) and of two single knockout mutants (g[01] and g[10]), the growth rate of the double knockout mutant (g[11]) is adequately predicted by a multiplicative null model: g[11] = g[01] × g[10]. Equivalently, defining 'log growth' as the logarithm of the relative growth rate, G = log2(g/g[00]), the log growth of the double knockout mutant is predicted by an additive null model (Figure 1a): G[11] = G[01] + G[10].

Figure 1. The log growth rates of two mutations combine additively. (a) The average effect of a double knockout (G[11]) as a function of the effects of the single knockouts (G[01] and G[10]) is G[11] = G[01] + G[10]. Experimental mean +/- standard deviation (blue line and blue shaded area) and prediction of the additive null model (red line). (b) Given two mutations, there are four possible mutants with their corresponding log growth rates (black dots). If three of the four log growth rates are known, the fourth one can be predicted by a linear extrapolation (red plane), and epistasis can be defined as the linear deviation from such prediction (red arrow). The magnitude of the deviation is the same regardless of which three of four mutants are chosen.

Epistatic interactions are identified as deviations from the null model, but several non-equivalent alternatives exist for quantifying these deviations [14]. The most common definition of epistasis considers the difference between the measured and the predicted growth rates for the double knockout mutant [11]: e = g[11] − g[01] g[10]. Importantly, this definition of e subtly constrains the possible values of epistasis. In fact, when combining very deleterious mutations, e cannot be large and negative even when the double knockout mutant is a synthetic lethal mutant: e = 0 − g[01] g[10] = −g[01] g[10], which is close to zero whenever the product g[01] g[10] is small. In order to avoid a priori constraints on the intensity of epistasis, genetic interactions can be defined as the ratio between the measured and predicted relative growth rates, leading to: E = log2[ g[11] / (g[01] g[10]) ]. As an example, E = +1 indicates a double mutant whose growth rate is twice as large as would be expected based upon the multiplicative null model, whereas E = -1 indicates a double mutant whose growth rate is half as large as predicted. This definition of epistasis as fold deviation in the multiplicative model for growth rates is equivalent to a natural definition of epistasis as linear deviation in the additive model for log growth rates (Figure 1b): E = G[11] − (G[01] + G[10]). A second bias of the common definition of epistasis is that e depends on the choice of which genotype is labeled as 'wild type' or '00', a choice which is always arbitrary, but more obviously so when studying engineered organisms or populations evolving in alternating environments [15]. By contrast, E depends only on which pair of genes is considered, being a geometric measure for the 'curvature' of the fitness landscape (Figure 1b).
The definition of E has found some favor in the theoretical literature [7,16], but it is not routinely used to analyze experimental data apart from rare exceptions [8,17]. Its main drawback is that synthetic lethals have a log growth rate of -∞, and require a separate although simpler analysis in which lethal interactions can simply be counted. The definition of E proves instead to be extremely valuable when quantifying the strength of non-lethal genetic interactions.

Epistatic interactions scale weakly with mutational effects

With the appropriate definition of epistasis, a simple relation between the growth rate effects of two mutations and the expected strength of their interaction emerges. Let us consider two groups of mutations; in the first group, all mutations have log growth effect G[01], and in the second group, all mutations have log growth effect G[10]. We can then build all possible double mutants obtained by combining one mutation from each group. In the absence of epistasis, all the double mutants have a log growth rate G[11] = G[01] + G[10], and the distribution of genetic interactions is sharply peaked at E = 0. When epistasis is present, the distribution of genetic interactions has, in general, non-zero mean and standard deviation. Experimentally, however, the mean of genetic interactions is close to zero (this is why the null model remains approximately valid) (Figure 1a; Figure 2d). Even when the mean interaction is vanishing, the difference between the experimental dataset and the ideal case without interactions can be quantified by the finite value of the experimental standard deviation σ(G[01], G[10]), which provides a numerical estimate for the characteristic strength of epistatic interactions.

Figure 2. The strength of epistatic interactions scales with the log growth effects of the interacting knockouts. (a) Each dot represents the variance of several thousand epistatic interactions binned according to the log growth effects of the two single knockouts, G[01] and G[10]. The blue surface is the phenomenological fit described in the text. (b) Slices of the plot in (a) for G[01] = constant. The dots are the same as in (a), and the solid lines represent the corresponding slice of the one-parameter fitting surface. (c) Diagonal slice of the plot in (a) with finer bins (G[01] = G[10] within 20%, G = mean(G[01], G[10])). The blue shaded area is the 25 to 75% confidence interval computed by bootstrap; the red line (var(G, G) = 0.079 G) is computed from the phenomenological model, and the dashed gray line, for which var(G, G) is proportional to G^2, represents the lower bound to the slope predicted by the Fisher's geometric model. (c, inset) The epistatic interactions between beneficial mutations are vanishingly small, independently of the effect of the combined mutations. (d) Probability density functions p(E') for the strength of genetic interactions between two deleterious knockouts with similar log growth effects. Different colors correspond to knockouts with different effects: the growth rate effects of the single knockouts being combined are close to -38% (red), -22% (yellow), -12% (green), -6% (blue), and -3% (purple). Each curve has been rescaled so that all distributions have a standard deviation = 1. The left tail of the distributions displays a fat tail, describing the occurrence of strong negative genetic interactions (for comparison, the dashed-dotted black line is a normal distribution).
In order to produce reliable numerical results, thousands of growth rates are necessary to characterize the probability distribution of epistasis. We analyzed the Costanzo et al. dataset by binning pairs of mutations according to the log growth effects of their single knockouts G[01] and G[10], using the method described above to outline the probability distribution of epistasis. We chose bin sizes that grow exponentially with G in order to ensure an approximately constant number of data points in each bin (see Materials and Methods; see Additional file 1, Figure S2). Most bins contain from thousands to tens of thousands of data points. For each bin, we computed var(G[01], G[10]) = ⟨E²⟩ − ⟨E⟩², that is, the variance of the random variable E relative to the bin labeled by growth rates G[01] and G[10]. In the rest of the paper we will refer to such variance as var(G[01], G[10]), emphasizing that the variance in the strength of epistatic interactions is, eventually, a function of G[01] and G[10] (Figure 2a). The square root of the variance, σ(G[01], G[10]), then represents the expected strength of epistasis as a function of the independently varying effects of the two single knockouts.

A natural expectation for the dependence of epistasis on the effect of the combined mutations comes from rescaling Figure 1a; if all the log growth effects of single and double knockouts increase by a factor of two, then the strength of epistasis should also increase by a factor of two. Unexpectedly, however, when combining deleterious mutations, the strength of epistatic interactions does grow with the effects of the mutations that are combined, but the dependence is much weaker; when the effect of both single knockouts is doubled, the strength of epistasis increases only by a factor of √2 (Figure 2). In more detail, we observed that if the effect of the first knockout (G[01]) is held constant, the dependence of the variance of epistasis on the effect of the second knockout (G[10]) is well approximated by a Michaelis-Menten law (Figure 2b): var(G[01], G[10]) = v |G[10]| / (K + |G[10]|). When the effects of both knockouts are free to vary, the requirement that the variance is a symmetric function of its two variables, G[01] and G[10], implies that K = |G[01]| and that v is proportional to G[01]. A one-parameter function which fits the seen variance over the whole range of deleterious fitness effects (Figure 2a) is then: var(G[01], G[10]) = 2a |G[01]| |G[10]| / (|G[01]| + |G[10]|), with the single fitted parameter a ≈ 0.079. This functional form can also be obtained from a simple model based on diffusion in fitness space (see Additional file 1, Supplementary text 1). An even simpler phenomenological fit, although slightly less accurate, is shown in Additional file 1, Figure S3. Importantly, these functions capture two major features of the data; first, epistasis vanishes when G[01] or G[10] = 0; second, when the effects of the two knockouts are similar (G[01] = G[10] = G along the diagonal of the surface in Figure 2a), the variance of epistasis is approximately proportional to G (Figure 2c): var(G, G) ≈ 0.079 |G|, so that the typical strength of epistasis σ(G, G) scales as √|G|. The scaling described above is seen only for deleterious knockouts. When combining the beneficial knockouts in the dataset instead, the strength of epistasis is close to zero (Figure 2c, inset). This might be because the slightly beneficial knockouts are not adaptive mutations, but simply remove genes that are not needed in the conditions chosen for the experiment, so that their interactions are likely to be negligible.
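As a companion to the formulas above, here is a minimal sketch in Python of the two quantities discussed; the log2 convention and the fitted constant a ≈ 0.079 come from the text, while the function names are mine:

    import numpy as np

    def epistasis(g01, g10, g11):
        # E = log2 fold-deviation of the double mutant from the multiplicative model.
        return np.log2(g11 / (g01 * g10))

    def fitted_variance(G01, G10, a=0.079):
        # One-parameter fit for var(E); reduces to a*|G| on the diagonal G01 = G10.
        G01, G10 = abs(G01), abs(G10)
        if G01 == 0 or G10 == 0:
            return 0.0
        return 2 * a * G01 * G10 / (G01 + G10)

    # Expected spread of E for two knockouts with 5% growth defects:
    G = np.log2(0.95)
    print(np.sqrt(fitted_variance(G, G)))  # ~0.076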
However, in apparent contrast to this observation, recent studies [8,18] on adaptive mutations in Escherichia coli suggest that genetic interactions between adaptive mutations are mostly negative. In fact, during adaptation, the prevalence of negative interactions is likely to be caused by biased sampling, because the mutations that fix in the population are likely to be the ones that solve environmental or biological challenges for an organism. Diminishing returns arise because the appearance of multiple 'solutions' to the same challenge is not necessarily preferable over the presence of a single solution. Rather than focusing on mutations that fix during a bout of adaptation, the Costanzo et al. dataset includes a large fraction of all possible pairs of genes in the yeast genome. Because for most pairs the two genes are involved in unrelated biological processes, interactions are often vanishingly small. We did observe, however, that the distribution of epistatic interactions is asymmetric, with a heavy tail of deleterious interactions (Figure 2d). Experimental uncertainty generates spurious epistatic interactions When inferring genetic interactions from experimental data, it is important to take into account that each measured growth rate is affected by some uncertainty, and that measurement errors in the growth rates could erroneously be interpreted as genetic interactions. Importantly, for each single and double mutant, the Costanzo et al. dataset provides the mean growth rate together with its estimated experimental uncertainty (the growth rate of each mutant being measured at least four times). In order to quantify the effect of the experimental uncertainty on the inferred epistatic interactions, we constructed a number of mock datasets, assuming that the null model without epistatic interactions described biology exactly. In these datasets, each single knockout had the same growth rate as in the original dataset, and each double knockout had a growth rate equal to the product of the relative growth rates of the corresponding single knockouts. We then randomized the mock datasets by shifting each growth rate by a random amount sampled from a Student's t-distribution, with width depending on the corresponding experimental uncertainty reported in the original dataset (see Additional file 1, Supplementary text 3). As expected, analysis of these 'noisy' datasets revealed some epistasis, clearly caused by our addition of experimental noise rather than by any biological mechanism. We found that for pairs involving beneficial or neutral mutations, the variance computed in the mock datasets was comparable to or even greater than the variance observed in the original dataset (Figure 3a, black curves; Figure 3b, blue regions). This fact provides an important internal control, suggesting that the experimental noise has not been underestimated. In spite of this, for pairs of knockouts with substantially deleterious effects, experimental noise accounted for less than half of the total observed variance, with the rest representing genuine biological interactions (Figure 3a, red curves; Figure 3b, red regions). Figure 3. Experimental noise does not account for all of the observed variance of epistasis. (a) Comparison of experimentally measured variance (solid lines; shaded areas: 25 to 75% confidence intervals) and variance caused by experimental noise (dashed lines). If one of the two mutations is neutral, noise accounts for all of the observed variance (black). 
When deleterious mutations are combined, noise accounted for less than half of the observed variance (red, G[01] ≈ -0.7). (b) Ratio between total observed variance and noise-generated variance as a function of the log growth of the knockouts being combined. For deleterious knockouts, the ratio can be significantly greater than 1.

We then decomposed the variance observed in the original dataset into a contribution produced by experimental uncertainty and a contribution of biological origin; the strength of epistatic interactions was finally computed as the square root of the biological part of the variance. For deleterious knockouts, the relative difference between epistasis computed from the raw data and from the data after subtracting the experimental noise was less than 30%, emphasizing the significant but not overwhelming contribution of experimental noise to the observed variability. Figure 2(a-c) represents the 'biological' part of the observed epistasis; before subtracting the contribution of the experimental uncertainty, the plots are qualitatively similar, but quantitatively slightly different (see Additional file 1, Figure S4). Importantly, because variances are additive, the estimated contribution of the experimental uncertainty to epistasis is largely independent of the choice of the statistical distribution used to model experimental uncertainty. In two instances, however, the unknown details of the full distribution of experimental noise are important: when outlining the distribution of epistatic interactions (Figure 2d) and when describing the probability to observe sign epistasis (Figure 4b). In those two figures, we plotted the raw data, and did not attempt to deconvolve the contribution of experimental uncertainty.

Comparison between theory and experiment

The scaling of epistasis observed in the Costanzo et al. dataset (Figure 2) is in sharp contrast to the predictions of Fisher's geometric model [19], a popular model of epistasis in which genetic interactions emerge from geometry. As we saw, when the effects of the two knockouts are similar (G[01] = G[10] = G), the variance of epistasis is approximately proportional to G. By contrast, in the Fisher's model, the variance var(G, G) would grow faster than G^2 (Figure 2c; see Additional file 1, Supplementary text 2), a much stronger dependence than the linear dependence observed.

A concrete numerical example can highlight the importance of the weaker-than-expected scaling of epistasis described in this study. Let us consider two gene knockouts, each of which reduces the relative growth rate by 5%, from 1.0 to 0.95. According to the multiplicative null model, the growth rate of the double knockout will be approximately 0.95^2, or approximately 0.90. The questions now are: What kind of deviations could be expected around 0.90? Would a growth rate of 0.85 be surprising? What about a growth rate of 0.50?
Let us use the analytic fit discussed in the previous section, var(G, G) ≈ 0.079 G. With G = |log2 0.95| ≈ 0.074, this gives a typical interaction strength σ(G, G) = √(0.079 G) ≈ 0.08. A +/- one standard deviation interval for the growth rate of the double knockout is then approximately [0.86, 0.95]. Notice that it is not unlikely that epistasis will cancel the effect of the second mutation, so that the growth rate of the double knockout mutant is greater than 0.95, that is, greater than the growth rate of either of the single knockout mutants.

Let us now consider two gene knockouts with stronger effects, each of which reduces the growth rate from 1.0 to 0.60. Then G = |log2 0.60| ≈ 0.74, about 10 times as large as the log growth of the single mutants in the previous example. Fisher's model would predict a σ(G, G) at least 10 times larger than in the previous example (σ(G, G) ≥ 0.76), and an interval of likely growth rates for the double knockout mutants at least as large as [0.21, 0.61]. Notice how, once again, it is not unlikely that owing to genetic interactions, the growth rate of the double knockout mutant is greater than 0.60, the growth rate of either of the two single knockout mutants. The analytic model derived from the experimental data leads to a strikingly different conclusion: σ(G, G) = √(0.079 × 0.74) ≈ 0.24, and the +/- one standard deviation interval for the growth rate of the double knockout becomes approximately [0.30, 0.43]. In this case, a deviation from the null model that is greater than three standard deviations would be needed for the double knockout mutant to have a growth rate greater than that of the single knockout (0.60), making the event extremely unlikely.

Epistasis constrains the evolutionary dynamics

The previous section provided two examples of reciprocal sign epistasis, realized when two deleterious mutations produce a double mutant that is fitter than either of the two single mutants (Figure 4a). In those cases, a fitness valley limits the evolutionary accessibility of the fitter double mutant, and only on longer time scales may the simultaneous appearance of two mutations [20,21] drive a population to the new local fitness maximum. In this context, the scaling behavior of epistasis is of great importance, because it determines the number and the topology of the evolutionarily accessible paths [2,22,23], ultimately affecting the possible outcomes of the evolutionary process. In order to describe how epistasis shapes the naturally occurring fitness landscapes, let us consider S(G, G), the probability of observing sign epistasis when combining two mutations with similar growth rate effects, G. Here, S(G, G) depends on the typical interaction strength σ(G, G) relative to the magnitude of G. In particular, if σ(G, G) is proportional to G, then the probability of observing sign epistasis is independent of G. Fisher's model implies a super-linear dependence of σ(G, G) on G, thus predicting a greater probability of observing sign epistasis among mutations with strong effects. Instead, if the scaling of σ(G, G) is proportional to √G (Figure 2), then sign epistasis is more likely to occur among mutations with small effects (Figure 4b). When the relative growth rate effects of the single knockouts are small (<2 to 3%), experimental uncertainty prevents us from pinpointing which pairs of genes are epistatic. This does not mean, however, that mutations with small effects do not interact.
Assuming that the scaling of epistasis we measured directly for mutations with intermediate and large effects extends to mutations with small effects, a consequence of the observed scaling of epistasis is the roughening of the local fitness landscape in the proximity of an evolutionary optimum; when the fitness effects of available mutations become small [24], epistatic interactions become increasingly relevant [25,26], reducing the accessibility of evolutionary paths and further slowing down the rate of adaptation [27,28]. The evolutionary dynamics on correlated fitness landscapes [10,29] with the realistic correlations described here certainly deserves further experimental and theoretical investigation.

The scaling of genetic interactions may be generic

To date, our analysis has been limited to interactions between entire gene knockouts. Although mutations with extreme effects on gene regulation and horizontal gene transfer are biologically relevant mechanisms for the removal or acquisition of whole genes at once, organisms explore possible genetic variants largely through the accumulation of single point mutations. The Costanzo et al. dataset contains thousands of double mutants for which the first mutation is a gene knockout and the second mutation consists of one or more point mutations in a different gene, causing the gene product to misfold in a temperature-sensitive way. Although the distribution of growth rate effects for point mutations is different than for single gene knockouts (see Additional file 1, Figure S2), the statistics of genetic interactions are remarkably similar when combining two single knockouts and when combining a single knockout with a point mutation (Figure 5). A similar scaling is also seen for the epistatic interactions between single gene knockouts and decreased abundance by mRNA perturbation [30] (DAmP) perturbations of a second gene (see Additional file 1, Figure S5). The analysis of these hybrid double mutants suggests that the statistics of the interactions between any two genetic perturbations are determined only by their growth rate effects [31], and not by their biological origin in terms of point mutations or gene knockouts.

Figure 5. Point mutations have similar epistatic interactions to those of entire gene knockouts. (a) Comparison between the variance observed in double gene knockout mutants (rainbow dots, same as in Figure 2a) and the variance observed in mixed double mutants generated by combining a gene knockout with point mutations in a different gene (black dots). (b) The red curve is the diagonal slice of the plot in (a) (G_{01} = G_{10} within 20%, G = mean(G_{01}, G_{10})), and the red shaded area is the 25 to 75% confidence interval for the mixed double mutant variance. For comparison, the blue curves describe the variance for double gene knockouts as in Figure 2c. As in Figure 2c, the red line has equation var(G, G) = 0.079 G.

A comparison between different definitions of epistasis

Importantly, any quantitative result on epistasis is a consequence of how epistasis is defined. Of particular interest is how strong an epistatic interaction is deemed to be, based upon its ranking when compared with that of other pairs of mutations. Although the 'traditional' definition and the 'geometric' definition agree about the sets of positive and negative interactions, they assign different strengths and, more importantly, different rankings to the same pair of interacting mutations.
As an example, if the Costanzo et al. dataset is analyzed using the 'traditional' definition of genetic interactions, then the linear dependence of var(G, G) on G in Figure 2c is replaced by an oddly non-monotonic dependence, displaying weaker interactions for pairs of genes with either very small or very large fitness effects (Figure 6a). As mentioned previously, this decrease in the inferred strength of epistatic interactions for very deleterious mutations is a mathematical consequence of the traditional definition of epistasis, rather than a property of genetic interactions. The same bias would lead us to conclude that genes with strong effects on growth are almost non-interacting (Figure 6b, red line). However, because previous studies have determined that essential genes partake in more interactions than do non-essential genes [32], it is also reasonable to expect that non-lethal genes with large growth effects are involved in more interactions than genes with small growth effects. Indeed, according to the 'geometric' definition of epistasis, the fraction of genes with which a gene interacts steadily increases with the growth rate effect of the gene (Figure 6b, blue line). By contrast, the traditional definition of epistasis consistently assigns low rankings to interactions between genes with large growth rate defects, as confirmed by a further analysis comparing the two definitions of epistasis against interactions inferred from the Gene Ontology (GO) database [33] (see Additional file 1, Figure S6). According to the geometric definition of epistasis, genetic networks [34] are denser than expected not only among essential genes [32], but also among genes with large growth effects.

Figure 6. Comparison between the traditional and the geometric definitions of epistasis (e and E, respectively). (a) Figure equivalent to Figure 2c, using the traditional definition of epistasis. (b) The fraction of genes interacting with a specific gene as a function of the growth rate effect of that gene. Only the 10,000 most strongly interacting pairs according to the geometric definition (blue) or the traditional definition (red) are considered to be interactions.

Finally, it is important to emphasize that the traditional definition of epistasis remains slightly more successful at discovering the functional relations between genes, as cataloged in the GO database (see Additional file 1, Figure S6). Part of the reason for this could be that some of those functional characterizations were suggested by the traditional definition of epistasis in the first place. It is certainly true, however, that many of the top-ranking interactions according to the geometric definition of epistasis involve single and double mutants with small growth rates; for those mutants, experimental noise is relatively large, and this may cause a few weakly interacting pairs to be incorrectly ranked as strongly interacting. It is likely that the experimental protocols could be easily adjusted to reduce the relative uncertainty on the growth rate of especially slow-growing mutants to avoid this issue (for example, by allowing for a much longer time for growth or by measuring the growth rates of additional replicates).

Conclusions

We analyzed the growth rates of about five million double mutants in the dataset associated with the work by Costanzo et al. We characterized how the strength of genetic interactions depends on the growth effects of the mutations being combined, and found a weaker dependence than that predicted by current theoretical models.
Although the results were obtained mainly from entire gene knockouts, there is some evidence that the observed scaling might extend to the interactions between single point mutations. The scaling of epistasis might or might not be generic [35,36]; important drivers could be the harshness of the environment [37], details about the evolutionary history [38-40], sexual versus asexual reproduction [41] and, perhaps most importantly, metabolic [42-45] and genetic complexity [46,47]. In general, the experimentally observed scaling suggests a previously unexplored class of correlated fitness landscapes with tunable roughness, in which epistasis depends explicitly on the effects of the mutations being combined.

A clear limitation of our discussion is that only pair interactions were considered. Although high-throughput experiments will provide data on higher-order interactions, a solid understanding of pair interactions remains necessary before addressing n-mutation interactions. A genuine three-mutation interaction, for instance, should be defined as the unexplained deviation from what can be computed by combining the effects of all relevant mutations and their pair interactions [10,48], perhaps using linear fits within the additive null model for log growth rates.

The results we present here were based on a geometric definition of epistasis. We compared this definition with a more standard definition, highlighting the desirable mathematical properties of the geometric definition and the simple phenomenological relations it produces. In conclusion, although each epistatic interaction between specific genes depends on biological details and remains largely unpredictable from first principles, we have shown that the statistical properties of an ensemble of interactions can display conspicuous regularities, and can be described by simple mathematical laws.

Materials and methods

The Costanzo et al. dataset is publicly available [49]. The file sgadata_costanzo2009_rawdata_101120.txt.gz was downloaded on August 17, 2010 and analyzed with Mathematica (code available at the Gore laboratory website [50]). We restricted our analysis to double knockout mutants whose growth rates were positive numerical values and for which the growth rates of both single mutants were numerical values (see Additional file 1, Figure S1). Some genes appear in the dataset both as query and array genes; care was taken to avoid double counting.

The exponentially growing intervals used for the binning of the log growth rate effects were defined as [-2^n, -2^(n-1)] for an appropriate range of integer n's. Owing to the rarity of extremely deleterious mutations, bins for positive n's contained only a few data points, while bins with large negative n's were extremely small. In the figures we reported only bins for n = -7 to 0, containing log growth rate effects ranging from -2^0 = -1 to -2^(-8) = -0.0039 or, alternatively, relative growth rate effects ranging from 2^(-1) = 0.5 to 2^(-0.0039) = 0.997. Different choices for the binning sizes and positions did not significantly alter the results of the analysis.

In order to quantify the contribution of experimental uncertainty to epistasis, we generated nine randomized mock datasets. The mean level of noise-generated epistasis in these nine datasets is reported in Figure 3 (dashed lines), and we provide an extensive discussion of the choice of Student's t-distributions to generate the mock datasets from the original dataset (see Additional file 1, Supplementary text 3).
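The mock-dataset randomization described above can be sketched in a few lines of Python. This is an illustration, not the authors' Mathematica code: the multiplicative null model and the Student's t noise follow the text, but the degrees of freedom and the error scale are placeholder choices (the actual widths were taken from the per-mutant uncertainties reported in the dataset).

import numpy as np

rng = np.random.default_rng(0)

def noise_only_variance(w1, w2, sigma_err, df=3, n=100_000):
    # Null model: the double mutant grows at the product of the two
    # single-mutant relative growth rates (no epistasis by construction).
    null = w1 * w2
    # Randomize: shift each growth rate by Student's t noise whose scale
    # mimics the reported experimental uncertainty.
    noisy = np.clip(null + sigma_err * rng.standard_t(df, size=n), 1e-9, None)
    # Apparent epistasis is the deviation of the log growth from the null.
    eps = np.log2(noisy) - np.log2(null)
    return eps.var()

print(noise_only_variance(0.95, 0.95, 0.02))  # variance generated by noise alone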
The GO database go_201207-assocdb-tables.tar.gz was downloaded from the GO site [51] on July 19, 2012. The MySQL database was queried with Python and analyzed with Mathematica (code available upon request).

Authors' contributions

AV and JG designed research; AV performed research and analyzed data; AV and JG wrote the paper. Both authors have read and approved the final manuscript.

Acknowledgements

We are grateful to Mingjie Dai for collaboration during the early stages of the study. We thank Kirill Korolev, Pankaj Mehta, and the members of the Gore laboratory for providing comments and advice on the manuscript. This research was funded by an NIH Pathways to Independence Award, NSF CAREER Award, Pew Biomedical Scholars Program, and Alfred P. Sloan Foundation Fellowship.

References

1. Phillips PC: Epistasis - The essential role of gene interactions in the structure and evolution of genetic systems. Nat Rev Genet 2008, 9:855-867.
2. Weinreich DM, Delaney NF, DePristo MA, Hartl DL: Darwinian evolution can follow only very few mutational paths to fitter proteins. Science 2006, 312:111-114.
3. Dettman JR, Sirjusingh C, Kohn LM, Anderson JB: Incipient speciation by divergent adaptation and antagonistic epistasis in yeast. Nature 2007, 447:585-588.
4. Hoh J, Ott J: Mathematical multi-locus approaches to localizing complex human trait genes. Nat Rev Genet 2003, 4:701-709.
5. Mackay TFC, Stone EA, Ayroles JF: The genetics of quantitative traits: challenges and prospects. Nat Rev Genet 2009, 10:565-577.
6. Jansen RC: Studying complex biological systems using multifactorial perturbation. Nat Rev Genet 2003, 4:145-151.
7. Gros PA, Le Nagard H, Tenaillon O: The evolution of epistasis and its links with genetic robustness, complexity and drift in a phenotypic model of adaptation. Genetics 2009, 182:277-293.
8. Khan AI, Dinh DM, Schneider D, Lenski RE, Cooper TF: Negative epistasis between beneficial mutations in an evolving bacterial population. Science 2011, 332:1193-1196.
9. Wilke CO, Adami C: Interaction between directional epistasis and average mutational effects. Proc R Soc Lond B Biol Sci 2001, 268:1469-1474.
10. Beerenwinkel N, Pachter L, Sturmfels B, Elena SF, Lenski RE: Analysis of epistatic interactions and fitness landscapes using a new geometric approach. BMC Evol Biol 2007, 7:60-73.
11. Costanzo M, Baryshnikova A, Bellay J, Kim Y, Spear ED, Sevier CS, Ding H, Koh JLY, Toufighi K, Mostafavi S, Prinz J, St Onge RP, VanderSluis B, Makhnevych T, Vizeacoumar FJ, Alizadeh S, Bahr S, Brost RL, Chen Y, Cokol M, Deshpande R, Li Z, Lin Z-Y, Liang W, Marback M, Paw J, San Luis B-J, Shuteriqi E, Tong AHY, van Dyk N, et al.: The genetic landscape of a cell. Science 2010, 327:425-431.
12. Dixon SJ, Costanzo M, Baryshnikova A, Andrews B, Boone C: Systematic mapping of genetic interaction networks. Annu Rev Genet 2009, 43:601-625.
13. Baryshnikova A, Costanzo M, Kim Y, Ding H, Koh J, Toufighi K, Youn J-Y, Ou J, San Luis B-J, Bandyopadhyay S, Hibbs M, Hess D, Gingras A-C, Bader GD, Troyanskaya OG, Brown GW, Andrews B, Boone C, Myers CL: Quantitative analysis of fitness and genetic interactions in yeast on a genome scale. Nat Methods 2010, 7:1017-1024.
14. Mani R, St Onge RP, Hartman JL, Giaever G, Roth FP: Defining genetic interaction. Proc Natl Acad Sci USA 2008, 105:3461-3466.
15. Tan L, Gore J: Slowly switching between environments facilitates reverse evolution in small populations. Evolution 2012, 66:3144-3154.
16. Peters AD, Lively CM: Epistasis and the maintenance of sex. In Epistasis and the Evolutionary Process. Edited by Wolf JB, Brodie ED, Wade MJ. Oxford: Oxford University Press; 2000:99-112.
17. Martin G, Elena SF, Lenormand T: Distributions of epistasis in microbes fit predictions from a fitness landscape model. Nat Genet 2007, 39:555-560.
18. Chou HH, Chiu HC, Delaney NF, Segre D, Marx CJ: Diminishing returns epistasis among beneficial mutations decelerates adaptation. Science 2011, 332:1190-1192.
19. Weinreich DM, Watson RA, Chao L: Sign epistasis and genetic constraint on evolutionary trajectories. Evolution 2005, 59:1165-1174.
20. Weissman DB, Desai MM, Fisher DS, Feldman MW: The rate at which asexual populations cross fitness valleys. Theor Popul Biol 2009, 75:286-300.
21. Poelwijk FJ, Kiviet DJ, Weinreich DM, Tans SJ: Empirical fitness landscapes reveal accessible evolutionary paths. Nature 2007, 445:383-386.
22. Velenich A, Gore J: Synthetic approaches to understanding biological constraints. Curr Opin Chem Biol 2012, 16:323-328.
23. Eyre-Walker A, Keightley PD: The distribution of fitness effects of new mutations. Nat Rev Genet 2007, 8:610-618.
24. Tan L, Serene S, Chao HX, Gore J: Hidden randomness between fitness landscapes limits reverse evolution. Phys Rev Lett 2011, 106:198102.
25. Woods RJ, Barrick JE, Cooper TF, Shrestha U, Kauth MR, Lenski RE: Second-order selection for evolvability in a large Escherichia coli population. Science 2011, 331:1433-1436.
26. Orr HA: The population genetics of adaptation: the distribution of factors fixed during adaptive evolution. Evolution 1998, 52:935-949.
27. Orr HA: The genetic theory of adaptation: a brief history. Nat Rev Genet 2005, 6:119-127.
28. Kryazhimskiy S, Tkačik G, Plotkin JB: The dynamics of adaptation on correlated fitness landscapes. Proc Natl Acad Sci USA 2009, 106:18638-18643.
29. Breslow DK, Cameron DM, Collins SR, Schuldiner M, Stewart-Ornstein J, Newman HW, Braun S, Madhani HD, Krogan NJ, Weissman JS: A comprehensive strategy enabling high-resolution functional analysis of the yeast genome. Nat Methods 2008, 5:711-718.
30. Xu L, Barker B, Gu Z: Dynamic epistasis for different alleles of the same gene. Proc Natl Acad Sci USA 2012, 109:10420-10425.
31. Davierwala AP, Haynes J, Li Z, Brost RL, Robinson MD, Yu L, Mnaimneh S, Ding H, Zhu H, Chen Y, Cheng X, Brown GW, Boone C, Andrews BJ, Hughes TR: The synthetic genetic interaction spectrum of essential genes. Nat Genet 2005, 37:1147-1152.
32. Barabási AL, Oltvai ZN: Network biology: understanding the cell's functional organization. Nat Rev Genet 2004, 5:101-113.
33. The Gene Ontology Consortium: Gene ontology: tool for the unification of biology. Nat Genet 2000, 25:25-29.
34. Dixon SJ, Fedyshyn Y, Koh JLY, Keshava Prasad TS, Chahwan C, Chua G, Toufighi K, Baryshnikova A, Hayles J, Hoe K-L, Kim D-U, Park H-O, Myers CL, Pandey A, Durocher D, Andrews BJ, Boone C: Significant conservation of synthetic lethal genetic interaction networks between distantly related eukaryotes. Proc Natl Acad Sci USA 2008, 105:16653-16658.
35. Tischler J, Lehner B, Fraser AG: Evolutionary plasticity of genetic interaction networks. Nat Genet 2008, 40:390-391.
36. Harrison R, Papp B, Pál C, Oliver SG, Delneri D: Plasticity of genetic interactions in metabolic networks of yeast. Proc Natl Acad Sci USA 2007, 104:2307-2312.
37. Wagner A: Gene duplications, robustness and evolutionary innovations. BioEssays 2008, 30:367-373.
38. Wagner A: Distributed robustness versus redundancy as causes of mutational robustness. BioEssays 2005, 27:176-188.
39. Roguev A, Bandyopadhyay S, Zofall M, Zhang K, Fischer T, Collins SR, Qu H, Shales M, Park H-O, Hayles J, Hoe K-L, Kim D-U, Ideker T, Grewal SI, Weissman JS, Krogan NJ: Conservation and rewiring of functional modules revealed by an epistasis map in fission yeast. Science 2008, 322:405-410.
40. Azevedo RBR, Lohaus R, Srinivasan S, Dang KK, Burch CL: Sexual reproduction selects for robustness and negative epistasis in artificial gene networks. Nature 2006, 440:87-90.
41. Segrè D, DeLuna A, Church GM, Kishony R: Modular epistasis in yeast metabolism. Nat Genet 2005, 37:77-83.
42. He X, Qian W, Wang Z, Li Y, Zhang J: Prevalent positive epistasis in Escherichia coli and Saccharomyces cerevisiae metabolic networks. Nat Genet 2010, 42:272-276.
43. Szappanos B, Kovács K, Szamecz B, Honti F, Costanzo M, Baryshnikova A, Gelius-Dietrich G, Lercher MJ, Jelasity M, Myers CL, Andrews BJ, Boone C, Oliver SG, Pál C, Papp B: An integrated approach to characterize genetic interaction networks in yeast metabolism. Nat Genet 2011, 43:656-662.
44. Almaas E, Kovács B, Vicsek T, Oltvai ZN, Barabási AL: Global organization of metabolic fluxes in the bacterium Escherichia coli. Nature 2004, 427:839-843.
45. Sanjuán R, Elena SF: Epistasis correlates to genomic complexity. Proc Natl Acad Sci USA 2006, 103:14402-14405.
46. Sanjuán R, Nebot MR: A network model for the correlation between epistasis and genomic complexity. PLoS One 2008, 3:e2663.
47. Wood K, Nishida S, Sontag ED, Cluzel P: Mechanism-independent method for predicting response to multidrug combinations in bacteria. Proc Natl Acad Sci USA 2012, 109:12254-12259.
48. The Genetic Landscape of the Cell. [http://drygin.ccbr.utoronto.ca/~costanzo2009]
49. Gore Laboratory. [http://www.gorelab.org/software.html]
50. Gene Ontology. GO Database Downloads. [http://www.geneontology.org/GO.downloads.database.shtml]
{"url":"http://genomebiology.com/2013/14/7/R76","timestamp":"2014-04-18T13:56:46Z","content_type":null,"content_length":"172732","record_id":"<urn:uuid:46ba42a7-ddd5-459b-9176-c0edb3e05a89>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Optimization Quadratic Programming
Replies: 1    Last Post: Jan 3, 2014 7:32 AM

Re: Optimization Quadratic Programming
Posted: Jan 3, 2014 7:32 AM

On 1/2/2014 6:23 PM, Dahnial wrote:
> Hi,
> I have a function, let's say f = 2*x1*x2 + x1^2 - x1 - 2*x2 + ...
> (note that the function cannot be displayed by MATLAB because it is
> the result of an inverse matrix, and it is very, very long, so MATLAB
> cannot display it)
> I want to optimize f so it tends to zero (minimize) with some constraints.
> Since it is quadratic I can use quadprog in the MATLAB Optimization Toolbox:
> [x,fval] = quadprog(H,f,A,b,Aeq,beq,lb,ub)
> But here comes my problem: if I use this function, I need to
> specify the H matrix (Hessian of the function) and the f vector. Of course
> that will be a problem, since my function is long enough that even
> MATLAB cannot display it.
> Do you have a suggestion for how to do the optimization?
> thanks
> best regards
> Dahnial S

I guess the question is how do you have the function f defined in MATLAB? If you have it as a symbolic expression, you can use the 'hessian' function to extract the H matrix. If you have it some other way, let us know and maybe we can come up with an idea.

Alan Weiss
MATLAB mathematical toolbox documentation
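Outside MATLAB, the same idea (extract H and f symbolically, then hand them to a numerical minimizer) can be sketched in Python with SymPy and SciPy. The objective below is an illustrative positive-definite stand-in, not the poster's actual expression.

import numpy as np
import sympy as sp
from scipy.optimize import minimize

x1, x2 = sp.symbols('x1 x2')
expr = x1**2 + x1*x2 + x2**2 - x1 - 2*x2     # placeholder quadratic objective

v = [x1, x2]
H = np.array(sp.hessian(expr, v), dtype=float)   # constant matrix for a quadratic
f = np.array([sp.diff(expr, s).subs({x1: 0, x2: 0}) for s in v], dtype=float)

# quadprog's convention: minimize 0.5*x'Hx + f'x (exact here, since expr is quadratic)
res = minimize(lambda z: 0.5 * z @ H @ z + f @ z, np.zeros(2))
print(H, f, res.x)                               # minimum at approximately (0, 1)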
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2614225&messageID=9355502","timestamp":"2014-04-20T06:10:26Z","content_type":null,"content_length":"18233","record_id":"<urn:uuid:158b28df-5859-4511-bdc2-09d92228588c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: todays math problem

• Subject: Re: [amsat-bb] todays math problem
• From: "Frederick M. Spinner" <fspinner@xxxxxxxxxxx>
• Date: Tue, 04 Feb 2003 17:02:24 +0000

Doubling the diameter of a dish gives it four times the area. Gain is directly proportional to area. If the efficiencies are the same (key point here), four times the area is 6 dB extra gain.

If an optimal dish is, say, 20 dB and then you lose 50% of the signal to inefficiency, that gives you -3 dB, or 17 dB. If the twice-as-big optimal dish is 26 dB, then you lose 50% (-3 dB) and you're at 23 dB. 23 dB - 17 dB = 6 dB. The inefficiency isn't hurting the bigger dish more.

Oh okay. Now I see Jerry is talking about the aperture area doubling, not the diameter... I've just figured that out from the message. So yes: area doubling on a dish = 3 dB more gain regardless of efficiency. Diameter doubling == area times four == 6 dB more gain.

In this case: 1959 sq in / 791 sq in = 2.47x as large in area. 10 log(2.47) == 3.93 dB more gain.

At S-band, however, these are both well below established "minimum" dish sizes (10 wavelengths or more are considered to be "well behaved dishes"). So contrary to what one might think, the bigger dish is likely to be more efficient than the first, and if a dB were picked up that way it would be possible to see an S-unit difference. Couple that with less noise pickup and it is probably worth upgrading to that dish. But I doubt if it would be 7.5 dB better...

Fred W0FMS

>From: K5OE@aol.com
>To: w7lrd@juno.com, amsat-bb@AMSAT.Org
>Subject: Re: [amsat-bb] todays math problem
>Date: Tue, 04 Feb 2003 03:29:41 -0500
>One of the biggest factors in dish gain is the efficiency of illumination,
>but ignoring that complexity (since you didn't give us any specifics) and
>assuming a 50% efficient dish/feed combination in both cases, a convenient
>short-cut equation for "typical" dishes at 2.4 GHz is:
>Gain (dBi) = 10 * Log10(0.256 * Area in sq in.)
>Thus: 791 sq in = 23 dBi and 1959 sq in = 27 dBi
>...for a delta of only 4 dB. Roughly, each doubling of the dish aperture
>is worth 3 dB when all other variables are unchanged.
>Jerry, K5OE/G
>In a message dated 2/3/2003 9:49:50 PM Eastern Standard Time,
>w7lrd@juno.com writes:
> > Hello
> > Bob has two dishes, dish 1 is 1959 square inches. Dish 2 is 791 square
> > inches. So dish 1 is about 2.5 times larger than dish 2. Bob thinks
> > dish 1 should be 7.5 db better than dish 2, that's 2.5 X 3 db,
> > or maybe a little over one S unit. Is Bob right?
> >
> > 73...Bob...W7LRD
> > Seattle

Sent via amsat-bb@amsat.org. Opinions expressed are those of the author.
Not an AMSAT member? Join now to support the amateur satellite program!
To unsubscribe, send "unsubscribe amsat-bb" to Majordomo@amsat.org
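A quick check of the arithmetic in the thread (areas taken from the original question; the 0.256 constant is K5OE's 2.4 GHz rule of thumb for a 50%-efficient dish):

import math

a1, a2 = 1959.0, 791.0              # apertures in square inches
print(10 * math.log10(a1 / a2))     # ~3.94 dB from the area ratio alone
print(10 * math.log10(0.256 * a2))  # ~23 dBi for the small dish
print(10 * math.log10(0.256 * a1))  # ~27 dBi for the large dish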
{"url":"http://www.amsat.org/amsat/archive/amsat-bb/200302/msg00129.html","timestamp":"2014-04-16T13:46:37Z","content_type":null,"content_length":"6058","record_id":"<urn:uuid:f330e562-f25a-4117-8b27-6d6456b846cc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/updates/51447555e4b0e8e8f6bc2e6a","timestamp":"2014-04-18T03:32:36Z","content_type":null,"content_length":"58368","record_id":"<urn:uuid:b44fb72f-e8b6-4f8c-8f1e-cbb7fbac44fb>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Find the indicated probability. A die with 6 sides is - JustAnswer
{"url":"http://www.justanswer.com/calculus-and-above/68m7n-1-find-indicated-probability-a-die-sides.html","timestamp":"2014-04-25T09:44:05Z","content_type":null,"content_length":"79198","record_id":"<urn:uuid:699f3184-a604-4aac-84dd-5d7705794561>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigenvalues of Hamiltonian

1. The problem statement, all variables and given/known data

Consider two Ising spins coupled together: −βH = h(σ1 + σ2) + Kσ1σ2, where σ1 and σ2 commute and each independently takes on the values ±1. What are the eigenvalues of this Hamiltonian? What are the degeneracies of the states?

3. The attempt at a solution

Four possible combinations for (σ1, σ2): (1,1), (1,-1), (-1,1) and (-1,-1). Therefore H = (-h/β)(σ1 + σ2) - (K/β)σ1σ2 can be written as a 2×2 matrix, and the eigenvalues λ are obtained from det(H - λI) = 0. It follows:

[(-2h/β) - (K/β) - λ][(-2h/β) - (K/β) - λ] - (2K/β) = 0

and so:

λ[1,2] = -((2h-K)/β) ± sqrt[((2h-K)^2/β^2) - ((2h-K)^2/β^2 - (2K/β))]

λ[1,2] = -((2h-K)/β) ± sqrt[2K/β]

Are these really the eigenvalues of the Hamiltonian? I don't gain any physical insight from this solution and therefore I doubt my calculation. I don't know how to go on and calculate the degeneracies of the states.

Thanks in advance!
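Not part of the original thread: since σ1 and σ2 commute, the Hamiltonian is already diagonal in the basis of the four product states, so the energies (and degeneracies) can be brute-forced directly. A sketch, treating the spin values as ±1 as in the problem statement:

import sympy as sp

h, K, beta = sp.symbols('h K beta', positive=True)

energies = {}
for s1 in (1, -1):
    for s2 in (1, -1):
        # From -beta*H = h*(s1 + s2) + K*s1*s2, each product state has energy:
        E = sp.simplify(-(h*(s1 + s2) + K*s1*s2) / beta)
        energies.setdefault(E, []).append((s1, s2))

for E, states in energies.items():
    print(E, 'degeneracy:', len(states), states)
# Prints -(2h+K)/beta once, K/beta twice, and (2h-K)/beta once.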
{"url":"http://www.physicsforums.com/showthread.php?t=256735","timestamp":"2014-04-19T15:07:45Z","content_type":null,"content_length":"28632","record_id":"<urn:uuid:ef4564a3-89e3-4f98-96d8-e3b775533446>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

March 13th 2010, 07:19 PM    #1

Hi all,
Does anyone know how I can assign names to the equations as they are solved in Maxima? I am solving some simultaneous equations and I get three equations as output, as I am not solving for actual values of x, y, and z; I'm looking for algebraic solutions. What I want to do is label each of the output equations as i, j and k. Any help?

March 13th 2010, 09:50 PM    #2

See attachment.

March 13th 2010, 10:20 PM    #3

Yes, I know how to do that. But this time I am not looking for actual numbers as output; I'm solving algebraic equations to get an equation for the solutions to the simultaneous equations. The result of the solve formula is three equations. I want to know how to label these equations. I've tried

[a,b,c]: solve([f(x),g(y),h(z)],[x,y,z]);

but it doesn't work.

March 13th 2010, 10:58 PM    #4

That is verging on the rude but try something based on the following:

March 14th 2010, 03:28 AM    #5

Sorry about that. No offense intended. Below is a simplified version of the problem I am dealing with:

(%i1) X: a*b+c*d=e;
(%o1) c d + a b = e
(%i2) Y: g*b+h*d=f;
(%o2) d h + b g = f
(%i3) solve([X,Y],[b,d]);
(%o3) [[b = -(e h - c f)/(c g - a h), d = (e g - a f)/(c g - a h)]]

In my real problem I have more equations and they are a lot longer. What I want to do is name the equations that are produced as output (the equations for b and d above) so that they can be recalled later.

Thanks for your help

Last edited by kungal; March 14th 2010 at 04:34 AM.

March 14th 2010, 05:13 AM    #6

Is this what you are looking for:

You can use b and d instead of sol1 and sol2 if you want in the last two entries.

March 14th 2010, 05:54 AM    #7

That's got it. Thanks a lot.
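For comparison only (not from the thread, and not Maxima): the same labeling problem in Python with SymPy, where each solved expression can simply be bound to its own name for later reuse.

import sympy as sp

a, b, c, d, e, f, g, h = sp.symbols('a b c d e f g h')
sol = sp.solve([a*b + c*d - e, g*b + h*d - f], [b, d], dict=True)[0]
sol_b, sol_d = sol[b], sol[d]   # named, reusable expressions
print(sol_b)                    # matches the Maxima output up to sign conventions
print(sol_d)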
{"url":"http://mathhelpforum.com/math-software/133679-maxima.html","timestamp":"2014-04-17T09:43:15Z","content_type":null,"content_length":"50646","record_id":"<urn:uuid:49fc3317-4e06-48ed-87cc-abe2962bb002>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Why in the Smith chart does 360 degrees correspond to half the wavelength?

Don't expect the Smith chart to work like an Argand diagram. Consider a short circuit on an electrically short transmission line. This will have a reflection coefficient of -1, which will be plotted on the far left side of the Smith chart. If I lengthen the transmission line by 1/4 wavelength in front of the short circuit, it will have a reflection coefficient of +1 (i.e. it will look like an open circuit), which will be plotted on the far right hand side of the Smith chart. Extending the transmission line by 1/4 wavelength moved us halfway around the Smith chart.
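A quick numeric illustration of the same point. This is a sketch for a lossless line, using the standard transformation Gamma_in = Gamma_L * exp(-j*2*beta*L) with beta = 2*pi/lambda; since the phase advances by 2*beta*L, a length of lambda/2 gives a full 360-degree turn around the chart.

import cmath, math

def gamma_in(gamma_load, length_in_wavelengths):
    beta_l = 2 * math.pi * length_in_wavelengths
    return gamma_load * cmath.exp(-2j * beta_l)

print(gamma_in(-1, 0.25))   # short + quarter wave -> ~ +1 (looks like an open)
print(gamma_in(-1, 0.5))    # half wave -> back to -1 (full trip around the chart)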
{"url":"http://www.physicsforums.com/showthread.php?s=f5682e4c8d4304e6ddd718b23a89b041&p=4311528","timestamp":"2014-04-21T12:12:45Z","content_type":null,"content_length":"22729","record_id":"<urn:uuid:6e6bb4e2-a4e0-4e38-a912-239fa2607a14>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Errors in Statistics (And How to Avoid Them) Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/common-errors-statistics-how-avoid-them/bk/9781118294390","timestamp":"2014-04-16T10:32:35Z","content_type":null,"content_length":"30627","record_id":"<urn:uuid:9482eb62-894b-4fa4-9ac6-bab4e0a6f13b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Glen Burnie Geometry Tutor

Find a Glen Burnie Geometry Tutor

I majored in Physics for my undergraduate (BS) and graduate (MS) study. However, my Ph.D. was obtained in Biophysics. Therefore, I have a very strong background for teaching math and science (Biology/Physics). I have extensive experience teaching Math and Science to students from middle school through graduate school. I am very patient and know how to teach students at all different levels.

18 Subjects: including geometry, chemistry, calculus, physics

...I have taken all relevant high school level classes: algebra, geometry, probability, precalculus, calculus, statistics, and more. I graduated with a B.S. in Biology from UNC Asheville with a concentration in ecology and evolutionary biology. I took several plant-related electives in my degree.

26 Subjects: including geometry, chemistry, calculus, statistics

...As an inclusion teacher, about half of my students are diagnosed with ADD or some other form of disability. I have developed a full range of strategies to sustain attention, break activities up into manageable segments, and engage students through technology and hands-on activities. Microsoft O...

28 Subjects: including geometry, reading, English, writing

...Through studying mathematics, students can learn to process information and make decisions based on data and established facts rather than through gut feelings and bad info. Some students aren't really reaping these benefits. Maybe the class is moving too fast, or there are holes in their basic math knowledge.

19 Subjects: including geometry, English, reading, physics

...My name is Molly, and I tutor privately in the Bowie, Maryland, area. I currently help children with their math, science, writing, language arts and other general homework subjects on a daily basis. As a tutor, I am very patient and very positive.

16 Subjects: including geometry, reading, algebra 1, English
{"url":"http://www.purplemath.com/Glen_Burnie_Geometry_tutors.php","timestamp":"2014-04-19T19:55:06Z","content_type":null,"content_length":"24097","record_id":"<urn:uuid:e550f06a-2141-4705-9044-1d4dee68d25e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"}
Framingham SAT Math Tutor

Find a Framingham SAT Math Tutor

...Algebra, Trigonometry, Geometry, precalculus, and calculus. In some cases, I've worked under difficult circumstances, either because of time pressure or because we had a lot of material to cover quickly.

47 Subjects: including SAT math, reading, chemistry, statistics

...I have experience teaching, lecturing, and tutoring undergraduate level math and physics courses for both scientists and non-scientists, and am enthusiastic about tutoring at the high school level. I am currently a research associate in materials physics at Harvard, and have completed a postdoc in g...

16 Subjects: including SAT math, calculus, physics, geometry

...I received all As in my Macroeconomics, Microeconomics, International Economics and Calculus in Economics courses. I was approved by my school to be an Economics tutor in our Academic Center, but I didn't take the job because it required me to stay on campus most of the time during the weekdays, which I just couldn't do. I am a native Chinese (Mandarin) speaker.

11 Subjects: including SAT math, geometry, accounting, Chinese

...As an educator, I build excitement for learning by motivating, inspiring, and sparking curiosity: meeting students at their OWN level, demonstrating respect for students as people with unique lives and interests, and creating interesting and relevant lessons. I have a Masters Degree in Element...

16 Subjects: including SAT math, reading, writing, algebra 1

...I was trained with the Suzuki Method and completed all levels of Suzuki by age 10. When I was an elementary school student, I played with the middle school orchestra. Since elementary school, I have been concertmistress and principal violinist of my elementary, middle, and high school orchestras.

11 Subjects: including SAT math, Spanish, accounting, ESL/ESOL
{"url":"http://www.purplemath.com/framingham_sat_math_tutors.php","timestamp":"2014-04-19T09:42:40Z","content_type":null,"content_length":"23978","record_id":"<urn:uuid:ab206a19-aec9-43d4-aeac-5364fd9aff35>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
MLN and LEGO.com Help Blog

There are 4 symbols in order to create a cheat code. Number 1 represents the first symbol. Number 2 represents the second symbol. Number 3 represents the third symbol. Number 4 represents the fourth.

However you need to know what the first symbol is. :P The first symbol is the "e" turned to the right.

Cheat codes:

SPOILER: CHEATCODES BELOW! YOU NEED TO HIGHLIGHT THEM IN ORDER TO SEE THEM.

500 coins : 1144
1000 coins : 1134
5000 coins : 2214
10000 coins : 4214
1000 experience points : 1334 - 1212 - 3234 - 3134
Unlock weapons : 4231 - 2234 - 4213 - 3113
Unlock cards : 4434 - 2112 - 3344 - 3424
Unlocks a set of cards: 2111 - 2112 - 2113 - 2114

After choosing which ninja you want to play with, you see 4 symbols in the middle of the screen. The top symbol is the first number. To verify your code, click the dragon at the top of the game. ;)

If you have any questions, feel free to ask them here. ;)

70 comments:

need high light them???

In the section called "Unlocks a set of cards" what are numbers 5 and 6? You have a repeat that is 2112 in "Unlocks a set of cards" and "Unlocks cards."

Know anymore card codes?

All the cheatcodes are listed here, there are no more codes. :)

How do you enter the code? All I see are symbols? Please help. Thanks.

where do you put in the numbers?

After you choose a ninja, you come to the homepage with Sensei Wu. In the middle of the screen are the numbers as characters. The first number is on top, the next one under that, and so on. Click the dragon to verify.

In the section called "Unlocks a set of cards" what are numbers 5 and 6? You have a repeat that is 2112 in "Unlocks a set of cards" and "Unlocks cards." Please reply asap. There are more cards that are still not unlocked, even with the codes you are showing. How do you unlock them? Please tell me what to do if they can be unlocked. Thank you!

Sorry, buddy, but I guess you're gonna have to figure that one out on your own.

there is no 5 and 6!

Have to highlight the letters above to see the tricks

Thank you very much! Good contribution! I just edited the codes.

WHAT DOES EACH SYMBOL MEAN?

can the cards with a ! in front of them be unlocked?

-- The first symbol is the "e" turned to the right.
-- I don't know what it means.
-- If you tried all the cheatcodes, and you didn't unlock it, then you cannot obtain it by using a cheatcode.

I can't manage to unlock the cheats

No problem, it's simple. You highlight or underline the cheats in the post above and voila!

one cheat is swirl lizard lizard # another is dragon dragon lizard # another is dragon dragon swirl # another is lizard lizard # # One is "t#H#

what is "dragon" and "lizard"? i dont understand

so put in a code like this: swirl, swirl, t, #

Thanks for the cheats they really helped me in the game, beat it using them

i cant use my cards! What should i do?
in card library you can enter codes right above where you click to enter the card library.

what is the second, third and fourth symbol?

i unlocked the universe code it did not work none of them work D:<

a question from my son: where are the four symbols and where is the dragon? Thank you

what does that mean

1st symbol: e
2nd symbol: #
3rd symbol: lizard
Last symbol: # with a flat bottom

Here's a video of the totem pole code as well as the Card Library numerical codes.

4434 - Lego Game Card
2112 - Lego Universe Card
3344 - Ninjago DS Card
3424 - Lego Club Card

You can go to Card Library and it will say enter code. Ex: Enter Code

Where does the stuff show??? I redeemed 2 of it and it doesn't work!? D:

SORRY BUT ONLY 2 WEAPON CODES WORK HOW DO YOU UNLOCK THE OTHER WEAPONS? AND NONE OF THE SET OF CARDS CHEATS WORK EITHER CAN YOU HELP ME? PLEASE

how do you use the cards

i knew the symbols. symbols are also letters. here are the symbol letters: A (upside down), E, H, and T.

Some of the codes don't work :/

500 coins : 1144
1000 coins : 1134

Ninjago Smash Party Codes
5000 coins : 2214
10000 coins : 4214
1000 experience points : 1334 - 1212 - 3234 - 3134
Unlock weapons : 4231 - 2234 - 4213 - 3113
Unlock cards : 4434 - 2112 - 3344 - 3424
Unlocks a set of cards: 2111 - 2112 - 2113 - 2114

only the useful cheats worked, the others didn't (i know from youtube, from checking)

hey do you know how to EQUIP the weapons

um I'm 10 and i don't get how the whole symbol thing works i did it like 10 freaking times but it won't work. why?

what do the numbers mean?

when i put in a code some of them don't work

some of the codes don't work.

Lego Ninjago are awesome! children love playing with them! who needs mr miagi when you have ninjago lol

what do the codes mean?

very HIGHLIGHT

Is there any chance to solve sandlevel 3 (collecting all bricks) and to solve the quest killing all vipers in sandlevel 8?

There is two codes that you didnt show they both are for lego universe

these dont work

When you first select your Ninja, the pole in the center of the screen has the symbols on it. The top symbol is the first symbol. The symbols then go through the cycle in order as you click on them. 1 = The first symbol you see at the top. Click on the symbol. The symbol that appears next is #2. Click on it again, the next that appears is #3. And then again is #4. Then it goes back to #1. I hope this helped a bit.

What the heck, my 4 cards from cheat codes are my best and I can't even use them!? So here's my question for you. Why can't I use them???

NO cards or weapons worked

You get 500 Coins, I think that code is missing.

Thank you so much. All of the codes worked. I also appreciated the spoiler resistance by writing the codes in white.

Looks like codes don't work or I don't understand them
{"url":"http://mylegonetworkblog.blogspot.com/2011/01/ninjago-spinjitzu-smash-cheat-codes.html","timestamp":"2014-04-20T21:27:06Z","content_type":null,"content_length":"142034","record_id":"<urn:uuid:11dc572f-c73b-438e-9619-ae9ad615cc2d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Annals of Statistics 2007, Vol. 35, No. 5, 2261-2286. DOI: 10.1214/009053607000000226. © Institute of Mathematical Statistics, 2007.

Tel Aviv University, The Open University of Israel and University of Central Florida

We consider a problem of recovering a high-dimensional vector µ observed in white noise, where the unknown vector µ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of l0-type penalties. The penalties are associated with various choices of the prior distributions π_n(·) on the number of nonzero entries of µ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of π_n(·). Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors π_n(·) for which the resulting estimator is adaptively optimal
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/108/3058624.html","timestamp":"2014-04-16T13:44:09Z","content_type":null,"content_length":"8250","record_id":"<urn:uuid:17e5f414-9112-4828-8321-ea12c2d474c8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Kennedale Math Tutor

Find a Kennedale Math Tutor

...I hold a Master's degree from the University of Zurich, Switzerland. The classes for matriculation (eligibility to enter university), Type B, included Algebra, Geometry and Math/Calculus. I have home-schooled my children and I have prepared them for AP Math/Calculus, AP World History, AP French and AP German.

15 Subjects: including prealgebra, French, German, Italian

...I am interested in crafting with my pupils (and their parents, as the case may be) a reasonable plan to achieve their goals. In any case, I am demanding of my pupils but on my part give the necessary effort in return. In addition to competency, I try to provide a good example to my pupils.

40 Subjects: including trigonometry, discrete math, precalculus, prealgebra

...In that capacity I worked with students who had a wide range of abilities in math, both the math-challenged and the proficient. In addition, I worked extensively with students to improve their testing strategies to help with the TAKS test and the SAT. As a Secondary Math Teacher, an essential part of my job was preparing my students for the TAKS exam.

82 Subjects: including ACT Math, biology, calculus, ASVAB

My educational background is in the rigorous Oxford-style Liberal Arts and Classical Disciplines of Theology, Philosophy, and Literature. While my specialties are these subjects in particular, such a well-rounded education in the fundamental thought of Western Civilization makes it possible for me ...

52 Subjects: including geometry, reading, SAT math, English

...I have been a tutor for the Arlington, TX Library for almost one year now. I know firsthand how difficult math can be and have hired tutors while I was in college. Now, working as a tutor, it is a real pleasure for me to see the "light bulbs" turn on in my students' minds.

4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
{"url":"http://www.purplemath.com/Kennedale_Math_tutors.php","timestamp":"2014-04-18T05:40:55Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:4799a05b-fada-4e38-aca5-7c15beee67c3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics

Field arithmetic. 2nd revised and enlarged ed. (English) Zbl 1055.12003
Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge 11. Berlin: Springer (ISBN 3-540-22811-X/hbk). xxii, 780 p. $ 159.00; EUR 149.95; £ 115.00; sFr 254.00 (2005).

Since the appearance of the first edition of Field Arithmetic in 1986, research on the subject has increased remarkably. The goal of this new edition is to enrich the book with an extensive account of the progress made in this field, even if, as the authors say, the task of giving a fully complete account in just one book is probably beyond reach. In this review, also necessarily incomplete, we recall a few of the major changes introduced in the new edition, referring to Zbl 0625.12001 for comments on the first edition.

The background material, roughly speaking the material contained in Chapters 1 to 12, has been reorganized and enlarged; in particular, arguments like linear disjointness of fields and algebraic function fields of one variable have been significantly expanded. Chapter 13, devoted to classical Hilbertian fields, now includes recent results of Zannier and Haran, which led to the solution of classical problems about the Hilbertianity of some fields (see the list of open problems in the first edition). Matzat's results on the GAR realization of simple finite groups and their implications for the solvability of the embedding problem are included in Chapter 16. Melnikov's formations $𝒞$, i.e., the sets of all finite groups whose composition factors belong to a given set of finite simple groups, are discussed in Chapter 17. Chapter 21 includes a full proof of Schur's conjecture about polynomials with coefficients in the ring of integers ${𝒪}_{K}$ of a number field $K$ that induce bijections on ${𝒪}_{K}/P$ for infinitely many prime ideals $P$ of ${𝒪}_{K}$. The study of free profinite groups of infinite rank has been substantially expanded in Chapter 25. Probabilistic arguments regarding generators of free profinite groups are given in Chapter 26.

As in the first edition, an important role in the development of the theory is attributed to methods coming from logic. Therefore there are a number of chapters devoted to logical arguments, like ultraproducts, decision procedures, nonstandard model theory and undecidability. Each chapter includes notes on related literature and almost every chapter has a list of exercises. The last chapter recalls the list of open problems of the first edition, with comments on partial or full solutions, and presents a new list of open problems.

In the reviewer's opinion, the book is a very rich survey of results in Field Arithmetic and could be very helpful for specialists. On the other hand, it also contains a large number of results of independent interest, and is therefore highly recommended to many others too.

12E30 Field arithmetic
12-02 Research monographs (field theory)
12E25 Hilbertian fields; Hilbert's irreducibility theorem
12F12 Inverse Galois theory
12Lxx Connections of field theory with logic
14G05 Rational points
{"url":"http://zbmath.org/?format=complete&q=an:1055.12003","timestamp":"2014-04-17T04:34:27Z","content_type":null,"content_length":"23863","record_id":"<urn:uuid:5824b12c-b84e-434c-b73c-ece538464b67>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Integrating Factor help
November 16th 2010, 09:45 AM
Integrating Factor help
The solution to this might be trivial but I just need a bit of explaining. The question I have is in the form of dy/dx + Py = Q, but in front of the dy/dx there is another x and I'm just not sure what I need to do with this. Any help would be appreciated. Thanks!
November 16th 2010, 09:52 AM
If I understand correctly, you have $x\frac{dy}{dx} + P(x)y = Q(x)$. This is the same as $\frac{dy}{dx} + \frac{P(x)}{x}y = \frac{Q(x)}{x}$. Then your integrating factor is $\displaystyle e^{\int \frac{P(x)}{x} dx}$
November 16th 2010, 09:54 AM
Yes, that's exactly what I meant. Thanks!
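For completeness, the step the integrating factor buys you (a sketch following on from the reply above, with $\mu$ denoting the factor):

    $\mu(x) = e^{\int \frac{P(x)}{x}\,dx}
    \quad\Longrightarrow\quad
    \frac{d}{dx}\bigl(\mu(x)\,y\bigr) = \mu(x)\,\frac{Q(x)}{x}
    \quad\Longrightarrow\quad
    y = \frac{1}{\mu(x)}\left(\int \mu(x)\,\frac{Q(x)}{x}\,dx + C\right).$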
{"url":"http://mathhelpforum.com/calculus/163435-integrating-factor-help-print.html","timestamp":"2014-04-18T21:09:05Z","content_type":null,"content_length":"5369","record_id":"<urn:uuid:dcb759ef-eeac-43be-b9e1-51c5521aef4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
On the computational power of neural nets
Results 11 - 20 of 122

, 1998. Cited by 31 (1 self):
"We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations."

, 1996. Cited by 31 (4 self):
"We show closed-form analytic functions consisting of a finite number of trigonometric terms can simulate Turing machines, with exponential slowdown in one dimension or in real time in two or more. 1 A part of this author's work was done when he was visiting DIMACS at Rutgers University. 1 Introduction Various authors have independently shown [9, 12, 4, 14, 1] that finite-dimensional piecewise-linear maps and flows can simulate Turing machines. The construction is simple: associate the digits of the x and y coordinates of a point with the left and right halves of a Turing machine's tape. Then we can shift the tape head by halving or doubling x and y, and write on the tape by adding constants to them. Thus two dimensions suffice for a map, or three for a continuous-time flow. These systems can be thought of as billiards or optical ray tracing in three dimensions, recurrent neural networks, or hybrid systems. However, piecewise-linear functions are not very realistic from a physical p ..."

, 1992. Cited by 31 (14 self):
"This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function σ; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and ---except at most for sign reversals at each node--- the same weights. Moreover, even if the activations are not a priori known to coincide, they are shown to be also essentially determined from the external measurements. Key words: Neural networks, identification from input/output data, control systems 1 Introduction Many recent papers have explored the computational and dynamical properties of systems of interconnected "neurons." For instance, Hopfield ([7]), Cowan ([4]), and Grossberg and his school (see e.g. [3]), have all studied devices that can be modelled by sets of nonlinear dif ..."

- Advances in Algorithms, Languages, and Complexity, 1997. Cited by 29 (6 self):
"Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions. 1 Introduction After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally sufficiently simple to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45]). However, the more fundamental ..."

- IEEE TRANSACTIONS ON NEURAL NETWORKS, 1994. Cited by 24 (4 self):
"We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs."

- SIAM Journal on Computing, 1997. Cited by 24 (3 self):
"... this paper is to prove that BP(PAR_ℝ) = PSPACE/poly where PAR_ℝ is the class of sets computed in parallel polynomial time by (ordinary) real Turing machines. As a consequence we obtain the existence of binary sets that do not belong to the Boolean part of PAR_ℝ (an extension of the result in [20] since PH_ℝ ⊆ PAR_ℝ) and a separation of complexity classes in the real setting."

- Neural Computation, 2005.
Cited by 24 (3 self):
"... this paper, the progress of this development is reviewed and analysed in detail. In order to structure the survey and to evaluate the techniques, a taxonomy, specifically designed for this purpose, has been developed. Moreover, important open research issues are identified, that, if addressed properly, possibly can give the field a significant push forward."

, 1993. Cited by 23 (3 self):
"We deal with computational issues of loading a fixed-architecture neural network with a set of positive and negative examples. This is the first result on the hardness of loading networks which do not consist of the binary-threshold neurons, but rather utilize a particular continuous activation function, commonly used in the neural network literature. We observe that the loading problem is polynomial-time if the input dimension is constant. Otherwise, however, any possible learning algorithm based on particular fixed architectures faces severe computational barriers. Similar theorems have already been proved by Megiddo and by Blum and Rivest, for the case of binary-threshold networks only. Our theoretical results lend further justification to the use of incremental (architecture-changing) techniques for training networks rather than fixed architectures. Furthermore, they imply hardness of learnability in the probably-approximately-correct sense as well."

, 1997. Cited by 23 (5 self):
"Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k ≫ w. Ignoring multiplicative constants, the main results say roughly the following: • For architectures with activation σ = a ..."

, 1995.
Cited by 23 (0 self):
"95 CDC- Keywords: complexity, controllability, nonlinear Extended Summary for Invited Session entitled Computational Complexity Issues in Control 1. Introduction It is obvious that many control problems are in general easier to solve for linear systems than for arbitrary, not necessarily linear, ones. An interesting and worthy area of research deals with the attempt to make mathematically precise the increases in difficulty that may arise when passing to the nonlinear case. By obtaining such precise statements, one gains an understanding of which analysis and/or design problems may be expected to be intractable. For instance, even for apparently mildly nonlinear systems it becomes impossible to check if a state ever reaches the origin. More interestingly perhaps, one also can then explain in what sense some variants of problems are easier than others for nonlinear systems. An example of this latter aspect is given by comparing the characterization of the accessibility property (being ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=154865&sort=cite&start=10","timestamp":"2014-04-18T01:45:03Z","content_type":null,"content_length":"37746","record_id":"<urn:uuid:d325b131-87b9-4279-ac4d-6aabd6001e96>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
scipy.fftpack.idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=0)[source]
Return the Inverse Discrete Cosine Transform of an arbitrary type sequence.

Parameters:
x : array_like
    The input array.
type : {1, 2, 3}, optional
    Type of the DCT (see Notes). Default type is 2.
n : int, optional
    Length of the transform.
axis : int, optional
    Axis over which to compute the transform.
norm : {None, 'ortho'}, optional
    Normalization mode (see Notes). Default is None.
overwrite_x : bool, optional
    If True the contents of x can be destroyed. (default=False)

Returns:
idct : ndarray of real
    The transformed input array.

Notes:
For a single dimension array x, idct(x, norm='ortho') is equal to MATLAB idct(x).
'The' IDCT is the IDCT of type 2, which is the same as DCT of type 3. IDCT of type 1 is the DCT of type 1, IDCT of type 2 is the DCT of type 3, and IDCT of type 3 is the DCT of type 2. For the definition of these types, see dct.
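A quick round-trip sketch of the relationship described in the Notes (the input values are arbitrary; requires SciPy):

    import numpy as np
    from scipy.fftpack import dct, idct

    x = np.array([4.0, 3.0, 5.0, 10.0])
    y = dct(x, type=2, norm='ortho')        # forward DCT-II, orthonormal scaling
    x_back = idct(y, type=2, norm='ortho')  # inverse (a scaled DCT-III)
    print(np.allclose(x, x_back))           # True: the pair round-trips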
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.idct.html","timestamp":"2014-04-20T10:56:03Z","content_type":null,"content_length":"8609","record_id":"<urn:uuid:d2d5241f-cd8b-4b37-b47c-a220b3ccac55>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Seventh Grade Mathematics
In Grade 7, instructional time should focus on four critical areas: (1) developing understanding of and applying proportional relationships; (2) developing understanding of operations with rational numbers and working with expressions and linear equations; (3) solving problems involving scale drawings and informal geometric constructions and working with two- and three-dimensional shapes to solve problems involving area, surface area and volume; and (4) drawing inferences about populations based on samples.
{"url":"http://wvde.state.wv.us/instruction/SeventhGradeMathematics.htm","timestamp":"2014-04-20T11:20:11Z","content_type":null,"content_length":"2019","record_id":"<urn:uuid:e4065479-b90a-49c4-aa17-bd1f34c9e55d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
22 comments:
2. please make a change of the if statement
3. Great work :) thanks a lot.. my doubt got cleared ...
7. YOU CAN TRUNCATE THE MIDDLE PART NUM%100 BECAUSE THERE ARE SOME LEAP YEARS WHOSE MODULUS IS EQUAL TO A HUNDRED
8. why don't u write it as year%4==0? it means year divided by 4 leaves remainder 0.. is it correct?
9. can u help me do it without using an if statement
10. write a c program for swapping of two numbers without using a third variable & without using addition & subtraction?
12. 1500 and 1700 are leap years... you mentioned that it is not a leap year...
1. From wiki: 1600 was a leap year, but 1700, 1800 and 1900 were not. To understand the concept of leap years I hope this link will help you: Concept of leap year
13. i have a doubt... can anybody clear this please??? is there any need to divide a year by 100?? why are we using that???
1. I hope this will be helpful ..
1600, 2000 etc. are leap years while 1500, 1700 are not leap years.
// if we enter 1500
if ( ( (year%4==0) && (year%100!=0)) || (year%400==0) ) // according to this 1500 is not a leap year..
if ( (year%4==0) || (year%400==0) ) // according to this 1500 is a leap year..
16. can you do the same program without using any logical operator?
17. Nice explanation available in this : http://www.youtube.com/watch?v=1x-oBF3vgk0&list=PLPgpWDN1BdTvWebvqTXQie6yIjx6yOXFZ&index=2
18. int main(){ int year; printf("Enter any year: "); printf("%d is a leap year",year); printf("%d is not a leap year",year); return 0;
I think it's a wrong code. when I try 2200 it shows that the year isn't a leap year. I think the code will be,
int main(){ int year; printf("Enter any year: "); printf("%d is a leap year",year); printf("%d is not a leap year",year); return 0;
If I'm wrong, please explain......
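To pull the thread's consensus together, here is a minimal self-contained version of the check (the 4/100/400 rule the replies converge on; the helper name is illustrative):

    #include <stdio.h>

    int is_leap(int year) {
        /* divisible by 4 and not by 100, or divisible by 400 */
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main(void) {
        int year;
        printf("Enter any year: ");
        if (scanf("%d", &year) != 1)
            return 1;
        if (is_leap(year))
            printf("%d is a leap year\n", year);
        else
            printf("%d is not a leap year\n", year);
        return 0;
    }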
{"url":"http://www.cquestions.com/2008/01/write-c-program-for-checking-leap-year.html","timestamp":"2014-04-16T13:31:41Z","content_type":null,"content_length":"173095","record_id":"<urn:uuid:54b4f866-a9cb-4b0f-9bef-cfb8eff6d59a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Start-Up of a Plate Gas Absorption Unit
The start-up behavior of a six-plate gas absorption unit is described by a solute balance on each plate, $H\,\frac{dx_n}{dt} = L\,(x_{n+1} - x_n) + G\,(y_{n-1} - y_n)$ for $n = 1, \dots, 6$, where $L$ and $G$ are the liquid and gas flow rates; $K$ is the equilibrium constant (i.e., $y_n = K x_n$, where $x$ and $y$ are the liquid and gas mole fractions), and $H$ is the liquid hold-up in the plates. It is assumed that the liquid solvent entering the absorption column is solute free (i.e., $x_7 = 0$). The gas to be treated entering the absorption column has a fixed solute mole fraction $y_0$. One can solve the system of ODEs using the built-in NDSolve. An elegant analytical solution was made available by Lapidus and Amundson in 1950 (and later by Acrivos and Amundson, 1955) and is described in detail by J. M. Douglas (see reference below). This Demonstration shows that both methods give the same result for the output gas composition versus time.
J. M. Douglas, Process Dynamics and Control, Vol. 1: Analysis of Dynamic Systems, Englewood Cliffs, NJ: Prentice Hall, 1972.
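The Demonstration's own source code is not reproduced on this page; purely as an illustration, the plate-balance model stated above can be integrated numerically with SciPy. All parameter values here are made up:

    import numpy as np
    from scipy.integrate import odeint

    L, G, K, H = 40.0, 60.0, 0.5, 75.0   # illustrative values only
    y_feed = 0.1                          # assumed solute fraction in entering gas

    def rhs(x, t):
        # x[0..5]: liquid mole fractions on plates 1..6 (plate 6 at the top)
        y = K * x                         # equilibrium on each plate
        dx = np.empty(6)
        for n in range(6):
            x_above = x[n + 1] if n < 5 else 0.0   # solute-free liquid enters at top
            y_below = y[n - 1] if n > 0 else y_feed
            dx[n] = (L * (x_above - x[n]) + G * (y_below - y[n])) / H
        return dx

    t = np.linspace(0.0, 20.0, 200)
    x = odeint(rhs, np.zeros(6), t)       # start-up from clean plates
    print("outlet gas solute fraction at t_end:", K * x[-1, 5])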
{"url":"http://demonstrations.wolfram.com/StartUpOfAPlateGasAbsorptionUnit/","timestamp":"2014-04-20T23:32:21Z","content_type":null,"content_length":"43848","record_id":"<urn:uuid:cf2a8731-24a6-469c-b307-a59b790f72ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help
Posted by Jman on Wednesday, October 24, 2012 at 3:30pm.
I posted this earlier but no one answered. :(
1. Given a scale factor of 2, find the coordinates for the dilation of the line segment with endpoints (-1,2) and (3,-3).
a. (2,4) and (6,6)
b. (2,4) and (6,6)
c. (-2,4) and (6,-6)
d. (2,-1) and (-3,3)
2. Given a scale factor of 1/2, find the coordinates for the dilation of the triangle with vertices at (0,0), (0,2), and (4,0).
a. (0.0), (0,4), (8,0)
b. (0,0), (0,1), (2,0)
c. (1/2,1/2), (1/2,1), (2,1/2)
d. (2,2), (2,4), (8,2)
• Math Help - Jman, Wednesday, October 24, 2012 at 3:43pm
Please help
• Steve or Ms. Sue Please Help - Jman, Wednesday, October 24, 2012 at 3:50pm
Please help me
• Math Help - Jman, Wednesday, October 24, 2012 at 4:02pm
Please someone help me
• Math Help - Kasey, Thursday, November 8, 2012 at 3:02pm
Kid, you need to do your homework. I do my lessons and take my tests. The answers I miss I research. I go to Connections Academy and have been following your posts for some time. I know who you are. Now if you can't figure these out, do the lesson again. If you post one more anywhere I will notify all 8th grade teachers and you will be kicked out of CA. DO YOUR OWN WORK!!!!
• Math Help - Kasey, Thursday, November 8, 2012 at 3:03pm
Hope this helped, God bless and have a wonderful day:) Loves!
Related Questions
Algebra - I posted this earlier but it didn't get answered. So I'm posting it ...
Math-Reiny - I posted this problem earlier and you answered it for me. The ...
Math HELP! - I posted a few questions and then went back and posted how I ...
Algebra - I posted this earlier in the day but no one answered. Can anyone help ...
intro chemistry - Can you tell me why ketones and aldehydes don't bond with ...
For George - At 11:37 and at 12:24 you posted the same question the earlier one ...
help? - the last few times I've posted questions no one has answered them...I'm ...
Math Expert Help Me - I posted this earlier but no one answered. :( 1. Given a ...
Science EASY HELP - I had posted this question before but no one answered for ...
Math - I posted this problem the other day and someone answered it but when I ...
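For anyone revisiting this topic later, the general rule behind both questions (stated without giving away the answer choices above) is that a dilation centred at the origin with scale factor k multiplies every coordinate by k:

    $(x, y) \mapsto (kx, ky), \qquad \text{e.g. } k = 3:\ (2, -1) \mapsto (6, -3), \qquad k = \tfrac{1}{2}:\ (4, 6) \mapsto (2, 3).$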
{"url":"http://www.jiskha.com/display.cgi?id=1351107025","timestamp":"2014-04-20T11:58:43Z","content_type":null,"content_length":"9753","record_id":"<urn:uuid:ffeb604c-f71b-42b9-a665-37269836585c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] PCA on set of face images
devnew@gmai... devnew@gmai...
Fri Feb 29 13:15:00 CST 2008
hi guys
I have a set of face images with which I want to do face recognition using Pentland's PCA method. I gathered these steps from their docs:
1. represent matrix of face images data
2. find the adjusted matrix by subtracting the mean face
3. calculate covariance matrix (cov = A * A_transpose) where A is from step 2
4. find eigenvectors and select those with highest eigenvalues
5. calculate facespace = eigenvectors * A
when it comes to implementation I have doubts as to how I should represent the matrix of face images. Using PIL image.getdata() I can make an array of each greyscale image. Should the matrix be like each row contains an array representing an image? That will make a matrix with rows = numimages, and covariancematrix = A * A_transpose will create a square matrix of shape numimages x numimages.
Using numpy.linalg.eigh(covariancematrix) will give eigenvectors of same shape as the covariance matrix.
I would like to know if this is the correct way to do this. I have no big expertise in linear algebra so I would be grateful if someone can confirm the right way of doing this
More information about the Numpy-discussion mailing list
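A minimal NumPy sketch of the layout described above (one flattened image per row, mean subtracted, the small numimages x numimages covariance trick); array sizes and variable names are illustrative:

    import numpy as np

    # faces: one flattened greyscale image per row (numimages x numpixels)
    faces = np.random.rand(20, 64 * 64)          # placeholder data

    mean_face = faces.mean(axis=0)
    A = faces - mean_face                        # step 2: adjusted matrix

    cov = A @ A.T                                # step 3: numimages x numimages
    eigvals, eigvecs = np.linalg.eigh(cov)       # step 4: ascending eigenvalues

    top = eigvecs[:, -10:]                       # keep the 10 largest eigenvalues
    eigenfaces = top.T @ A                       # step 5: facespace, 10 x numpixels
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

    weights = A @ eigenfaces.T                   # project each face into facespace
    print(weights.shape)                         # (20, 10)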
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-February/031684.html","timestamp":"2014-04-16T07:39:03Z","content_type":null,"content_length":"3543","record_id":"<urn:uuid:cfab5233-3e38-415e-a77d-7489dbbe41f3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
The Plus sports page: The curse of the duck
December 2008
Cricket fans love their stats. Even the most casual follower can rattle off the batting averages of their favourite players or tell you how many wickets such-and-such a bowler took in the last test. The most passionate followers can recite each scorecard from this year's Wisden. The recent news of the great Indian batsman Sachin Tendulkar surpassing West Indian Brian Lara's record number of test runs has given maths-loving cricket geeks another opportunity to pull out their calculators and Excel spreadsheets. I'm openly one of these nuts and did just that. At the time of writing, Tendulkar had scored 12,027 runs across 247 innings, to overtake Lara's 11,953 from 232 innings. After a little investigation, I found that despite his outstanding average of over 54 runs per innings, Tendulkar's most common score in test cricket is ... zero! This was quite a shock — the most prolific run-scorer in test cricket has been out for nought (a duck in cricket parlance) 14 times, well ahead of his second most common score — which incidentally is the next lowest you can get: one!
Donald Bradman was well known for his high backlift and lengthy forward stride.
This is completely counter-intuitive, so I took this investigation further. Australian cricketer Sir Donald Bradman is universally regarded as the best batsman ever to have played the game. His average, an astounding 99.94, is so far above every other batsman in the history of the game that he is often acclaimed as not only the best cricketer ever, but the best player ever of any sport. His average is so iconic in Australia that the postcode of the ABC (the Australian version of the BBC) is 9994 in every capital city. If it wasn't for the fact that much more test cricket is played nowadays than in the early 1900s, and for World War II interrupting his career for six years, Bradman would have scored many more than the 6996 runs he did score. So, guess what Bradman's most common score was? That's right, zero!
Indeed, looking at every innings by the most prolific batsmen in test history from Tendulkar at number 1 to Bradman at number 34, the most common score is zero — and by quite a long way too. The following figures show the distribution of scores from these top batsmen — on the horizontal axis you see the number of runs and the vertical axis measures the frequency of dismissals at a particular number of runs. The first chart shows every score between 0 and 100, and the second uses five-run wide bins to show scores up to 250. The data only include scores where the batsman was dismissed and so do not include not-out scores.
Scores plotted against dismissal frequency.
Scores in bins of five plotted against dismissal frequency.
Model cricket
A closer look at these distributions shows that they very closely fit what is known as an exponential distribution. An exponential distribution has the form $f(x) = \lambda e^{-\lambda x}$, where $\lambda$ is a constant. In this case $x$ is the batting score and $f(x)$ is proportional to the frequency of dismissal on that score. The graph of this function, plotting $\ln f(x)$ against $x$, is a straight line with slope $-\lambda$.
A straight line fitted to the data. The blue dots represent observed data and the black line represents the model.
A straight line fitted to the data from the second chart above. The blue dots represent observed data and the black line represents the model.
There is a very strong straight line fit in both charts. Using a standard technique called least-squares regression, we can find the straight line that best fits the data. We can determine $\lambda$ from the slope of this fitted line. The mean of an exponential distribution, a sort of average, is $1/\lambda$.
So what?
Now, so far you might be thinking that all of this is only of passing statistical interest. So what if cricket scores follow an exponential distribution? Well, I'm glad you asked! Let's turn for a second to a different distribution, the geometric distribution. You will be familiar with this distribution from a simple 50/50 coin toss. The geometric distribution describes the number of coin tosses you need before a head (or tail) first turns up. The probability of your first head turning up on your $n$th toss is $(1/2)^n$. The exponential distribution is the continuous equivalent of this distribution, extending it to work for all numbers, not just integers. Given that cricket batting scores seem to fit an exponential distribution, this means that we can picture cricket batting scores on a geometric distribution, with the probability of you being dismissed at score $x$ given by $p(1-p)^x$ for some constant $p$.
Sachin Tendulkar against Australia in the 2nd test at the SCG in 2008. Image by Privatemusings.
Can you spot the profound result here? Remembering that the geometric distribution is memory-less, you can interpret this as saying that no matter what score you are currently on, you have the same chance $p$ of being dismissed before adding another run: a constant hazard model. This seems to go against every cricketing manual I have ever read. Accepted cricketing wisdom says that a batsman is more dangerous when (s)he "has the eye in" and has scored 10 or 20 runs. Our result seems to suggest that, apart from when a batsman is on 0, you have just as much chance of dismissing him or her on the current score as on any other score.
The next question to ask is, what is the probability of dismissing a batsman on the current score (that is, what is $p$)? Knowing that the mean of the exponential distribution is $1/\lambda$, and that these top batsmen average around 43 runs, $p$ works out at roughly $1/43 \approx 2.3\%$.
Scores near zero
The biggest deviation from the geometric distribution is for scores near zero. According to our data, the chance of being dismissed for a duck is 6.9% — around 3 times more than expected for a geometric (or exponential) distribution. But by the time the batsman has scored two or three runs, the geometric distribution starts to fit well. There is a small peak at four runs, perhaps because you can relatively easily get to four before you become comfortable — it only takes one streaky shot to the boundary. Whilst you can get to three with one shot, you are more likely to have played a few shots and so may be comparatively more "set".
The data and the geometric distribution. The blue dots represent observed data and the black line represents the model.
An analysis of scores near zero has been completed by Brendon J. Brewer from the University of New South Wales in Getting your eye in: A Bayesian analysis of early dismissals in cricket. Brewer indeed found that batsmen are more vulnerable at the beginning of their innings. By assuming a constant hazard model, Brewer determined the effective average of a batsman before they have scored — that is, assuming a constant hazard model with dismissal probability $p$ on each run, the mean score that corresponds to the observed chance of a duck.
In our data from the best batsmen of all time, dismissal for a duck occurred with a 6.9% chance. The mean of a geometric distribution built around this probability is $(1-0.069)/0.069 \approx 13.5$. This means that even though our batsmen have a mean of about 43, before they've scored they bat like cricketers with a mean of 13.5. Even the best batsmen bat like tail-enders before they get off the mark.
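As an illustration of the fitting procedure just described (with synthetic frequency data standing in for the real cricinfo scorecards):

    import numpy as np

    # freq[s] = number of dismissals on score s (placeholder data)
    scores = np.arange(0, 100)
    freq = np.maximum(1, np.round(400 * np.exp(-scores / 43.0)))

    # least-squares line through (score, log frequency): the slope estimates -lambda
    slope, intercept = np.polyfit(scores, np.log(freq), 1)
    lam = -slope
    print("lambda ~", round(lam, 4), "  mean score 1/lambda ~", round(1 / lam, 1))
    print("chance of dismissal on any given score ~",
          round(100 * (1 - np.exp(-lam)), 1), "%")   # about 2.3% for a mean of 43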
This makes sense — batsmen take some time to acclimatise to the game conditions. But this is a small window — once the batsman has scored about three runs, you have the same chance of dismissal whatever the current score. Interestingly, tiredness does not seem to play a part — the exponential distribution holds well out to 250 runs (quite a few hours of batting). It should be remembered that this analysis was completed on the top 34 run scorers of all time (5953 innings) and so represents the best ever batsmen. Lesser batsmen are likely to get low scores, so perhaps this window is slightly wider for them. But if we turn to the greatest of the great, Bradman, the window is essentially one run. His effective average before he had scored was a very mediocre nine runs. After he had scored two runs, this effective average had risen to 69. You had to get Bradman out very early! • The data was retrieved from cricinfo during the second test between Australia and India on the 19th of October 2008; • Not-out scores were removed from the analysis; • The exponential distribution does break down a little for scores above 250 as there simply isn't enough data; • Yes, Marc has scored a duck in his cricket career. Further reading Previously on the Plus sports page Marc West is a freelance science writer and former Assistant Editor of Plus who currently works in operations analysis in Sydney. As a wannabe Australian cricket player, the stars aligned when Marc somehow scored 114 against Mount Colah in a Sydney shires cricket game. He loves to write about science and sport and has been published in a variety of magazines and newspapers. You can read more of his writing on his personal blog.
{"url":"http://plus.maths.org/content/os/issue49/sport/index","timestamp":"2014-04-18T00:52:53Z","content_type":null,"content_length":"45439","record_id":"<urn:uuid:388c44ec-02e5-4c8d-8093-fb59b0544f25>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: C Library Functions (3) Updated: SGI
pmLookupDesc, pmReceiveDesc, pmRequestDesc - obtain a description for a performance metric
#include <pcp/pmapi.h>
int pmLookupDesc(pmID pmid, pmDesc *desc)
int pmRequestDesc(int ctx, pmID pmid)
int pmReceiveDesc(int ctx, pmDesc *desc)
Given a Performance Metrics Identifier (PMID) as pmid, fill in the given pmDesc structure, pointed to by the parameter desc, from the current Performance Metrics Application Programming Interface (PMAPI) context. The pmDesc structure provides all of the information required to describe and manipulate a performance metric via the PMAPI, and has the following declaration.
/* Performance Metric Descriptor */
typedef struct {
    pmID pmid;      /* unique identifier */
    int type;       /* base data type (see below) */
    pmInDom indom;  /* instance domain */
    int sem;        /* semantics of value (see below) */
    pmUnits units;  /* dimension and units (see below) */
} pmDesc;
/* pmDesc.type -- data type of metric values */
#define PM_TYPE_NOSUPPORT -1 /* not impl. in this version */
#define PM_TYPE_32 0 /* 32-bit signed integer */
#define PM_TYPE_U32 1 /* 32-bit unsigned integer */
#define PM_TYPE_64 2 /* 64-bit signed integer */
#define PM_TYPE_U64 3 /* 64-bit unsigned integer */
#define PM_TYPE_FLOAT 4 /* 32-bit floating point */
#define PM_TYPE_DOUBLE 5 /* 64-bit floating point */
#define PM_TYPE_STRING 6 /* array of char */
#define PM_TYPE_AGGREGATE 7 /* arbitrary binary data */
#define PM_TYPE_AGGREGATE_STATIC 8 /* static pointer to aggregate */
#define PM_TYPE_UNKNOWN 255 /* used in pmValueBlock, not pmDesc */
/* pmDesc.sem -- semantics/interpretation of metric values */
#define PM_SEM_COUNTER 1 /* cumulative ctr (monotonic incr) */
#define PM_SEM_INSTANT 3 /* instant. value continuous domain */
#define PM_SEM_DISCRETE 4 /* instant. value discrete domain */
The type field in the pmDesc describes various encodings (or formats) for a metric's value. If a value is counted in the underlying base instrumentation with less than 32 bits of integer precision, it is the responsibility of the Performance Metrics Domain Agent (PMDA) to promote the value to a 32-bit integer before it is exported into the Performance Metrics Collection Subsystem (PMCS); i.e. applications above the PMAPI never have to deal with 8-bit and 16-bit counters. If the value of a performance metric is of type PM_TYPE_AGGREGATE (or indeed PM_TYPE_STRING), the interpretation of the value is unknown to the PMCS. In these cases, the application using the value, and the PMDA providing the value must have some common understanding about how the value is structured and interpreted.
Each value for a performance metric is assumed to be drawn from a set of values that can be described in terms of their dimensionality and scale by a compact encoding as follows. The dimensionality is defined by a power, or index, in each of 3 orthogonal dimensions, namely Space, Time and Count (or Events, which are dimensionless). For example, I/O throughput might be represented as Space/Time (e.g. Mbytes/sec), while the running total of system calls is Count, memory allocation is Space, and average service time is Time/Count. In each dimension there are a number of common scale values that may be used to better encode ranges that might otherwise exhaust the precision of a 32-bit value. This information is encoded in the pmUnits structure which is embedded in the pmDesc structure.
/*
 * Encoding for the units (dimensions Time and Space) and scale
 * for Performance Metric Values
 *
 * For example, a pmUnits struct of
 *   { 1, -1, 0, PM_SPACE_MBYTE, PM_TIME_SEC, 0 }
 * represents Mbytes/sec, while
 *   { 0, 1, -1, 0, PM_TIME_HOUR, 6 }
 * represents hours/million-events
 */
typedef struct {
    int dimSpace:4;   /* space dimension */
    int dimTime:4;    /* time dimension */
    int dimCount:4;   /* event dimension */
    int scaleSpace:4; /* one of PM_SPACE_* below */
    int scaleTime:4;  /* one of PM_TIME_* below */
    int scaleCount:4; /* one of PM_COUNT_* below */
} pmUnits; /* dimensional units and scale of value */
/* pmUnits.scaleSpace */
#define PM_SPACE_BYTE 0 /* bytes */
#define PM_SPACE_KBYTE 1 /* Kilobytes (1024) */
#define PM_SPACE_MBYTE 2 /* Megabytes (1024^2) */
#define PM_SPACE_GBYTE 3 /* Gigabytes (1024^3) */
#define PM_SPACE_TBYTE 4 /* Terabytes (1024^4) */
/* pmUnits.scaleTime */
#define PM_TIME_NSEC 0 /* nanoseconds */
#define PM_TIME_USEC 1 /* microseconds */
#define PM_TIME_MSEC 2 /* milliseconds */
#define PM_TIME_SEC 3 /* seconds */
#define PM_TIME_MIN 4 /* minutes */
#define PM_TIME_HOUR 5 /* hours */
/*
 * pmUnits.scaleCount (e.g. count events, syscalls, interrupts,
 * etc.) these are simply powers of 10, and not enumerated here,
 * e.g. 6 for 10^6, or -3 for 10^-3
 */
#define PM_COUNT_ONE 0 /* 1 */
Special routines (e.g. pmExtractValue(3), pmConvScale(3)) are provided to manipulate values in conjunction with the pmUnits structure that defines the dimension and scale of the values for a particular performance metric. Below the PMAPI, the information required to complete the pmDesc structure is fetched from the PMDAs, and in this way the format and scale of performance metrics may change dynamically, as the PMDAs and their underlying instrumentation evolve with time. In particular, when some metrics suddenly become 64-bits long, or change their units from Mbytes to Gbytes, well-written applications using the services provided by the PMAPI will continue to function correctly.
pmRequestDesc and pmReceiveDesc are used by applications which must communicate with PMCD asynchronously. These functions take an explicit context handle ctx which must refer to a host context (i.e. a context created by passing PM_CONTEXT_HOST to pmNewContext). pmRequestDesc sends the request to PMCD and returns without waiting for the response; pmReceiveDesc reads the reply from PMCD. It is the responsibility of the application to make sure the data are ready before calling pmReceiveDesc, to avoid blocking.
SEE ALSO
PMAPI(3), pmAtomStr(3), pmConvScale(3), pmExtractValue(3), pmGetConfig(3), pmTypesStr(3), pmUnitsStr(3), pcp.conf(4) and pcp.env(4).
DIAGNOSTICS
The requested PMID is not known to the PMCS
The PMDA responsible for providing the metric is currently not available
Context is currently in use by another asynchronous call.
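A minimal usage sketch of the synchronous call (a hypothetical helper; it assumes a PMAPI context has already been established with pmNewContext() and that pmid came from, e.g., pmLookupName()):

    #include <pcp/pmapi.h>
    #include <stdio.h>

    void show_desc(pmID pmid)
    {
        pmDesc desc;
        int sts = pmLookupDesc(pmid, &desc);
        if (sts < 0) {
            fprintf(stderr, "pmLookupDesc: %s\n", pmErrStr(sts));
            return;
        }
        /* type, semantics and instance domain from the descriptor */
        printf("type=%d sem=%d indom=%u\n", desc.type, desc.sem, desc.indom);
    }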
{"url":"http://www.makelinux.net/man/3/P/pmRequestDesc","timestamp":"2014-04-19T07:21:34Z","content_type":null,"content_length":"15808","record_id":"<urn:uuid:7624d0ad-4d09-4329-90af-ed40dd6b4344>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
How many 3 digit numbers can be made if no digit is repeated and the number can not begin with a zero?
• one year ago
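One standard way to count (a sketch, in case it helps a later reader): the hundreds digit can be any of 1-9, the tens digit any of the 9 remaining digits (zero is now allowed), and the units digit any of the 8 digits left over, so

    $9 \times 9 \times 8 = 648.$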
{"url":"http://openstudy.com/updates/50f6e5a9e4b007c4a2eb19e7","timestamp":"2014-04-19T19:59:23Z","content_type":null,"content_length":"87954","record_id":"<urn:uuid:4df6cff1-2c76-4ddd-b91a-3609bbd72bd1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Perhaps someone can see where i am wrong?
02-12-2002 #1 Registered User Join Date Feb 2002
I have agonized over this for the past hour, and I am not sure where I am going wrong. I have yet to try the while and do-while loops; perhaps that is where I am going wrong. Anyway, I am making a program where, when you input a number, it gives you the multiplication table (only up to 12) for that number. Well, what I need is for it to ask for another number (after it has executed the first number) and repeat the process, and if the number is over 12 to output an error. This is what I have (p.s. ... I am a newbie to C++):
int main()
{
    int k;
    float number;
    cout << "\n Please enter a number you would like multiplied:";
    return 0;
}
int main()
{
    int k;
    float number = -1; // start out of range so we always prompt
    char quit = 'z';
    while ((quit != 'Q') && (quit != 'q'))
    {
        while ((number > 12) || (number < 0))
        {
            cout << "\n Please enter a number from 1-12 you would like multiplied:";
            cin >> number; // read the number to validate
        } // end inner while loop
        for (k = 1; k <= 12; ++k) // print the times table
            cout << number << " x " << k << " = " << number * k << "\n";
        cout << "\nEnter 'Q' to quit or 'C' to continue: ";
        cin >> quit;
        number = -1; // force a new prompt on the next pass
    } // end outer while loop. end program if quit = q or Q
    return 0;
}
isn't it k++?
> isn't it k++?
In that case, either one will work.
thanks for the help, but.............
Maybe I am not cut out for this stuff. I am so lost. In my first post I wrote down what I have. Now what I need is for the person who is using the program to enter a number between 1-12; DOS will then display the times table. Then I need to ask the person to enter another number. I need the loop to quit after 10 times (of asking for another number), and if a number outside of 1-12 is entered, it has to display an error.............. If anyone can help I would be grateful. If anyone needs to see what I have, check out the first post from me. I have a little more now, but it's not doing what I want it to.
look more closely at Betazep's code. It will allow the user to input an unlimited number of choices one after the other. This is controlled by the first, or outer, while loop wherein the conditional checks to make sure that the flag, called quit, isn't the char Q or the char q. If you want to limit the user to making no more than 10 passes through the table then just add another conditional, this time using a counter which has been declared an int and which is incremented by one each time you get to the end of the outer while loop's code and is incorporated into the conditional with another logical AND symbol -- && counter < 11 -- where counter is initialized to one outside either of the while loops.
Thank you
I would just like to say thanks to all of you for replying to my post. It is really nice to come here to the message board and get many answers instead of talking to my teacher or a classmate. I appreciate it a lot. It all helped and I got my program working and handed it in to my teacher.
p.s. (I am brand new to learning C++)
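Putting the last reply's counter idea together with Betazep's outline, one complete, compilable version might look like this (variable names are illustrative; input validation is kept simple):

    #include <iostream>
    using namespace std;

    int main()
    {
        char quit = 'z';
        int counter = 1;                       // allow at most 10 numbers
        while (quit != 'Q' && quit != 'q' && counter < 11)
        {
            int number = -1;
            while (number < 1 || number > 12)  // re-prompt until in range
            {
                cout << "\n Please enter a number from 1-12 you would like multiplied: ";
                cin >> number;
                if (number < 1 || number > 12)
                    cout << "Error: the number must be between 1 and 12.\n";
            }
            for (int k = 1; k <= 12; ++k)      // print the times table
                cout << number << " x " << k << " = " << number * k << "\n";
            cout << "\nEnter 'Q' to quit or 'C' to continue: ";
            cin >> quit;
            ++counter;
        }
        return 0;
    }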
{"url":"http://cboard.cprogramming.com/cplusplus-programming/10806-perhaps-someone-can-see-where-i-am-wrong.html","timestamp":"2014-04-24T21:05:18Z","content_type":null,"content_length":"63977","record_id":"<urn:uuid:31d82a01-7c04-4314-9f72-91e89798c8b7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Southfield, MI Calculus Tutor Find a Southfield, MI Calculus Tutor ...Algebra I subject matter includes: a. One and two step equations b. Solving Inequalities c. 30 Subjects: including calculus, chemistry, writing, reading Hello there! My name is Brad and I look forward to helping you learn. I believe that all people are capable of learning and this translates into a tutoring style that centers around you learning rather than me teaching. 23 Subjects: including calculus, English, reading, precalculus ...The Upper Level is for students in grades 8 through 11 who are candidates for grades 9 through 12. The ISEE consists of three parts: (a) carefully constructed and standardized verbal and quantitative reasoning tests that measure a student's capability for learning; (b) reading comprehension and ... 31 Subjects: including calculus, reading, English, physics ...I am confident that given the chance, I can add you or your child to my list. My approach to Algebra 2 is always two fold. I work hard to make sure a student has the basic skills to realistically progress throughout the class while maintaining current work levels. I do this through a set of uniq... 16 Subjects: including calculus, chemistry, Spanish, physics ...While Algebra and Trigonometry may seem boring, Calculus should never be boring. Having the fundamentals down pat in Calculus will tremendously help a student in future math/science/engineering courses in college. As a science major, I've seen many students flinch once any calculus becomes involved in the course. 23 Subjects: including calculus, chemistry, biology, algebra 1 Related Southfield, MI Tutors Southfield, MI Accounting Tutors Southfield, MI ACT Tutors Southfield, MI Algebra Tutors Southfield, MI Algebra 2 Tutors Southfield, MI Calculus Tutors Southfield, MI Geometry Tutors Southfield, MI Math Tutors Southfield, MI Prealgebra Tutors Southfield, MI Precalculus Tutors Southfield, MI SAT Tutors Southfield, MI SAT Math Tutors Southfield, MI Science Tutors Southfield, MI Statistics Tutors Southfield, MI Trigonometry Tutors Nearby Cities With calculus Tutor Berkley, MI calculus Tutors Beverly Hills, MI calculus Tutors Bingham Farms, MI calculus Tutors Farmington Hills, MI calculus Tutors Farmington, MI calculus Tutors Lathrup Village, MI calculus Tutors Livonia, MI calculus Tutors Oak Park, MI calculus Tutors Redford Twp, MI calculus Tutors Redford, MI calculus Tutors Royal Oak Twp, MI calculus Tutors Royal Oak, MI calculus Tutors Southfield Township, MI calculus Tutors Troy, MI calculus Tutors West Bloomfield, MI calculus Tutors
{"url":"http://www.purplemath.com/southfield_mi_calculus_tutors.php","timestamp":"2014-04-17T01:11:42Z","content_type":null,"content_length":"24101","record_id":"<urn:uuid:5ea2336f-6d7d-4bbe-8b62-cbdab960aa09>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing car efficiency at different power outputs (accelerations)
Physics dictates that the amount of energy required to accelerate a given mass to a specific velocity is independent of the acceleration. (Final kinetic energy = 1/2mV^2 => how fast you got there doesn't change the amount of energy required). This is of course under ideal conditions (in a vacuum, rolling resistance is constant, etc.) but it should still be usable as a guideline in the real world.
Based on some very crude experiments with my Model S, I've got the idea that heavy acceleration to a given speed is less efficient (uses more energy) than light acceleration. If true, this must be an artifact of some other component of the system: the motor, inverter, battery, etc.
• I don't know much about efficiency of 3-phase inverters at varying outputs, so I'll just ignore that. :)
• I think we do know that the efficiency of the motor is fairly constant at the low end (say, 0-60mph in the Tesla: real world use).
• So that leaves the battery, and I think I have a model that explains why the battery causes the car to "waste" energy during heavy acceleration (or even deceleration?)
Batteries are often modeled as an ideal voltage source and an "internal resistance". This isn't perfect, but works pretty well. Here's an article discussing the internal resistance and other factors in 3 types of EV batteries: www.evs24.org/wevajournal/php/download.php?f=vol3/WEVJ3-5340444.pdf
In this model, the "waste energy" from the battery is I^2 * Ri, where I is the current, and Ri is the internal resistance of the battery. I think this waste energy essentially becomes heat. Here is some information about these Lithium 18650 batteries Tesla uses: http://laserpointerforums.com/f67/how-healthy-your-batteries-how-measure-internal-resistance-57576.html#!/exjun_. It says a "healthy battery" has an internal resistance of ~0.1ohm, and a depleted battery more like 0.250ohms. At 0.400ohms internal resistance, the battery is toast.
The power meter on the Model S goes from -60kW (regen, pedal up) to +320kW (pedal down), so let's run some numbers on how much "waste energy" is produced at 2 different accelerations. Let's accelerate at 40kW and 320kW. I'll do a lot of rounding here, because I don't know exact numbers--like how many cells there are, or how many are in use at the same time--and hope that it kind of averages out to the right values.
I believe there are in the range of 7500 lithium batteries in the 85kWh models. Let's assume they are all in use all the time. These cells deliver ~4V. So how much current is each cell delivering when they are delivering 40kW of power? P = I * V, or I = P/V:
40000W / 4V = 10000Amps
10000A / 7500 cells = 1.3Amps per cell
"Waste energy" per cell is I^2 * Ri = 1.3^2 * 0.1ohm == 0.169W
0.169W * 7500 cells = 1267W
1.267kW / (40+1.267)kW = 3% wasted energy (battery loss) at 40kW
I would say 3% "waste" is small enough that there may be other causes of loss that are much bigger, so it might even be "negligible". Now let's try it with the pedal floored: 320kW of useful power.
320000W / 4V = 80000Amps
80000A / 7500 cells = 10.7Amps per cell
"Waste energy" per cell is I^2 * Ri = 10.7^2 * 0.1ohm = 11.4W
11.4W * 7500 cells = 86kW
86kW / (320+86)kW = 21% wasted energy (battery loss) at 320kW
At the point that you are producing 86kW of heat, presumably you need to start spending some more energy to cool the batteries as well...
My conclusion is that heavy acceleration uses more energy. (A short script reproducing this arithmetic appears after the comments below.)
I think it's also likely that heavy regen is less efficient than light regen. This could explain another post I read that claimed using the "reduced regen" mode was giving better mileage.
The analysis of how the internal resistance changes as the battery degrades over time also has repercussions on how much power the cells can deliver as they age. If we go forward in time to the point when the batteries have an internal resistance of 0.2ohms (5 years? 8 years?), then that roughly doubles the waste energy (P = I^2 * Ri, and Ri doubled). So if the car could still output 320kW of useful energy, the waste energy would now be 172kW:
172kW / (320+172)kW == 35% wasted energy (battery loss)
But note it's still fairly efficient at low power:
2.5kW / (40+2.5)kW == 6%
Heavy power consumption comes at a price. As the batteries age, this cost increases. It seems likely that top acceleration will degrade as the batteries age.
Got Amped | April 9, 2013
Very interesting! The only thing I'd like to add is that for me, driving about 1000 miles per month, the difference in cost for using 300 Wh/mi and 360 Wh/mi is about $7 (using a spreadsheet posted by a forum member which includes Vampire losses). After 1900 miles I'm averaging 330 Wh/mi, and I like to "get it" at least a few times every day, and I always accelerate faster than my previous ICE. So it may take more energy to accelerate faster, but it's piddly more (at least at my $0.10/kW-hr), so my recommendation is "Enjoy what ya got!" Great post!
Leofingal | April 9, 2013
Good post, and it makes sense based on what I see (hard accel hurts efficiency). But on the other hand, hard accel sells Model S's! I think my neighbor is buying one after I let him drive it this past weekend.
CnJsSigP | April 10, 2013
Very valid point andy! I would've never gone thru all the trouble you just did to explain that detail. But yes, the closer you come to transferring no power, the more you lower parasitic losses. So my little game is to keep the needle as close to zero as possible. My only little gripe with the cruise control is that it seems a little aggressive in maintaining a speed. It could be smoother with lower spikes in power to keep the speed.
lolachampcar | April 10, 2013
someone needs to include those expensive tires in their cost analysis (when using the right foot)
smorgasbord | April 14, 2013
Are you saying:
1) that the battery puts out 320kW, but with 86kW of losses only 234kW makes it to the motor?
2) that the battery puts out 406kW, but with 86kW of losses only 320kW makes it to the motor?
Remember that Tesla specs the HP of the Perf model at 416HP or 310kW. To which do you think that refers?
Mark K | April 15, 2013
smorgasbord - it's the latter, #2. Motor power figures are quoted net to the motor. Because of internal battery impedance losses, the battery must use up more of its charge to actually yield the required kilowatts to the motor.
andycrews - your assumptions are good. There are more dimensions to the equation though, and there are nonlinearities. It is not only SOC and degradation that affect impedance, the instantaneous current demand has an effect as well. The battery modeled as an ideal voltage source in series with a linear resistance is a useful simplification, but it is not accurate under acute demand. The battery is a nonlinear animal.
In the future, both fast acceleration and regen will get more efficient with hybrids of supercaps and batteries. The supercap impedance is nearly zero, which makes it the ideal buffer for high rate draw or charge.
That combo is a ways off for cars, but supercaps are already in use on trains for regen. 10kWh of supercap, with 100+kWh of battery, will be a sweet combo.
Brian H | April 16, 2013
Slip a few kWh in to replace the dummy cells in a 60S, and you'd have a hot beast. They're lightweight, but bulky (volume), so you might get 5kWh in there.
Winnie796 | April 16, 2013
I always thought it was the motor inverters that determine what power is taken out of the batteries and not the batteries "putting out power". So I don't see how that will affect acceleration in the future except to say that as the battery degrades you won't get to accelerate as often on one charge but the acceleration itself will still be as intense. As for efficiency I would estimate 4% losses through the inverter, 3% losses through the motor and other heat losses (although variable) <3%. So, overall about 10% losses. That is bloody marvelous compared to ICE petrol heat losses which are >80% and ICE diesel heat losses which are >50%. Basically, nearly all the energy you pour into your ICE car goes out of the exhaust as heat. Regen is not included, which will only increase the efficiency of the electric car.
Pasadena-S | April 16, 2013
Something doesn't seem right here. All these cells are not in parallel, but in some combination of parallel and series, right? If the cells are 4v and the inverter takes in 400v (guessing), then there are 100 cells in series. That means that each series has 100 times the current shown in your analysis. But if that were the case, your I^2 calculation would blow up by 10,000. We're missing something.
larryh@WaveMetr... | April 16, 2013
Series and parallel are irrelevant to the calculation. Each cell runs at about 4 Volts so, given a specific power output, you can calculate the current. P = V*I
ghillair | April 16, 2013
Is it possible that under heavy acceleration some of the extra energy goes to stretching face muscles into that famous GRIN?
Mark K | April 16, 2013
Pasadena-S - yes, something is amiss in your statement. TM has not published its cell array architecture, but if indeed there are clusters of 100 cells in series, it would pencil out as follows:
At 1C, each cell can provide about 3 amps at 3.6V (loaded conditions). If you stack 100 in series, you get the sum of their voltages at that same current (see Kirchhoff's law). So at 1C discharge, each cluster would provide about 360V at 3A = 1.08 kW.
With 80 clusters in parallel comprising 8000 cells, you'd get about 80kW output, net, after internal impedance losses. (That is manifest in the 3.6V loaded number instead of the 4.2V open circuit value.)
You can't just compare the energy it takes to go from 0-60 in 6 seconds to the energy it takes to go 0-60 in 12 seconds. Obviously, the quicker car is going to be farther down the road at any given point in time, and will have a higher average speed at that point. Obviously, to go farther faster takes a lot more energy, regardless of the rate of acceleration. To make a fair comparison, you'd have to pick acceleration profiles that, at some point, put the cars at the same place and the same time, going the same speed. Will the faster-accelerating car have used more energy? Interesting question. If you graph speed vs. time, the area under the curve is distance traveled. You'd need to pick two acceleration profiles that, at some point in time, have equal area under the curves. Take an example: Car A accelerates at a steady rate of 10 mph/sec. It hits 60 mph in six seconds, then holds a steady 60 mph. Car B accelerates at 5 mph/sec, and takes 12 seconds to hit 60 mph. At this point, obviously, it is well behind Car A. But if car B continues to accelerate at 5 mph/sec to 90 mph, it will begin to catch up. If car B then begins decelerating at 5 mph/second it will slow to 60 mph just as it catches up to car A, which is also still going 60. Thus we have a fair apples-to-apples comparison, in which rate of acceleration is the only variable. So which car will have used less energy at this point? Beats me, With rockets in space, which are totally friction-free, the situation is a lot simpler. Final speed (and therefore energy) achieved is directly proportional to what rocket scientists call the impulse, which is just the thrust times the time the thrust is applied. A million pounds of thrust for one second, or one pound of thrust for a million seconds--the final speed and energy are the With the rolling resistance and aerodynamic drag of cars, it's a lot more complicated. cliff@hannelcon... | April 18, 2013 OK, let me try this: (1) resistive losses are proportional to the square of the current [ri^2] (2) torque in electrical motors is proportional to current (3) acceleration is proportional to torque (and thus current) (4) distance covered to achieve a certain speed is proportional to acceleration][.5*a*t^2) From (1)-(3), losses are proportional to the square of acceleration. So, when you double acceleration, you have 4x the losses but from (4) you've covered only 2x the distance, so you have 2x the losses per distance, thus reducing your efficiency. So, for example, if you have 10% electrical losses (90% efficiency) when doing 0-60 in 8.4 seconds, you will have 20% electrical losses (80% efficiency) when doing 0-60 in 4.2 seconds. The higher dynamic loads (and resulting increased friction, etc) probably don't count for much. So, I for one, will NOT be holding back!!! I don't have this in the spreadsheets yet, but you can see other useful calculations at http://EVTripPlanner.com/calcs.php
{"url":"http://www.teslamotors.com/en_HK/forum/forums/analyzing-car-efficiency-different-power-outputs-accelerations","timestamp":"2014-04-16T05:03:05Z","content_type":null,"content_length":"49853","record_id":"<urn:uuid:0881aaf9-e591-4122-9fa4-b9f0adba53dc>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Cryptographic Protocols

Roughly speaking, the purpose of a cryptographic protocol is to perform some task involving multiple people without letting anyone involved learn any privileged information, and, as far as possible, without being disrupted by people attempting to cheat. The prototypical cryptographic task is sending a secret message from Alice to Bob without a third party, Eve, learning what the message is. The standard classical methods for solving this problem rely on Eve having only limited computational power. For instance, RSA, a commonly used scheme, relies on Eve not being able to factor large numbers. But what if Eve has some secret device that enables her to do so? (For instance, factoring can be done efficiently on a quantum computer; or she might have some particularly clever classical algorithm.)

To remove this assumption about Eve's computational power, Alice and Bob need to share a secret key, about which Eve knows nothing at all. The key needs to be as long as the message to be sent. Alice encodes the message using the key, and sends it to Bob, who decodes it. Since Eve knows nothing about the key, she can learn nothing about the message. The key cannot be used again, however, or Eve might be able to guess some information about it by comparing the two messages. This protocol is called a one-time pad.

The one-time pad relies on Alice and Bob having a large secret key, or they will quickly run out. They could renew their key by having a meeting somewhere safe from Eve and exchanging codebooks with long lists of keys. However, frequently an actual physical meeting is impractical, and it would be desirable to be able to renew the key using only public communications. One possible method for doing so is quantum key distribution, which involves sending non-orthogonal quantum states from Alice to Bob.

There are many other applications of quantum mechanics to cryptography, which tend to come in three flavors:
• Quantum mechanics can be used to break classical cryptographic protocols (as with quantum factoring).
• Quantum states can make possible new or improved cryptographic protocols protecting classical information (as with quantum key distribution or uncloneable encryption).
• Cryptographic methods can be applied to protect quantum information instead of classical information. Examples would include quantum secret sharing schemes and quantum authentication protocols.

Even if we restrict ourselves to protecting classical information, there are many different types of cryptographic protocol. For instance, a digital signature scheme allows Alice to send a message to Bob in such a way that Bob can verify that the message is really from Alice and that it has not been altered at all. A zero-knowledge proof allows Alice to prove to Bob that she knows how to solve some problem without Bob learning anything about how Alice's solution works. A secure computation allows two or more people to compute some function based on all of their inputs, without any of them learning any more about the others' inputs than implied by the value of the function.

There are classical solutions to all of the above problems, but all rely on making some sort of assumption, about the computational power of a cheater, about the number of cheaters, or something of this kind. Based on quantum key distribution, one might hope that a quantum computer might allow us to weaken or remove these assumptions.
For instance, it is possible to make a quantum digital signature which is secure against all attacks allowed by quantum mechanics.

Many classical cryptographic protocols work by building up the protocol from simpler protocols. One particularly useful simple protocol is called bit commitment. In a bit commitment protocol, Alice chooses a bit (possibly at random), and sends some proof of her choice to Bob. However, due to the nature of the proof, Bob cannot figure out what Alice's bit is until she tells him, but once she does, Bob can easily verify that she is telling the truth. A simple example of bit commitment would be if Alice writes her choice on a piece of paper and puts it in a locked box, which she gives to Bob. Bob cannot open the box until Alice gives him the key, but Alice cannot change her choice once she has given the box to Bob.

Standard classical cryptographic protocols for bit commitment rely on Bob having limited computational power. For a while, it was thought quantum bit commitment protocols existed which were unconditionally secure. However, it turns out that if Alice and Bob have quantum computers, any protocol for which Bob cannot determine the value of Alice's bit allows Alice to safely change the bit without Bob finding out. This was a great disappointment, and later results proved that many other quantum cryptographic protocols were also impossible. However, there are still a number of possible protocols that have not been ruled out, including some of considerable interest. Quantum computation may allow us to perform some of these operations more safely than any classical protocol.

Updated: September 5, 2003
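The classical one-time pad itself is simple enough to sketch in a few lines. This is a minimal illustration of the scheme described above, assuming the key is truly random, at least as long as the message, and never reused:

    import secrets

    def otp(message: bytes, key: bytes) -> bytes:
        """XOR a message with a key; the same call encrypts and decrypts."""
        assert len(key) >= len(message), "key must cover the whole message"
        return bytes(m ^ k for m, k in zip(message, key))

    key = secrets.token_bytes(32)              # the shared secret key
    ciphertext = otp(b"meet me at dawn", key)  # Alice encodes
    assert otp(ciphertext, key) == b"meet me at dawn"  # Bob decodes

Everything hard about the protocol lives outside these lines: generating, distributing, and never reusing the key, which is exactly the problem quantum key distribution targets.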
{"url":"http://www.perimeterinstitute.ca/personal/dgottesman/crypto.html","timestamp":"2014-04-17T02:11:58Z","content_type":null,"content_length":"6632","record_id":"<urn:uuid:18db2f57-fdf7-41fb-b459-480b0b30abf0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
ST_Distance, the faster edition or Birgers Boost
When I was working on the new functions described in a previous post, I found that the distance calculation in general is very heavy and slow. The distance function takes two geometries and finds the shortest distance between them. The approach has been to calculate the distance between all possible combinations of vertex-vertex and vertex-edge between the two geometries. That means that two geometries with 1000 vertexes each cause one million iterations, and even if computers are fast, that takes some time.
The ideas for how to make it faster came to me around the time of the birth of my son. I guess you get some extra boost from something like that. I was home from work for 10 days to help my wife and son, and I did, I promise.
The idea was to find a way to not do this distance calculation between each and every vertex. I thought that at least the ones behind the middle of the geometry must be possible to avoid. I imagined something like a wall that I projected against the geometries, and then I could sort the vertexes as they appeared on the other side of the wall as I moved it through the geometry. I guess it maybe doesn't make sense, but I thought it was a little fun to describe how the idea appeared.
The resulting algorithm uses a line from the middle of the first geometry to the middle of the second geometry. Then it orders the vertexes along that line and calculates the distances in the order of how close they are along that line. The big difference from the old function is that the preparation here, giving each vertex a value along this line, only happens once per vertex. So in the example of 1000 vertexes per geometry it takes only 2000 calculations to get those values. Then, when the vertexes are ordered, we can do the distance calculations in the right order. And when the distance between those abstract walls that I imagined is bigger than the smallest distance found so far, then we know that the shortest distance is found.
How many distances we have to calculate before we know this will vary depending on how the geometries are related to each other. From the testing we have done, it seems like it in general gives quite a good increase in speed. For larger geometries it is between 10 and 100 times faster than the old algorithm. In some special cases it is not that fast, and in some cases it is even faster.
This way of doing it will not work if the geometries overlap. The easiest way to be sure they don't overlap is to check for overlapping bounding boxes. So, if there are overlapping bounding boxes, the calculation is sent to the old hard way of doing it. The same is the situation if one of the geometries is a point, because then there is no gain to get. Then it is done the same way as before.
This is a problem, but hopefully it will be solved. Paul Ramsey has come up with ideas that might make my way of doing it short-lived; see his blog:
He is mostly discussing his new geography functions, but probably it will be a good way of doing it for geometry too. So in PostGIS 2.0 the development will continue.
Those distance calculation enhancements might be quite important, because they make it possible to calculate directly with the geometries in nearest-neighbor calculations and things like that, instead of using the centroids. Using points will still be faster, but sometimes it may be useful to be able to run on the whole geometry, and before it was often more or less impossible because the calculations were too heavy.
This will be in PostGIS 1.5. A Beta release will hopefully be out soon.
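To make the sweep idea concrete before the release details below, here is a toy sketch in Python (not the PostGIS C source), reduced to vertex-vertex distances between two non-empty point sets; the real function also handles vertex-edge distances, which this skips. The projection of each vertex onto the axis between the two "middles" is the value along the line described above, and the gap between projections is a lower bound on the true distance, which is what lets the loop stop early:

    import bisect, math

    def min_distance_sweep(geom_a, geom_b):
        """Shortest vertex-vertex distance, pruned along the center axis."""
        mean = lambda pts, i: sum(p[i] for p in pts) / len(pts)
        ux = mean(geom_b, 0) - mean(geom_a, 0)
        uy = mean(geom_b, 1) - mean(geom_a, 1)
        n = math.hypot(ux, uy) or 1.0
        ux, uy = ux / n, uy / n          # unit vector between the "middles"

        # The cheap step: one projection per vertex, then a sort.
        pa = sorted((x * ux + y * uy, x, y) for x, y in geom_a)
        pb = sorted((x * ux + y * uy, x, y) for x, y in geom_b)
        keys_b = [s for s, _, _ in pb]

        best = float("inf")
        for sa, ax, ay in pa:
            # |sa - sb| never exceeds the true distance, so only vertexes
            # of B projecting within `best` of sa can possibly improve.
            lo = bisect.bisect_left(keys_b, sa - best)
            hi = bisect.bisect_right(keys_b, sa + best)
            for _, bx, by in pb[lo:hi]:
                best = min(best, math.hypot(ax - bx, ay - by))
        return best

As the post says, this only pays off when the geometries are well separated: with overlapping bounding boxes the axis bound prunes nothing, hence the fallback to the brute-force path.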
For Windows, there are experimental builds already available here:
And of course the source code is available to compile for other platforms.
I have written some lines in the wiki too, to describe this.
Tags: birgers boost, calculations, distance, faster, performance, slow, st_distance
{"url":"http://blog.jordogskog.no/2009/12/23/birgers-boost/","timestamp":"2014-04-19T02:27:06Z","content_type":null,"content_length":"12078","record_id":"<urn:uuid:cc674e07-57b3-4742-9ed9-7c5882d48f8a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Constantinos Daskalakis
Short Bio
CSAIL, EECS, MIT
X-Window Consortium Associate Professor, 2012 - present
X-Window Consortium Assistant Professor, 2011 - 2012
Assistant Professor, 2009 - 2011
MICROSOFT RESEARCH, NEW ENGLAND
Post-Doctoral Researcher, 2008-2009
UNIVERSITY OF CALIFORNIA, BERKELEY
Ph.D. in Computer Science, 2004-2008
2008 ACM Doctoral Dissertation Award
2007 Microsoft Research Fellowship
2004 UC Regents Fellowship
Advisor: Christos H. Papadimitriou
NATIONAL TECHNICAL UNIVERSITY of ATHENS (NTUA), GREECE
Diploma in Electrical and Computer Engineering, 1999-2004
Thesis: On the Existence of Pure Nash Equilibria in Graphical Games with succinct description
Advisor: Stathis Zachos
9.98/10.00 (Summa Cum Laude)
Research Interests
My research interests lie in the area of theoretical computer science, in particular algorithmic game theory, computational biology and applied probability. Here is a list of my publications.
Journal Articles
• Constantinos Daskalakis, Alan Deckelbaum and Anthony Kim: Near-Optimal No-Regret Algorithms for Zero-sum Games. To appear in Games and Economic Behavior. Special Issue for STOC/FOCS/SODA 2011. Invited. pdf
• Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium. ACM Transactions on Algorithms (TALG), 9(3): 23, 2013. Special Issue for SODA 2011. Invited. pdf
• Constantinos Daskalakis and Sebastien Roch: Alignment-Free Phylogenetic Reconstruction: Sample Complexity via a Branching Process Analysis. Annals of Applied Probability, 23(2): 693--721, 2013. arxiv
• Alexandr Andoni, Constantinos Daskalakis, Avinatan Hassidim and Sebastien Roch: Global Alignment of Molecular Sequences via Ancestral State Reconstruction. Stochastic Processes and their Applications, 122(12): 3852--3874, 2012. pdf
• Sanjeev Arora, Constantinos Daskalakis and David Steurer: Message-Passing Algorithms and Improved LP Decoding. IEEE Transactions on Information Theory, 58(12): 7260--7271, 2012.
• Constantinos Daskalakis, Richard M. Karp, Elchanan Mossel, Samantha Riesenfeld and Elad Verbin: Sorting and Selection in Posets. SIAM Journal on Computing, 40(3): 597-622, 2011. arxiv
• Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep. SIAM Journal on Discrete Mathematics, 25(2): 872-893, 2011. arxiv
• Constantinos Daskalakis, Alexandros G. Dimakis and Elchanan Mossel: Connectivity and Equilibrium in Random Games. Annals of Applied Probability, 21(3):987-1016, 2011. pdf
• Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Evolutionary Trees and the Ising Model on the Bethe Lattice: a Proof of Steel's Conjecture. Probability Theory and Related Fields, 149(1-2):149-189, 2011. arxiv
• Constantinos Daskalakis, Aranyak Mehta and Christos H. Papadimitriou: A Note on Approximate Nash Equilibria. Theoretical Computer Science, 410(17), 1581--1588, 2009. Special Issue for WINE 2006. Invited. pdf
• Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou: The Complexity of Computing a Nash Equilibrium. SIAM Journal on Computing, 39(1), 195--259, May 2009. Special issue for STOC 2006. Invited. pdf
• Constantinos Daskalakis, Alexandros G. Dimakis, Richard Karp and Martin Wainwright: Probabilistic Analysis of Linear Programming Decoding. IEEE Transactions on Information Theory, 54(8), 3565-3578, August 2008. arXiv
Expository Articles
• Constantinos Daskalakis: Nash equilibria: Complexity, symmetries, and approximation. Computer Science Review 3(2): 87--100, 2009.
pdf
• Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou: The complexity of computing a Nash equilibrium. Communications of the ACM 52(2):89--97, 2009. pdf
Recent Work: 2008 (i.e. post-Berkeley)-present
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: The Complexity of Optimal Mechanism Design. In the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2014. arxiv
• Constantinos Daskalakis, Anindya De, Ilias Diakonikolas, Ankur Moitra, and Rocco Servedio: A Polynomial-time Approximation Scheme for Fault-tolerant Distributed Storage. In the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2014. arxiv
• Constantinos Daskalakis, Ilias Diakonikolas, Ryan O'Donnell, Rocco Servedio and Li-Yang Tan: Learning Sums of Independent Integer Random Variables. In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. pdf
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Understanding Incentives: Mechanism Design becomes Algorithm Design. In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. arxiv
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Mechanism Design via Optimal Transport. In the 14th ACM Conference on Electronic Commerce, EC 2013. Best Paper and Best Student Paper Award.
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. pdf
• Pablo Azar, Constantinos Daskalakis, Silvio Micali and Matt Weinberg: Optimal and Efficient Parametric Auctions. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. pdf
• Constantinos Daskalakis, Ilias Diakonikolas, Rocco Servedio, Greg Valiant and Paul Valiant: Testing k-Modal Distributions: Optimal Algorithms via Reductions. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. arxiv
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Optimal Pricing is Hard. In the 8th Workshop on Internet & Network Economics, WINE 2012. pdf
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization. In the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2012. arxiv
• Constantinos Daskalakis and Matt Weinberg: Symmetries and Optimal Multi-Dimensional Mechanism Design. In the 13th ACM Conference on Electronic Commerce, EC 2012. ECCC Report. Best Student Paper Award.
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: An Algorithmic Characterization of Multi-Dimensional Mechanisms. In the 44th ACM Symposium on Theory of Computing, STOC 2012. ECCC Report.
• Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning Poisson Binomial Distributions. In the 44th ACM Symposium on Theory of Computing, STOC 2012. arXiv
• Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning k-Modal Distributions via Testing. In the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012. arXiv
• Constantinos Daskalakis and George Pierrakos: Simple, Optimal and Efficient Auctions. In the 7th Workshop on Internet & Network Economics, WINE 2011. pdf
• Yang Cai and Constantinos Daskalakis: Extreme-Value Theorems for Optimal Multidimensional Pricing. In the 52nd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2011.
arxiv Invited to Games and Economic Behavior (Special Issue for STOC/FOCS/SODA 2011.) • Constantinos Daskalakis, Alexandros G. Dimakis and Elchanan Mossel: Connectivity and Equilibrium in Random Games. Annals of Applied Probability, 21(3):987-1016, 2011. pdf • Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Evolutionary Trees and the Ising Model on the Bethe Lattice: a Proof of Steel's Conjecture. Probability Theory and Related Fields, 149(1-2):149-189, 2011. arxiv • Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. Transactions on Algorithms (TALG), to appear. Special Issue for SODA 2011. Invited. pdf • Constantinos Daskalakis, Alan Deckelbaum and Anthony Kim: Near-Optimal No-Regret Algorithms for Zero-sum Games. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf Invited to Games and Economic Behavior (Special Issue for STOC/FOCS/SODA 2011.) • Yang Cai and Constantinos Daskalakis: On Minmax Theorems for Multiplayer Games. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf • Constantinos Daskalakis and Christos Papadimitriou: Continuous Local Search. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf • C. Daskalakis, R. Frongillo, C. H. Papadimitriou, G. Pierrakos and G. Valiant: On Learning Algorithms for Nash Equilibria. In the 3rd International Symposium on Algorithmic Game Theory, SAGT 2010. pdf • Constantinos Daskalakis and Sebastien Roch: Alignment-Free Phylogenetic Reconstruction. In the 14th Annual International Conference on Research in Computational Molecular Biology, RECOMB 2010. pdf Also in Annals of Applied Probability, 23(2): 693--721, 2013. arxiv • Alexandr Andoni, Constantinos Daskalakis, Avinatan Hassidim and Sebastien Roch: Global Alignment of Molecular Sequences via Ancestral State Reconstruction. In the 1st Symposium on Innovations in Computer Science, ICS 2010. conference version Stochastic Processes and their Applications, 122(12): 3852--3874, 2012. journal version • Constantinos Daskalakis, Ilias Diakonikolas and Mihalis Yannakakis: How good is the Chord algorithm? In the 21st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010. pdf • Ilan Adler, Constantinos Daskalakis and Christos H. Papadimitriou: A Note on Strictly Competitive Games. In the 5th Workshop on Internet & Network Economics, WINE 2009. pdf • Constantinos Daskalakis and Christos H. Papadimitriou: On a Network Generalization of the Minmax Theorem. In the 36th International Colloquium on Automata, Languages and Programming, ICALP 2009. pdf • Constantinos Daskalakis and Christos H. Papadimitriou: On Oblivious PTAS's for Nash Equilibrium. In the 41st ACM Symposium On Theory of Computing, STOC 2009. arXiv • Sanjeev Arora, Constantinos Daskalakis and David Steurer: Message-Passing Algorithms and Improved LP Decoding. In the 41st ACM Symposium On Theory of Computing, STOC 2009. pdf. Also in IEEE Transactions on Information Theory, to appear. • Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep. In the 13th Annual International Conference on Research in Computational Molecular Biology, RECOMB 2009. arxiv Also in SIAM Journal on Discrete Mathematics, 25(2): 872-893, 2011. • Kamalika Chaudhuri, Constantinos Daskalakis, Robert Kleinberg and Henry Lin: Online Bipartite Perfect Matching With Augmentation. 
In the 28th Conference on Computer Communications, IEEE INFOCOM 2009. pdf
• Constantinos Daskalakis, Grant Schoenebeck, Gregory Valiant and Paul Valiant: On the Complexity of Nash Equilibria of Action-Graph Games. In the 20th Annual ACM-SIAM Symposium On Discrete Algorithms, SODA 2009. arxiv
• Constantinos Daskalakis, Richard M. Karp, Elchanan Mossel, Samantha Riesenfeld and Elad Verbin: Sorting and Ranking in Partially Ordered Sets. In the 20th Annual ACM-SIAM Symposium On Discrete Algorithms, SODA 2009. arxiv SIAM Journal on Computing, 40(3): 597-622, 2011.
• Constantinos Daskalakis: An Efficient PTAS for Two-Strategy Anonymous Games. In the 4th Workshop on Internet and Network Economics, WINE 2008. arxiv
• Constantinos Daskalakis and Christos H. Papadimitriou: Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games. In the 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008. arxiv
Complete List of Publications by Topic
Algorithmic Game Theory
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: The Complexity of Optimal Mechanism Design. In the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2014. arxiv
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Understanding Incentives: Mechanism Design becomes Algorithm Design. In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. arxiv
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Mechanism Design via Optimal Transport. In the 14th ACM Conference on Electronic Commerce, EC 2013. Best Paper and Best Student Paper Award.
• Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Optimal Pricing is Hard. In the 8th Workshop on Internet & Network Economics, WINE 2012. pdf
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Reducing Revenue to Welfare Maximization: Approximation Algorithms and other Generalizations. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. pdf
• Pablo Azar, Constantinos Daskalakis, Silvio Micali and Matt Weinberg: Optimal and Efficient Parametric Auctions. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. pdf
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: Optimal Multi-Dimensional Mechanism Design: Reducing Revenue to Welfare Maximization. In the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2012. arxiv
• Constantinos Daskalakis and Matt Weinberg: Symmetries and Optimal Multi-Dimensional Mechanism Design. In the 13th ACM Conference on Electronic Commerce, EC 2012. ECCC Report. Best Student Paper Award.
• Yang Cai, Constantinos Daskalakis and Matt Weinberg: An Algorithmic Characterization of Multi-Dimensional Mechanisms. In the 44th ACM Symposium on Theory of Computing, STOC 2012. ECCC Report, 2011.
• Constantinos Daskalakis and George Pierrakos: Simple, Optimal and Efficient Auctions. In the 7th Workshop on Internet & Network Economics, WINE 2011. pdf
• Yang Cai and Constantinos Daskalakis: Extreme-Value Theorems for Optimal Multidimensional Pricing. In the 52nd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2011. arxiv Invited to Games and Economic Behavior (Special Issue for STOC/FOCS/SODA 2011.)
• Constantinos Daskalakis, Alexandros G. Dimakis and Elchanan Mossel: Connectivity and Equilibrium in Random Games. Annals of Applied Probability, Volume 21(3):987-1016, 2011. [abstract] [bib] [arXiv]
• Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium.
In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf Journal version in Transactions on Algorithms (TALG), to appear. Special Issue for SODA 2011. Invited. pdf • Constantinos Daskalakis, Alan Deckelbaum and Anthony Kim: Near-Optimal No-Regret Algorithms for Zero-sum Games. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf Invited to Games and Economic Behavior (Special Issue for STOC/FOCS/SODA 2011.) • Yang Cai and Constantinos Daskalakis: On Minmax Theorems for Multiplayer Games. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf • Constantinos Daskalakis and Christos Papadimitriou: Continuous Local Search. In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011. pdf • C. Daskalakis, R. Frongillo, C. H. Papadimitriou, G. Pierrakos and G. Valiant: On Learning Algorithms for Nash Equilibria. In the 3rd International Symposium on Algorithmic Game Theory, SAGT 2010. pdf • Ilan Adler, Constantinos Daskalakis and Christos H. Papadimitriou: A Note on Strictly Competitive Games. In the 5th Workshop on Internet & Network Economics, WINE 2009. [abstract] [bib] [pdf] • Constantinos Daskalakis and Christos H. Papadimitriou: On a Network Generalization of the Minmax Theorem. In the 36th International Colloquium on Automata, Languages and Programming, ICALP 2009. [abstract] [bib] [pdf] • Constantinos Daskalakis and Christos H. Papadimitriou: On Oblivious PTAS's for Nash Equilibrium. In the 41st ACM Symposium On Theory of Computing, STOC 2009. [abstract] [bib] [arXiv] • Constantinos Daskalakis, Grant Schoenebeck, Gregory Valiant and Paul Valiant: On the Complexity of Nash Equilibria of Action-Graph Games. In the 20th Annual ACM-SIAM Symposium On Discrete Algorithms, SODA 2009. [abstract] [bib] [arXiv] • Constantinos Daskalakis: An Efficient PTAS for Two-Strategy Anonymous Games. In the 4th Workshop on Internet and Network Economics, WINE 2008. [abstract] [bib] [arXiv] • Constantinos Daskalakis and Christos H. Papadimitriou: Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games. In the 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008. [abstract] [bib] [arXiv] • Constantinos Daskalakis and Christos H. Papadimitriou, Computing Equilibria in Anonymous Games, In the 48th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2007. [abstract] [bib] [arXiv] • Constantinos Daskalakis, Aranyak Mehta and Christos H. Papadimitriou, Progress in Approximate Nash Equilibria, In the 8th ACM Conference on Electronic Commerce, EC 2007. [abstract] [bib] [pdf] • Constantinos Daskalakis, Aranyak Mehta and Christos H. Papadimitriou, A Note on Approximate Nash Equilibria, In the 2nd international Workshop on Internet & Network Economics, WINE 2006. [abstract] [bib] [pdf] Journal version in Theoretical Computer Science, 410(17), 1581--1588, 2009. (Invited, special issue for WINE 2006.) • Constantinos Daskalakis, Alex Fabrikant and Christos Papadimitriou, The Game World is Flat: The Complexity of Nash Equilibria in Succinct Games, In the 33rd International Colloquium on Automata, Languages and Programming, ICALP 2006. [abstract] [bib] [pdf] • Constantinos Daskalakis and Christos Papadimitriou, Computing Pure Nash Equilibria via Markov Random Fields, In the 7th ACM Conference on Electronic Commerce, EC 2006. Best Student Paper Award. [abstract] [bib] [pdf] • Constantinos Daskalakis and Christos H. Papadimitriou, Three-Player Games Are Hard, Manuscript 2005. 
[abstract] [bib] [ECCC Report 2005]
• Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou, The Complexity of Computing a Nash Equilibrium, In the 38th ACM Symposium on Theory of Computing, STOC 2006. [abstract] [bib] [pdf] Journal version in SIAM Journal on Computing, 39(1), 195--259, May 2009. (Invited, special issue for STOC 2006.) Expository article in Communications of the ACM 52(2):89--97, 2009. (Invited.)
• Constantinos Daskalakis and Christos Papadimitriou, The Complexity of Games on Highly Regular Graphs, In the 13th Annual European Symposium on Algorithms, ESA 2005. [abstract] [bib] [pdf]
• Constantinos Daskalakis, On the Existence of Pure Nash Equilibria in Graphical Games with succinct description, National Technical University of Athens, 2004 (In Greek). [abstract] [pdf]
Computational Biology
• Constantinos Daskalakis and Sebastien Roch: Alignment-Free Phylogenetic Reconstruction. In the 14th Annual International Conference on Research in Computational Molecular Biology, RECOMB 2010. conference version Journal version in Annals of Applied Probability, 23(2): 693--721, 2013. journal version
• Alexandr Andoni, Constantinos Daskalakis, Avinatan Hassidim and Sebastien Roch: Global Alignment of Molecular Sequences via Ancestral State Reconstruction. First Symposium on Innovations in Computer Science, ICS 2010. [abstract] [bib] [arxiv] Journal version in Stochastic Processes and their Applications, 122(12): 3852--3874, 2012. journal version
• Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep. In the 13th Annual International Conference on Research in Computational Molecular Biology, RECOMB 2009. [abstract] [bib] [arXiv] Journal version in SIAM Journal on Discrete Mathematics, 25(2): 872-893, 2011.
• Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Optimal Phylogenetic Reconstruction. In the 38th ACM Symposium on Theory of Computing, STOC 2006. [abstract] [bib] [arXiv] Journal version in Probability Theory and Related Fields, 149(1-2):149-189, 2011.
• C. Daskalakis, C. Hill, A. Jaffe, R. Mihaescu, E. Mossel and S. Rao: Maximal Accurate Forests From Distance Matrices. In the 10th Annual International Conference on Research in Computational Molecular Biology, RECOMB 2006. [abstract] [bib] [pdf]
Applied Probability
• Constantinos Daskalakis, Anindya De, Ilias Diakonikolas, Ankur Moitra, and Rocco Servedio: A Polynomial-time Approximation Scheme for Fault-tolerant Distributed Storage. In the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2014. arxiv
• Constantinos Daskalakis, Ilias Diakonikolas, Ryan O'Donnell, Rocco Servedio and Li-Yang Tan: Learning Sums of Independent Integer Random Variables. In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. pdf
• Constantinos Daskalakis, Ilias Diakonikolas, Rocco Servedio, Greg Valiant and Paul Valiant: Testing k-Modal Distributions: Optimal Algorithms via Reductions. In the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SODA 2013. arxiv
• Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning Poisson Binomial Distributions. In the 44th ACM Symposium on Theory of Computing, STOC 2012. arXiv
• Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning k-Modal Distributions via Testing. In the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012.
arXiv • Sanjeev Arora, Constantinos Daskalakis and David Steurer: Message-Passing Algorithms and Improved LP Decoding. In the 41st ACM Symposium On Theory of Computing, STOC 2009. [abstract] [bib] [pdf] Journal version in IEEE Transactions on Information Theory, 58(12): 7260--7271, 2012. • Constantinos Daskalakis, Alexandros G. Dimakis, Richard Karp and Martin Wainwright: Probabilistic Analysis of Linear Programming Decoding. In the 18th Annual ACM-SIAM Symposium On Discrete Algorithms, SODA 2007. [abstract] [bib] [arXiv] Journal version in IEEE Transactions on Information Theory, 54(8), 3565-3578, August 2008. arxiv • Christian Borgs, Jennifer T. Chayes, Constantinos Daskalakis and Sebastien Roch: An Analysis of Preferential Attachment with Fitness. In the 39th ACM Symposium on Theory of Computing, STOC 2007. [abstract] [bib] [pdf] Other Topics • Constantinos Daskalakis, Ilias Diakonikolas and Mihalis Yannakakis: How good is the Chord algorithm?. In the 21st Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010. pdf • Kamalika Chaudhuri, Constantinos Daskalakis, Robert Kleinberg and Henry Lin: Online Bipartite Perfect Matching With Augmentation. In the 28th Conference on Computer Communications, IEEE INFOCOM 2009. [abstract] [bib] [pdf] • Constantinos Daskalakis, Richard M. Karp, Elchanan Mossel, Samantha Riesenfeld and Elad Verbin: Sorting and Ranking in Partially Ordered Sets. In the 20th Annual ACM-SIAM Symposium On Discrete Algorithms, SODA 2009. [abstract] [bib] [arxiv] Journal version in SIAM Journal on Computing, 40(3): 597-622, 2011. • Stergios Stergiou, Constantinos Daskalakis and George K. Papakonstantinou: Fast and Efficient Heuristic ESOP Minimization Algorithm. IEEE Great Lakes Symposium on VLSI (GLSVLSI 2004). [abstract] [bib] [pdf]
{"url":"http://people.csail.mit.edu/costis/academic.html","timestamp":"2014-04-17T03:54:02Z","content_type":null,"content_length":"83768","record_id":"<urn:uuid:b2b17e98-187e-48c1-a056-85bfd63130a1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Master Algebra for Android
The Master Algebra full version is ad free & contains 340 questions. Master Algebra has different sections like Tutorial, Practice Skills, Practice Timed Test & Algebra Challenge. It has the following tutorials:
Types of Numbers, like real numbers, integers, negative numbers, complex numbers
Addition, Subtraction, Multiplication & Division of Real Numbers
Addition, Subtraction, Multiplication & Division of Negative Numbers
Addition, Subtraction, Multiplication & Division of Complex Numbers
Properties of Numbers
Ratio & Proportion
Exponents & Radicals
Integer exponents, real & rational exponents
What is a Monomial/Binomial/Polynomial?
Addition, Subtraction, Multiplication, Division of polynomials
Factoring polynomials
Linear Equations
What is a variable, an expression & an equation?
Solving linear equations with one variable
Solving linear equations with two variables
Solving linear equations with three variables
Quadratic Equations
Solving quadratic equations with one variable
Solving quadratic equations with two variables
Solving quadratic equations with three variables
Equations with radicals, Absolute value equations
Rational Expressions; Addition, Subtraction, Multiplication, Division of rational expressions
What is an Inequality?
Linear, Polynomial & Rational Inequalities
Absolute value inequalities
With Practice Skills, you will be able to practice all of the skills learned above, with help. There are answers & steps to reach the answer for each question. It contains 160 multiple choice questions covering Numbers, Ratio & Proportion, Exponents & Radicals, Polynomials, Linear Equations, Quadratic Equations, Rational Expressions & Inequalities.
With Practice Timed Test, you will be able to practice all of the skills learned & practiced above in a timed environment. There is a timer, and you need to finish within that time. It contains 80 multiple choice questions.
With Algebra Challenge, you will be prepared to compete with others. It contains 100 algebra challenge questions covering all of the above topics, & the time allotted is 1 hour. Master Algebra will also display each month's champions/winners of this challenge, selected from around the world, on our website imathpractice.com.
Comprehensive reports & a progress chart. Reports are saved in a database so you or your parent can check anytime. They can be emailed too.
There is a lite version to try out too.
{"url":"http://www.appszoom.com/android_applications/education/master-algebra_cffjj.html?nav=related","timestamp":"2014-04-21T16:55:54Z","content_type":null,"content_length":"43127","record_id":"<urn:uuid:4f05bb0b-8321-47ab-b466-41c4246aaeca>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
The use of metasystem transition in theorem proving and program optimization
Results 1 - 10 of 14
- Final Report of the NSF Workshop on Scientific Database Management. SIGMOD RECORD, 1991 "... This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped... ..."
- Proceedings of ILPS'95, the International Logic Programming Symposium, 1995 "... This paper presents a termination technique for positive supercompilation, based on notions from term algebra. The technique is not particularly biased towards positive supercompilation, but also works for deforestation and partial evaluation. It appears to be well suited for partial deduction too. ..." Cited by 74 (2 self) Add to MetaCart This paper presents a termination technique for positive supercompilation, based on notions from term algebra. The technique is not particularly biased towards positive supercompilation, but also works for deforestation and partial evaluation. It appears to be well suited for partial deduction too. The technique guarantees termination, yet it is not overly conservative. Our technique can be viewed as an instance of Martens' and Gallagher's recent framework for global termination of partial deduction, but it is more general in some important respects, e.g. it uses well-quasi orderings rather than well-founded orderings. Its merits are illustrated on several examples.
- , 1996 "... This paper gives a gentle introduction to Turchin's supercompilation and its applications in metacomputation with an emphasis on recent developments. First, a complete supercompiler, including positive driving and generalization, is defined for a functional language and illustrated with examples. Th ..." Cited by 35 (4 self) Add to MetaCart This paper gives a gentle introduction to Turchin's supercompilation and its applications in metacomputation with an emphasis on recent developments. First, a complete supercompiler, including positive driving and generalization, is defined for a functional language and illustrated with examples. Then a taxonomy of related transformers is given and compared to the supercompiler. Finally, we put supercompilation into the larger perspective of metacomputation and consider three metacomputation tasks: specialization, composition, and inversion.
- Static Analysis, volume 864 of Lecture Notes in Computer Science, 1994 "... Our aim is to study how the interpretive approach --- inserting an interpreter between a source program and a program specializer --- can be used to improve the transformation of programs and to automatically generate program transformers by self-application of a program specializer. We show ..." Cited by 26 (7 self) Add to MetaCart Our aim is to study how the interpretive approach --- inserting an interpreter between a source program and a program specializer --- can be used to improve the transformation of programs and to automatically generate program transformers by self-application of a program specializer. We show that a few semantics-preserving transformations applied to a straightforward interpretive definition of a first-order, call-by-name language are sufficient to generate Wadler's deforestation algorithm and a version of Turchin's supercompiler using a partial evaluator. The transformation is guided by the need to binding-time improve the interpreters.
1 Introduction Our aim is to study the interpretive approach to improve the transformation of source programs and to automatically generate stand-alone transformers [Tur93, GJ94]. The essence of the interpretive approach is to insert an interpreter between a source program and a generic program specializer. As defined by the specializer ...
- , 1996 "... Turchin's supercompiler is a program transformer that includes both partial evaluation and deforestation. Although known in the West since 1979, the essence of its techniques, its more precise relations to other transformers, and the properties of the programs that it produces are only now becoming ..." Cited by 15 (0 self) Add to MetaCart Turchin's supercompiler is a program transformer that includes both partial evaluation and deforestation. Although known in the West since 1979, the essence of its techniques, its more precise relations to other transformers, and the properties of the programs that it produces are only now becoming apparent in the Western functional programming community. This thesis gives a new formulation of the supercompiler in familiar terms; we study the essence of it, how it achieves its effects, and its relations to related transformers; and we develop results dealing with the problems of preserving semantics, assessing the efficiency of transformed programs, and ensuring termination.
- IN THE ESSENCE OF COMPUTATION: COMPLEXITY, ANALYSIS, TRANSFORMATION, 2002 "... We survey fundamental concepts in inverse programming and present the Universal Resolving Algorithm (URA), an algorithm for inverse computation in a first-order, functional programming language. We discuss the principles behind the algorithm, including a three-step approach based on the notion of a p ..." Cited by 13 (2 self) Add to MetaCart We survey fundamental concepts in inverse programming and present the Universal Resolving Algorithm (URA), an algorithm for inverse computation in a first-order, functional programming language. We discuss the principles behind the algorithm, including a three-step approach based on the notion of a perfect process tree, and demonstrate our implementation with several examples. We explain the idea of a semantics modifier for inverse computation which allows us to perform inverse computation in other programming languages via interpreters.
- In Herbert Kuchen and Doaitse Swierstra, editors, International Symposium on Programming Languages, Implementations, Logics and Programs (PLILP '96), 1997 "... Memoization is a key ingredient in every partial evaluator. It enables folding by caching previously specialized functions. It is essential to make polyvariant specialization terminate. Its implementation is reasonably straightforward in a standard specializer that represents functions by closures ..." Cited by 5 (4 self) Add to MetaCart Memoization is a key ingredient in every partial evaluator. It enables folding by caching previously specialized functions. It is essential to make polyvariant specialization terminate. Its implementation is reasonably straightforward in a standard specializer that represents functions by closures. With the advent of handwritten program-generator generators (PGGs), implementing memoization gets harder, because PGGs use efficient standard representations of data at specialization time. We present several implementations of memoization for PGGs that are able to deal with all features of current partial evaluators, specifically partially static data and functions.
The first implementation is based on message passing. It is simple, portable, and efficient, but only suitable for untyped higher-order languages such as Scheme. The second implementation is geared towards typed languages such as SML. Whereas the first two implementations are completely portable, our third implementation exploit...
- ACM Computing Surveys: Special Issue on Partial Evaluation, 1998 "... this paper we will essentially refer to these techniques as they have been developed in the fields of functional and logic programming. ..."
- SECOND INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA (META 2010), 2010 "... It has been long recognised that partial evaluation is related to proof normalisation. Normalisation by evaluation, which has been presented for theories with simple types, has made this correspondence formal. Recently Andreas Abel formalised an algorithm for normalisation by evaluation for System F ..." Cited by 3 (0 self) Add to MetaCart It has been long recognised that partial evaluation is related to proof normalisation. Normalisation by evaluation, which has been presented for theories with simple types, has made this correspondence formal. Recently Andreas Abel formalised an algorithm for normalisation by evaluation for System F. This is an important step towards the use of such techniques on practical functional programming languages such as Haskell, which can reasonably be embedded in relatives of System Fω. Supercompilation is a program transformation technique which performs a super-set of the simplifications performed by partial evaluation. The focus of this paper is to formalise the relationship between supercompilation and normalisation by evaluation for System F with recursive types and terms.
- FIRST INTERNATIONAL WORKSHOP ON METACOMPUTATION IN RUSSIA (META 2008), 2008 "... It has previously been shown by Turchin in the context of supercompilation how metasystem transitions can be used in the proof of universally and existentially quantified conjectures. Positive supercompilation is a variant of Turchin's supercompilation which was introduced in an attempt to study and ..." Cited by 2 (0 self) Add to MetaCart It has previously been shown by Turchin in the context of supercompilation how metasystem transitions can be used in the proof of universally and existentially quantified conjectures. Positive supercompilation is a variant of Turchin's supercompilation which was introduced in an attempt to study and explain the essentials of Turchin's supercompiler. In our own previous work, we have proposed a program transformation algorithm called distillation, which is more powerful than positive supercompilation, and have shown how this can be used to prove a wider range of universally and existentially quantified conjectures in our theorem prover Poitín. In this paper we show how a wide range of programs can be constructed fully automatically from first-order specifications through the use of metasystem transitions, and we prove that the constructed programs are totally correct with respect to their specifications. To our knowledge, this is the first technique which has been developed for the automatic construction of programs from their specifications using metasystem transitions.
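For orientation, the core specialization idea running through these papers can be shown with a toy example (my own illustration, not code from any cited work): fixing one input of a program and producing a simpler residual program for the remaining input.

    def make_power(n: int):
        """Specialize pow(x, n) for a known n, unrolling the recursion.

        The returned closure is the 'residual program': for n = 3 it
        behaves like lambda x: x * x * x * 1, with the recursion on n
        evaluated away at specialization time.
        """
        if n == 0:
            return lambda x: 1
        inner = make_power(n - 1)
        return lambda x: x * inner(x)

    cube = make_power(3)
    assert cube(5) == 125

Supercompilation, deforestation, and partial deduction generalize this move in different directions, which is what the termination techniques surveyed above have to keep in check.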
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=824224","timestamp":"2014-04-23T10:25:00Z","content_type":null,"content_length":"37980","record_id":"<urn:uuid:951407da-1449-4af9-893f-6b24eb683d77>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Windsor, CO SAT Math Tutor Find a Windsor, CO SAT Math Tutor ...I have tutored in a number of settings since I was in high school. After graduating from the Colorado School of Mines where I majored in Electrical Engineering and minored in Economics, I worked as a computer programmer for IBM for seven years. During this time, I spent weekends volunteering my time tutoring youths at my church. 18 Subjects: including SAT math, English, reading, writing ...At that point, I went back to school and became a Registered Nurse. After working in healthcare for a while, I have decided to split my time between helping the body and helping the mind. I love math and science and was at the top of my class in many courses. 13 Subjects: including SAT math, geometry, statistics, differential equations ...On top of that, I also worked for a national test prep agency where I was certified to teach: GRE, GMAT, ACT, SAT, and the Biology section of the MCAT. I have over 100 hours of training with this agency. The training taught me study skills for all types of subjects, tests, and learning styles. 26 Subjects: including SAT math, reading, geometry, biology ...In my current job, C++ and its interaction with other languages via APIs (Tcl, Perl, MATLAB, etc.) is an essential skill. Using the STL as building blocks to producing complex software is one of the many skills, but also understanding how to use the language (public versus private variable/funct... 47 Subjects: including SAT math, chemistry, physics, calculus ...In junior high, I ran for student body. This began my public-speaking "career." In high school I was part of the debate team, as well as mock trial. My undergraduate and master's program were both discussion-based, which required strong public speaking skills. 33 Subjects: including SAT math, reading, English, writing
{"url":"http://www.purplemath.com/windsor_co_sat_math_tutors.php","timestamp":"2014-04-17T16:05:12Z","content_type":null,"content_length":"23978","record_id":"<urn:uuid:8019b7ea-3dab-4d74-8aef-3b49a53e5f34>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Newport, DE Precalculus Tutor
Find a Newport, DE Precalculus Tutor
...I taught Algebra 2 with a national tutoring chain for five years. I have taught Algebra 2 as a private tutor since 2001. I completed math classes at the university level through advanced
12 Subjects: including precalculus, calculus, writing, geometry
...I have a PhD from Carnegie-Mellon University in electrical engineering, control systems. Over my career in industry, I have programmed using more than thirty different computer languages and led numerous software development projects. I am certified in Delaware to teach computer science.
39 Subjects: including precalculus, chemistry, physics, calculus
...I find knowing why the math is important goes a long way towards helping students retain information. After all, math IS fun! In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA.
13 Subjects: including precalculus, calculus, algebra 1, geometry
I have taught middle school and high school mathematics in northern Virginia for 8 years. I have tutored privately most of that time as well. I know that everyone learns in a different way and I try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling.
28 Subjects: including precalculus, calculus, statistics, geometry
...There is an awesome reward when watching a struggling student as he begins to understand what he needs to do and how everything fits together. I previously taught Algebra I, II, III, Geometry, Trigonometry, Precalculus, Calculus, Intro to Statistics, and SAT review in a public school. I have tu...
12 Subjects: including precalculus, calculus, statistics, geometry
Related Newport, DE Tutors Newport, DE Accounting Tutors Newport, DE ACT Tutors Newport, DE Algebra Tutors Newport, DE Algebra 2 Tutors Newport, DE Calculus Tutors Newport, DE Geometry Tutors Newport, DE Math Tutors Newport, DE Prealgebra Tutors Newport, DE Precalculus Tutors Newport, DE SAT Tutors Newport, DE SAT Math Tutors Newport, DE Science Tutors Newport, DE Statistics Tutors Newport, DE Trigonometry Tutors
{"url":"http://www.purplemath.com/Newport_DE_precalculus_tutors.php","timestamp":"2014-04-21T14:50:17Z","content_type":null,"content_length":"24112","record_id":"<urn:uuid:5b2af5cb-b75d-43b2-b879-492b56adfb38>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Irreversible process
A reversible process does not mean that the system's entropy is not changing, but that the entropy of the universe is not increasing. So the process can proceed in such a way that the system's entropy does not change while the entropy of the system's environment (the rest of the universe) increases; such a process would be irreversible.
Desirable? Something is desirable if it helps in pursuing a target. If you are looking for a big blast, for sure it won't be reversible. If you are looking for a rechargeable battery, you would like to design an electrochemical process that proceeds nearly reversibly.
For the third point, I would say that for irreversible processes this equation does not apply. There is always some loss of usable energy when the process is not reversible: that loss shows up as generated entropy.
{"url":"http://www.physicsforums.com/showthread.php?t=187874","timestamp":"2014-04-17T03:53:36Z","content_type":null,"content_length":"34682","record_id":"<urn:uuid:f5664b33-140f-453b-acac-26eae943f069>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
The SAT's recursive meaning
If it weren't for the god awful Analysis comp I'll be taking on Saturday (and should be studying for now), I would certainly be going to EDcampNYC, the local unconference for teachers and the like. Anna and Justin will be going without me and leading a session or two about mathematical art stuff. If you go, you should definitely check out their session(s). Especially if you're into making beautiful stuff. (Speaking of mathematical art, I have about 2400 words written about MArTH Madness, the huge math art event we hosted at Saint Ann's. Must add pics. Coming soon.)
Anyhow, let me tell you about Mai Li. She's an amazing young mathematician in our mathematical art seminar. She's also opted out of Trig/Analysis for her junior year in favor of Intro Topology and Modern Algebra electives with Anna and a semester course called Fractals and Chaos. She helped lead the doodling sessions at MArTH Madness, and she just rocks in general. Well, as much as she would love to come help them share MArTH with the people at EDcampNYC, she can't. She has to take the SAT's. No big deal. Whatever. She has to take them. It's fine, but too bad.
All of this is just the setup for some clever little thing that Justin said in our office the other day.
* * *
Q: You know what SAT stands for, right?
A: The SAT Aptitude Test.
HA! Love it. It used to stand for "Scholastic Aptitude Test," but I think they've abandoned that. (I wonder why?) I looked over some videos and stuff on the college board site, but I couldn't find my answer. I did find this, however: "the combination of high school grades and SAT scores is the best predictor of your academic success in college." hmm. Do you buy that?
The problem that I have with the SAT, in particular the math section, is that it cannot test for the applicant's ability to do math. If you disagree, it's simply that we have different notions of what it means to "do math." I'm not gonna get too deep into this, because I'm mostly writing just to share Justin's little meme, which might otherwise be lost forever.
"SAT Aptitude Test" is a fitting acronym, because the SAT is only really testing your ability to succeed on the SAT itself. Infer what you will. When students need to prepare, they don't get a math tutor. They get an SAT math tutor. Could the inauthenticity be any more obvious?
The biggest criticism of my department (accurate or not) is that we don't adequately prepare students for this test. Perhaps that's only further evidence of my point, since our primary objective is doing real mathematics with students as often as we possibly can. [disclaimer: I am not a spokesman for the school.]
What do the SAT's demand for success? Technical training, perhaps, which is only one aspect of a mathematical education. Maybe most of all, SAT success requires SAT experience. Want to prove that you're SAT apt? Why not practice with the SAT aptitude test? Get it?
* * *
OK, that's it. This is just something I've been thinking over and really enjoying. Thank you, Justin, for your unending nerdly wit. Oh, by the way, we can keep expanding SAT and get the following, all of which are the same thing:
The SAT
The SAT Aptitude Test
The (SAT Aptitude Test) Aptitude Test
The [(SAT Aptitude Test) Aptitude Test] Aptitude Test… and on and on.
* * *
Good luck students, and try to remember it's a collection of paper and ink. Don't let it shake you too hard.
6 responses to "The SAT's recursive meaning"
1. Ha, fantastic! That is quite witty indeed.
Another thing the SAT is an indicator for is the wealth of the test taker's family. Here is the TED talk where I first heard about it (it's a great one btw): http://www.youtube.com/watch?feature=player_embedded&v=xyowJZxrtbg And here is the chart with data Daniel Pink compiled showing it:
2. The combination of SAT scores and high school GPA is still the best predictor of college grades that admissions officers have found. Various tweaks are often added to admissions formulas to get a more balanced class, but adding various additional components doesn't improve the regression fits significantly. (Sorry, I don't have a citation handy -- I think this was from an internal document I saw 5-10 years ago.) SAT scores are certainly highly correlated with SES (socio-economic status), but SAT and GPA are a better predictor of college success than SES is. Note: GPA alone used to be a better predictor than SAT alone, but I believe that grade inflation is slowly robbing GPA of its predictive value (when over half the class gets As, then getting As doesn't mean much any more).
□ I certainly understand the statistics that are being pointed to, but what about my students? They don't get grades. They're insufficiently prepared for the SATs, or so I hear. What does that say about them? I guess I'm just not that interested in statistical correlation between grades here and grades there.
☆ If your students don't get grades, then their SAT scores are about the only data the admissions officers have to determine whether they are suitable for admission to their colleges. If the students' SAT scores are weak, then only very unselective colleges are likely to accept them. The unselective colleges tend to provide a lower level of education than the selective ones, so may not be the best fit for your top students. If you have students who are good at doing math, I would recommend that they have other evidence besides just the SAT test to show that, since the SAT test really only goes through about 10th grade math (I know a student who got over 700 on the SAT at the end of 6th grade). SAT II math 2 is a slightly higher level, and AP Calculus higher still. These tests do a somewhat better job of showing whether students can do math. Even better are the AMC-10 and AMC-12 tests, though college admissions officers are less likely to know about them.
☆ I think you're either goading me on, or you thought I was asking how they could legitimize their learning with scores. Despite how admissions may operate, I don't believe scores like this legitimize learning. Our students can definitely substantiate their ability in nonstandard ways, by doing remarkable work and sharing it. Looking to the AMC, SAT, ACT, NYML, AIME, SATII, or AP tests for validation is a mistake if what you care about is learning and doing mathematics. My primary critique of these contests is that they do not evaluate a student's ability to do math. Primarily because authentic mathematical experiences are not regularly found in high-pressure, timed environments with strict restraints to ensure intellectual isolation. As far as college admissions go, many of our students do attend "good colleges," including Ivy League schools and renowned liberal arts colleges, etc., despite having never received grades, and perhaps even in the face of low SAT scores. We don't provide much data at all for admissions, certainly no scores or ways to rank or sort our kids, but the pages and pages of anecdotal reports can provide a really rich and detailed sense of the student.
Is that not better? Thanks for your comments as always. You give me the best push back, and I really appreciate it. 3. I agree with you that grades and exam scores can be quite meaningless, and that none of the exams really get at math ability. My point was just that a lot of colleges don’t know how else to judge math ability—the admissions officers have too little math to be able to judge a portfolio. What do you think? This entry was posted in Recursion and tagged scholastic aptitude test, School. Bookmark the permalink.
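A coda for the recursion fans: the acronym expansion in the post is easy to generate mechanically. A tiny sketch, using parentheses throughout instead of alternating bracket styles:

    def expand(n: int) -> str:
        # Each pass wraps the previous expansion in "... Aptitude Test".
        s = "SAT"
        for _ in range(n):
            s = f"({s}) Aptitude Test" if " " in s else f"{s} Aptitude Test"
        return s

    for n in range(4):
        print("The", expand(n))
    # The SAT
    # The SAT Aptitude Test
    # The (SAT Aptitude Test) Aptitude Test
    # The ((SAT Aptitude Test) Aptitude Test) Aptitude Test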
{"url":"http://lostinrecursion.wordpress.com/2012/05/02/the-sats-recursive-meaning/","timestamp":"2014-04-21T01:59:13Z","content_type":null,"content_length":"77247","record_id":"<urn:uuid:cb582000-b4e4-429b-a83c-d94eb3ec1698>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00487-ip-10-147-4-33.ec2.internal.warc.gz"}