Geometry (10th) 144 terms · Geometry vocabulary and postulates

Absolute Value: the distance from 0 to a number, n, on a number line.
Acute Angle: an angle whose degree measure is greater than 0 and less than 90.
Acute Triangle: a triangle that has three acute angles.
Additive Identity: a + 0 = a and 0 + a = a.
Angle: a set of points that is the union of two rays having the same endpoint.
Associative Property of Addition: the sum of a set of numbers is the same regardless of how the numbers are grouped: (a + b) + c = a + (b + c).
Associative Property of Multiplication: the product of a set of numbers is the same regardless of how the numbers are grouped: (ab)c = a(bc).
Base Angles: the angles whose vertices are the endpoints of the base of the triangle.
Base: the third side of an isosceles triangle.
Betweenness: given two numbers, another number is said to be between those two numbers if it is greater than the first but less than the second.
Bisector of an Angle: a ray whose endpoint is the vertex of the angle and that divides the angle into two congruent angles.
Bisector of a Line Segment: any line, or subset of a line, that intersects the segment at its midpoint.
Closure Property of Addition: the sum of two real numbers is a real number.
Closure Property of Multiplication: the product of two real numbers is a real number.
Collinear Set of Points: a set of points all of which lie on the same straight line.
Commutative Property of Addition: a + b = b + a.
Commutative Property of Multiplication: ab = ba.
Congruent Angles: angles that have the same measure.
Coordinates: an ordered pair of numbers that identifies a point on a coordinate plane, written as (x, y).
Definition: a statement of the meaning of a term.
Degree: the unit of measurement for angles.
Distance from a Point to a Line: the length of the perpendicular from the point to the line.
Distributive Property: the product of a number and the sum or difference of two numbers is the same as the sum or difference of their products: a(b + c) = ab + ac.
Equiangular Triangle: a triangle that has three congruent angles.
Equilateral Triangle: a triangle that has three congruent sides.
Exterior of an Angle: the region outside the two rays of the angle.
Geometry: a branch of mathematics that defines and relates the basic properties and measurement of line segments and angles.
Half-line/Ray: a part of a line that consists of a point on the line, called an endpoint, and all the points on one side of the endpoint.
Hypotenuse: the side of a right triangle that is opposite the right angle.
Interior of an Angle: the region between the two rays.
Isosceles Triangle: a triangle that has two congruent sides.
Legs: the two congruent sides of an isosceles triangle, or the two sides that form a right angle.
Line Segment: a set of points consisting of two points on a line, called endpoints, and all the points on the line between the endpoints.
Line: an infinite set of points extending in opposite directions to form a straight path; it has only one dimension, length.
Midpoint: a point of a line segment that divides the segment into two congruent segments.
Multiplication Property of Zero: ab = 0 if and only if a = 0 or b = 0.
Multiplicative Identity: a · 1 = a and 1 · a = a.
Multiplicative Inverse: a · (1/a) = 1, for a ≠ 0.
Non-collinear Set of Points: a set of three or more points that do not all lie on the same straight line.
Number Line: a line on which each point represents a real number.
Obtuse Angle: an angle whose degree measure is greater than 90 and less than 180.
Opposite Rays: two rays of the same line with a common endpoint and no other points in common.
Perpendicular Lines: two lines that intersect to form right angles.
Plane: a set of points that forms a flat surface extending indefinitely in all directions.
Point: an exact location in space; a point has no dimension.
Polygon: a closed figure in a plane that is the union of line segments such that the segments intersect only at their endpoints and no segments sharing a common endpoint are collinear.
Right Angle: an angle whose degree measure is 90.
Right Triangle: a triangle that has a right angle.
Scalene Triangle: a triangle that has no congruent sides.
Set: a collection of objects such that it is possible to determine whether a given object belongs to the collection or not.
Straight Angle: an angle that is the union of opposite rays and whose degree measure is 180.
Triangle: a polygon that has exactly three sides.
Undefined Terms: terms whose meaning is accepted without definition.
Vertex Angle: the angle opposite the base in an isosceles triangle.
Vertex: the common endpoint of two sides of a polygon; the common endpoint of two rays that form an angle; the common point where two or more edges of a 3-D solid meet.
Addition Postulate: if equal quantities are added to equal quantities, the sums are equal.
Axioms: statements that we accept without proof.
Conjecture: a statement that is likely to be true but has not yet been proved true by a deductive proof.
Counterexample: an example that refutes or disproves a hypothesis, proposition, or theorem.
Deductive Reasoning: uses the laws of logic to combine definitions and general statements that we know to be true to reach a valid conclusion.
Direct Proof: a proof that starts with the given statements and uses the laws of logic to arrive at the statement to be proved.
Division Postulate: if equals are divided by nonzero equals, the quotients are equal.
Equivalence Relation: a relation for which the reflexive, symmetric, and transitive postulates are true.
Generalization: a proposition asserting something to be true either of all members of a certain class or of an indefinite part of that class.
Indirect Proof/Proof by Contradiction: a proof that starts with the negation of the statement to be proved and uses the laws of logic to show that it is false.
Inductive Reasoning: the method of reasoning in which a series of particular examples leads to a conclusion.
Multiplication Postulate: if equals are multiplied by equals, the products are equal.
Partition Postulate: a whole is equal to the sum of all its parts.
Postulate: a statement whose truth is accepted without proof.
Powers Postulate: the squares of equal quantities are equal.
Proof: a valid argument that establishes the truth of a statement.
Reflexive Property of Equality: a quantity is equal to itself.
Roots Postulate: positive square roots of positive equal quantities are equal.
Substitution Postulate: a quantity may be substituted for its equal in any statement of equality.
Subtraction Postulate: if equal quantities are subtracted from equal quantities, the differences are equal.
Symmetric Property of Equality: an equality may be expressed in either order.
Theorem: a statement that is proved by deductive reasoning.
Transitive Property of Equality: quantities equal to the same quantity are equal to each other.
Adjacent Angles: two angles in a plane that share a common side and a common vertex but have no interior points in common.
ASA Triangle Congruence: two triangles are congruent if two angles and the included side of one triangle are congruent, respectively, to two angles and the included side of the other.
Complementary Angles: two angles the sum of whose degree measures is 90.
Congruent Polygons: polygons that have the same size and shape.
Corresponding Angles: congruent angles in the same relative position within two figures.
Corresponding Sides: sides that are in the same relative position within two figures.
Linear Pair of Angles: two adjacent angles whose sum is a straight angle.
SAS Triangle Congruence: two triangles are congruent if two sides and the included angle of one triangle are congruent, respectively, to two sides and the included angle of the other.
SSS Triangle Congruence: two triangles are congruent if the three sides of one triangle are congruent, respectively, to the three sides of the other.
Supplementary Angles: two angles the sum of whose degree measures is 180.
Vertical Angles: the nonadjacent angles formed by the intersection of two lines.
Altitude of a Triangle: a line segment (or its length) drawn from a vertex perpendicular to the line containing the opposite side.
Angle Bisector: a segment or ray that divides an angle into two congruent angles.
Circumcenter: the point where the three perpendicular bisectors of the sides of a triangle intersect.
Concurrent Lines: three or more lines that intersect in one point.
Corollary: a theorem that can easily be deduced from another theorem.
Equidistant: the same distance apart at every point.
Geometric Construction: a drawing of a geometric figure done using only a pencil, a compass, and a straightedge, or their equivalents.
Isosceles Triangle Theorem: if two sides of a triangle are congruent, then the angles opposite these sides are congruent.
Median of a Triangle: a line segment that joins any vertex of the triangle to the midpoint of the opposite side.
Perpendicular Bisector of a Line Segment: a line, a line segment, or a ray that is perpendicular to the line segment at its midpoint.
Abscissa: the x-coordinate; the distance from the point to the y-axis.
Axis of Symmetry: the line along which the figure could be folded so that the parts of the figure on opposite sides of the line would coincide.
Composition of Transformations: when two transformations are performed one following the other.
Coordinate Plane: a plane spanned by the x-axis and y-axis, in which the coordinates of a point are its distances from two intersecting perpendicular axes.
Dilation: a transformation that makes an object larger or smaller.
Direct Isometry: a transformation that preserves distance and orientation.
Fixed Point: a point that does not change under a transformation.
Function: a set of ordered pairs in which no two pairs have the same first element.
Glide Reflection: a composition of transformations of the plane that consists of a line reflection and a translation in the direction of the line of reflection, performed in either order.
Image: the figure after a transformation occurs.
Isometry: a transformation that preserves distance.
Line Reflection: the correspondence between object points and image points under reflection in a line.
Line Symmetry: a figure has line symmetry when it is its own image under a line reflection.
Opposite Isometry: a transformation that preserves distance but changes the orientation from clockwise to counterclockwise or from counterclockwise to clockwise.
Ordered Pair: all the points on a plane have coordinates, and they are written (x, y).
Ordinate: the y-coordinate; the distance from the point to the x-axis.
Orientation: location of position relative to the points of the compass.
Origin: the point on the coordinate plane at which the x-axis and the y-axis intersect.
Point Symmetry: a figure has point symmetry when it is its own image under a reflection in a point.
Preimage: the figure before the transformation occurs.
Quarter Turn: a counterclockwise rotation of 90 degrees about the origin.
Rotation: a transformation of the plane about a fixed point P through an angle of d degrees.
Transformation: a one-to-one correspondence between two sets of points, S and S′, such that every point in S corresponds to one and only one point in S′, called its image, and every point in S′ is the image of one and only one point in S, called its preimage.
Translation Symmetry: a figure has translational symmetry if the image of every point of the figure is a point on the figure.
Translation: a transformation of the plane that moves every point in the plane the same distance in the same direction.
X-Axis: the horizontal line on the coordinate plane.
Y-Axis: the vertical line on the coordinate plane.
Addition Postulate of Inequalities: (1) if equal quantities are added to unequal quantities, then the sums are unequal in the same order; (2) if unequal quantities are added to unequal quantities in the same order, then the sums are unequal in the same order.
Adjacent Interior Angle: the interior angle of the triangle that is next to a given exterior angle.
Exterior Angle of a Polygon: an angle that forms a linear pair with one of the interior angles of the polygon.
Remote Interior Angles/Non-Adjacent Interior Angles: the angles of the triangle that are not adjacent to the given exterior angle.
Substitution Postulate of Inequalities: a quantity may be substituted for its equal in any statement of inequality.
Subtraction Postulate of Inequalities: if equal quantities are subtracted from unequal quantities, then the differences are unequal in the same order.
Transitive Property of Inequality: if a, b, and c are real numbers such that a > b and b > c, then a > c.
Triangle Inequality Theorem: the length of one side of a triangle is less than the sum of the lengths of the other two sides.
Trichotomy Postulate: given any two quantities, a and b, one and only one of the following is true: a < b, a = b, or a > b.
Logic: the study of reasoning.
Negation: usually formed by placing the word "not" in the original statement (~).
Conjunction: a compound statement formed by combining 2 simple statements using the word "and" (^).
Disjunction: a compound statement formed by combining 2 simple statements using the word "or" (v).
Conditional: an "if --> then" statement; for example, "If a number is a whole number, then it is an integer."
Biconditional: an "if and only if" (<-->) statement.
Inverse: formed by negating both the hypothesis and the conclusion of a conditional.
Contrapositive: formed by switching and negating the hypothesis and the conclusion of a conditional.
Source: http://quizlet.com/1686872/geometry-10th-flash-cards/ (crawled 2014-04-16)
Power Programming: Bitwise Tips and Tricks - Open Source For You

If you are a seasoned programmer, these tips and tricks will seem very familiar, and are probably already part of your repertoire. If you are a novice programmer or a student, they should help you experience an "Aha!" moment. Independent of what you currently do, these tips and tricks will remind you of the wonderful discoveries in computer science, and the brilliant men and women behind them.

Before we get started, let's establish some conventions for the rest of the article. Figure 1 shows how we represent bits — we start from right to left. The rightmost bit is the "Least Significant Bit" (LSB), and is labelled b0. The "Most Significant Bit" (MSB) is labelled b7. We use 8 bits to demonstrate the concepts. Each concept, however, is generically applicable to 32, 64, and even more bits.

Population count

Population count refers to the number of bits that are set to 1. Typical uses of population counting are:

• Single-bit parity generation and detection: To generate the correct parity bit, depending on the scheme being followed (odd or even parity), one would need to count the number of bits set to 1, and generate the corresponding bit for parity. Similarly, to check the parity of a block of bits, we would need to count the number of 1s, and validate the block against the expected parity.

• Hamming weight: Hamming weight is used in several fields, ranging from cryptography to information theory. The Hamming distance between two strings A and B can be computed as the Hamming weight of "A" XOR "B".

These are just a few of the use cases of population counting; we cannot hope to cover all possible use cases, but rather, just explore a few samples.

First implementation

Our first implementation is the most straightforward:

    int count_ones(unsigned int num)
    {
        int count = 0;
        int mask = 0x1;

        while (num) {
            if (num & mask)
                count++;
            num >>= 1;
        }
        return count;
    }

(The argument is unsigned so that the right shift always brings in zeros and the loop is guaranteed to terminate.) The code creates a mask bit, which is the number 1.
The number is then shifted right, one bit at a time, and checked to see whether its rightmost bit (LSB) is set. If so, the count is incremented. This technique is rather rudimentary, and has a cost complexity of O(n), where n is the number of bits in the block under consideration.

Improving the algorithm

For those familiar with design techniques like divide and conquer, the idea below is a classical trick called the "Gillies-Miller method for sideways addition". This process is shown in Figure 2. As the name suggests, this method involves splitting the bits to count; we start by pairing adjacent bits, and summing them. The trick, though, is that we store the intermediate result in the same location as the original number, without destroying the data required in the next step. The code for the procedure is shown below:

    static inline unsigned char bit_count(unsigned char x)
    {
        x = (0x55 & x) + (0x55 & (x >> 1));
        x = (0x33 & x) + (0x33 & (x >> 2));
        x = (0x0f & x) + (0x0f & (x >> 4));
        return x;
    }

The key points to note for this algorithm are:

• It uses a mask at each step in the algorithm.
• The code takes O(log n) time to complete.

The masks at each step are shown in Figure 3. The masks in the first step have alternate bits set (0x55); this selects alternate bits for summation. The number is summed with itself masked by 0x55, with the entire number shifted right by one bit and masked by 0x55. In effect, this sums the adjacent bits of the word. The bits that are not important are cleared, i.e. set to 0 by the mask.

This procedure is repeated; the goal now is to compute the sum of the intermediate result obtained in the step above. The previous step counted the sum of adjacent bits; it is now time to sum two bits at a time. The corresponding mask for this step is 0x33 (can you see why?). Again, we repeat the procedure by masking the number with 0x33, and adding to it the result of the number right-shifted by 2 and masked by 0x33.
We do something similar in the final step, where we need to count 4 bits at a time, and sum up the result to obtain the final answer.

Figure 2 shows a sample computation for the number 177, which is represented as 10110001. In Step 1, we sum the adjacent bits, leading to 01100001 (the sum of 1 and 0 is 01, the sum of 1 and 1 is 10 (in binary, this represents 2), the sum of 0 and 0 is 00, the sum of 0 and 1 is 01). In the next step, we sum 2 bits at a time, resulting in 00110001 (the sum of 01 and 10 is 0011 — 3 in binary; the sum of 00 and 01 is 0001). In the final step, we sum 4 bits at a time, resulting in 00000100 (the sum of 0011 and 0001 is 00000100 — 4 in binary). As expected, this is also the final outcome, and the result is the number of 1s in the block under consideration. This completes the sideways addition algorithm. As you can see, this algorithm is clearly more efficient than the initial approach.

Exercises

1. We focused on 8 bits in a block to explain the algorithm. This algorithm can easily be extended to 32 or 64 bits and beyond. Write a routine to extend this algorithm to 64 bits, and potentially all the way up to 256 bits.

2. The algorithm specified above (in the section "Improving the algorithm") is not necessarily optimal. Look at the references below to see if a more optimal version can be found and used. Explain what optimisations are possible, and how.

References

• MMIXware: A RISC Computer for the Third Millennium by Donald E Knuth, Springer-Verlag, 1999
• Matters Computational: ideas, algorithms, source code by Jorg Arndt, draft version of 20-June-2010

This article was originally published in the September 2010 issue of the print magazine.

Comments

More efficient? Probably. More readable? Definitely not. Enjoyed the logic.. good work!

This will work for only 8-bit data? (unsigned char?) – in that case, if speed is the main criterion, I will go for a look-up table.
at the most I will waste 256 bytes of memory, but it will be super
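As a sketch of exercise 1 above, here is one common way to widen the sideways addition to 32 bits; the function name bit_count32 and the exact mask sequence are my own choices, not the article's, but the idea is the same: sum adjacent fields in place, doubling the field width at each step.

```c
#include <stdint.h>

/* Gillies-Miller sideways addition widened to 32 bits.
   Each line sums adjacent fields in place: 1-bit fields into
   2-bit fields, then 2 into 4, 4 into 8, 8 into 16, 16 into 32. */
static inline unsigned bit_count32(uint32_t x)
{
    x = (x & 0x55555555u) + ((x >> 1) & 0x55555555u);  /* pairs of bits  */
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  /* nibbles        */
    x = (x & 0x0f0f0f0fu) + ((x >> 4) & 0x0f0f0f0fu);  /* bytes          */
    x = (x & 0x00ff00ffu) + ((x >> 8) & 0x00ff00ffu);  /* 16-bit halves  */
    x = (x & 0x0000ffffu) + (x >> 16);                 /* final total    */
    return x;
}
```

For 177 this still yields 4, matching the 8-bit walkthrough above; for 0xFFFFFFFF it yields 32. The same pattern extends to 64 bits by doubling the mask width once more.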
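The look-up-table idea from the comment above can be sketched like this (the names and the one-pass table-building recurrence are my own choices for illustration, not from the article):

```c
#include <stdint.h>

static unsigned char popcount_table[256];

/* popcount(i) = popcount(i >> 1) + (lowest bit of i),
   so the whole table can be filled in a single pass. */
static void init_popcount_table(void)
{
    for (int i = 1; i < 256; i++)
        popcount_table[i] = popcount_table[i >> 1] + (unsigned char)(i & 1);
}

/* One table probe per byte: four probes cover a 32-bit word. */
static unsigned popcount32_lut(uint32_t x)
{
    return popcount_table[x & 0xffu]
         + popcount_table[(x >> 8) & 0xffu]
         + popcount_table[(x >> 16) & 0xffu]
         + popcount_table[x >> 24];
}
```

As the commenter notes, the cost is 256 bytes of memory; in exchange, counting becomes four loads and three additions per word, with no per-bit loop at all.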
Source: http://www.opensourceforu.com/2012/06/power-programming-bitwise-tips-tricks/ (crawled 2014-04-21)
6th Grade Georgia Math Unit 1 Vocabulary
Source: http://quizlet.com/13644215/print (crawled 2014-04-21)
Sheaves and bundles in differential geometry

Because the theory of sheaves is a functorial theory, it has been adopted in algebraic geometry (both in the functor-of-points approach and in the locally ringed space approach) as the "main theory" used to describe geometric data. All sheaf data in the LRS approach can be described by bundles using the espace étalé construction. It is interesting to notice that the sheafification of a presheaf is the sheaf of sections of the associated espace étalé. However, in differential geometry, bundles are for some reason preferred. Is there any reason why this is true? Are there some bundle constructions which don't have a realization as a sheaf? Are there advantages to the bundle approach?

Tags: dg.differential-geometry, geometry

I must note that by "any bundle constructions that don't have a realization as a sheaf", I mean any useful bundle constructions that don't have a realization as a sheaf. I am aware of the adjunction between $Psh(X)$ and $Bundle(X)$, which restricts to an equivalence of categories between $Sh(X)$ and some subcategory of $Bundle(X)$ that I don't remember offhand (they are covering spaces in the continuous case). – Harry Gindi Mar 9 '10 at 0:10

There is the obvious answer: a lot in mathematics depends on your point of view. Sometimes you care more about spaces themselves, and sometimes you care more about the functions/sections on the spaces. It seems to me to be counterproductive to force yourself to translate everything to the same language, regardless of what the problem you're dealing with is best suited to. That said, I'd be quite interested in what other people have to say about less subjective advantages. – Ilya Grigoriev Mar 9 '10 at 2:22

Sure, but my point was more about the overwhelming prominence of bundles in DG, while sheaves get relatively little exposure in that setting. Bundles are used much more often in AG than sheaves are used in DG, at least in my experience.
– Harry Gindi Mar 9 '10 at 2:46

Answers

If $X$ is a manifold, and $E$ is a smooth vector bundle over $X$ (e.g. its tangent bundle), then $E$ is again a manifold. Thus working with bundles means that one doesn't have to leave the category of objects (manifolds) under study; one just considers manifolds with certain extra structure (the bundle structure). This is a big advantage in the theory; it avoids introducing another class of objects (i.e. sheaves), and allows tools from the theory of manifolds to be applied directly to bundles too.

Here is a longer discussion, along somewhat different lines: The historical impetus for using sheaves in algebraic geometry comes from the theory of several complex variables, and in that theory sheaves were introduced, along with cohomological techniques, because many important and non-trivial theorems can be stated as saying that certain sheaves are generated by their global sections, or have vanishing higher cohomology. (I am thinking of Cartan's Theorems A and B, which have as consequences many earlier theorems in complex analysis.)

If you read Zariski's fantastic report on sheaves in algebraic geometry, from the 50s, you will see a discussion by a master geometer of how sheaves, and especially their cohomology, can be used as a tool to express, and generalize, earlier theorems in algebraic geometry. Again, the questions being addressed (e.g. the completeness of the linear systems of hyperplane sections) are about the existence of global sections, and/or vanishing of higher cohomology. (And these two usually go hand in hand; often one establishes existence results about global sections of one sheaf by showing that the higher cohomology of some related sheaf vanishes, and using a long exact cohomology sequence.)

These kinds of questions typically don't arise in differential geometry. All the sheaves that might be under consideration (i.e.
sheaves of sections of smooth bundles) have global sections in abundance, due to the existence of partitions of unity and related constructions. There are difficult existence problems in differential geometry, to be sure: but these are very often problems in ODE or PDE, and cohomological methods are not what is required to solve them (or so it seems, based on current mathematical practice).

One place where a sheaf-theoretic perspective can be useful is in the consideration of flat (i.e. curvature zero) Riemannian manifolds; the fact that the horizontal sections of a bundle with flat connection form a local system, which in turn determines the bundle with connection, is a useful one, which is well expressed in sheaf-theoretic language. But there are also plenty of ways to discuss this result without sheaf-theoretic language, and in any case, it is a fairly small part of differential geometry, since typically the curvature of a metric doesn't vanish, so that sheaf-theoretic methods don't seem to have much to say.

If you like, sheaf-theoretic methods are potentially useful for dealing with problems, especially linear ones, in which local existence is clear, but the objects are sufficiently rigid that there can be global obstructions to patching local solutions. In differential geometry, it is often the local questions that are hard: they become difficult non-linear PDEs. The difficulties are not of the "patching local solutions" kind. There are difficult global questions too, e.g. the one solved by the Nash embedding theorem, but again, these are typically global problems of a very different type to those that are typically solved by sheaf-theoretic methods.

That is a great answer! – Sam Derbyshire Mar 9 '10 at 3:42

Dear fpqc, In complex geometry sheaves and cohomology certainly play a role, although I'm not close enough to the field to know whether they are as dominant as they were in the heyday of Oka and Cartan.
I would guess that the closer the investigations are to algebraic geometry, the more likely these methods are to play an important role. – Emerton Mar 9 '10 at 3:42

Regarding the statement that studying bundles allows one to stay in the realm of manifolds: I'm curious, do differential geometers not usually care that bundles do not form an Abelian category, i.e., that kernels and cokernels of maps between bundles are not necessarily bundles? When working with locally free sheaves in an algebro-geometric setting, I tend to find it necessary to view them in the larger category of coherent sheaves for this reason. – Mike Skirvin Mar 9 '10 at 3:44

Typically, one considers maps of bundles which are locally of the form $\mathbb R^m \hookrightarrow \mathbb R^n$, so that the quotient is again a bundle. In fact, even more geometrically inclined algebraic geometers tend to distinguish between a map of bundles (in the above sense), and a map of sheaves (such as the map $\mathcal O \hookrightarrow \mathcal O(1)$ given by choosing a hyperplane in projective space). One point to consider is that the latter kinds of maps work well in algebraic geometry in part because the singularities of maps are so tame (zeroes or poles) ... – Emerton Mar 9 '10 at 4:03

@fpqc, Emerton, I would say that most definitely one uses sheaves when studying complex manifolds. Indeed, on an arbitrary complex manifold, cohomology of the "obvious" sheaves is just about all we have. Slowly, however, people are turning to more differential geometric ways to try and understand complex non-Kähler manifolds. In brief, one looks for a Hermitian metric whose 2-form satisfies some PDE analogous to being closed. E.g. work of Streets-Tian on pluriclosed metrics and work of Fu-Li-Yau on balanced metrics.
– Joel Fine Mar 9 '10

In addition to Emerton's answer (which is great), it should be noted that in the complex algebraic and analytic categories, the study of singularities is both tractable and necessary. So it is natural to study "generalized" vector bundles that allow singular fibers, but this is possible only using local sections, i.e. sheaves. On the other hand, in the smooth category, singularities can be quite nasty and therefore are almost always avoided by differential geometers. If everything is smooth and nonsingular, there is little to be gained by using sheaves. I am not familiar with singular differential geometry, but it seems to me that using sheaves in that context might be just as essential as it is in algebraic geometry.

Great answer as well! – Harry Gindi Mar 9 '10 at 6:28

In differential geometry one often also has connections on the bundles, e.g. the Levi-Civita connection on the tangent bundle. Many concepts of differential geometry use connections, such as holonomy or geodesics. I'd love to learn the opposite of the following statement, but I have the feeling that there is no "nice" definition of a connection in the sheaf-theoretical approach. So I think this is another aspect of why bundles are often preferred.

As a little side remark, I'd like to point out that in higher differential geometry the story continues. Curiously, the word "gerbe" - which comes originally from the sheaf-theoretical side - is often at the same time used for the higher analog of a bundle. This is one of the reasons why different people associate such diverse meanings with "gerbe".

My experience with algebraic geometry is limited, but connections (and their cousins the crystals) seem to be used quite often in a sheaf-theoretic context. The connections tend to be flat, i.e., I rarely see people using nonzero curvature to prove things. – S. Carnahan♦ May 22 '10 at 5:10

Connections not only can be defined as a concept for any sheaf of modules whatsoever on the manifold (or analytic space), but on a complex manifold one can show that a coherent sheaf admitting a connection is automatically a vector bundle. See Deligne's SLN book on diff'tl eqns. But the sheaf-theoretic viewpoint is extremely handy for studying these matters, among those who already care about sheaves for other reasons. – BCnrd May 22 '10 at 7:18
Source: http://mathoverflow.net/questions/17545/sheaves-and-bundles-in-differential-geometry?answertab=oldest (crawled 2014-04-16)
Integrated Mathematics Courses and the NCAA Core Course System
by Thurston Banks
January 2001

Depending on the course content, such other courses as "integrated mathematics" may meet the requirement. In determining whether a mathematics course meets the core course requirements, integrated mathematics courses have been problematic for NCAA in the past. As the name implies, they consist of a mixture of mathematics topics that may include components of prealgebra, other topics that are below the level of first-year algebra, or topics that may duplicate material from other courses. Obviously, permitting a student to take two mathematics courses with different names that were both first-year algebra in nature and allowing graduation credit for both those courses would be fruitless. This outcome is the basis of the duplicative argument.

The duplicative nature of course material is even more complex when one considers students who transferred from one high school to another. The first high school may follow a more traditional format in teaching mathematics, and the second may follow an integrated mathematics format that teaches first- and second-year algebra and geometry using a three-year integrated mathematics program. Parts of first-year algebra taken at the first school then appear in the second year of integrated mathematics at the second school and to that extent are duplicative.

Analyses of these courses by clearinghouse personnel, NCAA staff members, or members of the NCAA Mathematics Core Course Subcommittee have been difficult and time-consuming. These analyses have included such factors as inquiring about the specific textbook used for the course, reviewing the table of contents of the textbook, and reviewing the course syllabus or course outline. At times, all materials used in the course were reviewed.
During the period when the NCAA's 75 percent content rule was in effect, this rule was a primary discriminator in the review of these mathematics core courses. The 75 percent content rule stated that for a course to meet the NCAA's mathematics core course requirement, at least 75 percent of the instructional content of the course had to consist of mathematical concepts that were acceptable for the course. This requirement was later dropped, and high schools were asked to assume primary responsibility for core course analysis. In their evaluation of courses as core courses, high schools were asked to focus on such criteria as courses that are college preparatory in nature, that are taught by a qualified instructor, and that are awarded high school graduation credit. The NCAA membership also agreed to reduce the complexity of the mathematics core course analysis by eliminating the Level II mathematics requirement. The Level II requirement stipulated that at least one course must be above the first-year algebra level of content. These changes have no doubt been to the benefit of everyone involved; however, questions still arise when dealing with integrated mathematics courses. It behooves the high schools and the NCAA membership to strive toward a core-course evaluation system that maintains a rigorous high school mathematics program for all college-bound students, since many students who are entering college must take developmental or remedial mathematics courses before beginning their normal college mathematics coursework.

Thurston Banks is a member of the chemistry faculty at Tennessee Technological University, where he serves as the faculty athletics representative. He is also currently a member of the NCAA Core Course Review Committee and serves as chairperson of the Mathematics Subcommittee.
{"url":"http://www.nctm.org/resources/content.aspx?id=1720","timestamp":"2014-04-19T15:35:32Z","content_type":null,"content_length":"47278","record_id":"<urn:uuid:c806a166-d80f-46a9-a6cc-e8028da25a93>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
How to plot CV curve for a MOSCAP in Cadence?

1. 23rd October 2008, 20:38 (Newbie level 2, joined Oct 2008):
Hi, how can I plot the CV curve for a MOSCAP in Cadence? In DC analysis I saw some "op" and "opt" options in the Calculator. I chose the Cgg option to plot vs. Vgs, but it shows only one value. I want to plot the complete CV curve; how can it be done?

2. 26th October 2008, 07:47 (Member level 3, joined Aug 2008):
SPICE can; I am not sure about Cadence. Refer to the manual. Best regards.

3. 26th October 2008, 10:35 (Newbie level 3, joined Jul 2006):
You can simulate it in ADE: sweep the gate voltage as a parameter!

4. 26th October 2008, 15:47 (Newbie level 2, joined Oct 2008):
I did it in Cadence. I swept the gate voltage and shorted the drain, source and bulk terminals to GND, then ran the DC analysis. Now I want to plot the CV curve, but I don't know how to do it. Can anybody tell me the way to do it in Cadence?

5. (Member level 2, joined Mar 2008):
Is it OK to do it in DC analysis? I think the capacitor's value depends on the frequency, right?

6. (Junior Member level 3, joined Sep 2008):
This link is quite helpful: The Designer's Guide Community Forum - Plot the CV curve using the spectre. So, set the frequency fixed and sweep the DC voltage instead (i.e., in the AC analysis choose the Sweep Variable to be a Design Variable, where you have your DC value as a parameter). I must, however, say that I haven't used it frequently, but it seemed to work pretty well for my test setup.
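The last reply's recipe (fix the AC frequency, sweep the DC bias) can be illustrated outside Cadence with a toy numerical model. Everything below is a hypothetical sketch: Q(V) is an invented nonlinear charge curve with made-up values, not a real MOSCAP model; the point is only that the small-signal capacitance at each bias point is C(V) = dQ/dV.

```python
# Toy illustration of a C-V sweep (not Cadence): for a nonlinear capacitor
# with stored charge Q(V), the small-signal capacitance is C(V) = dQ/dV.
# Q(V) = C0*V + a*V**3 is a made-up model (hypothetical values), so the
# exact answer C(V) = C0 + 3*a*V**2 is available for checking the numerics.

C0 = 1e-12   # F     (hypothetical)
a  = 2e-13   # F/V^2 (hypothetical)

def Q(v):
    # stored charge vs. DC bias
    return C0 * v + a * v**3

def small_signal_C(v, dv=1e-6):
    """Capacitance by central difference, mimicking a small fixed-frequency
    AC perturbation applied at DC bias v."""
    return (Q(v + dv) - Q(v - dv)) / (2 * dv)

# sweep the DC bias from -1 V to +1 V and collect (V, C) pairs
cv_curve = [(v / 10.0, small_signal_C(v / 10.0)) for v in range(-10, 11)]
```

With a real device the charge model comes from the simulator; here the sweep just reproduces the analytic C(V) to numerical precision.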
{"url":"http://www.edaboard.com/thread136775.html","timestamp":"2014-04-20T16:50:35Z","content_type":null,"content_length":"73272","record_id":"<urn:uuid:4264bc28-f00e-4849-9091-15fbe1f59d5f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Imperial College/Courses/Fall2009/Synthetic Biology (MRes class)/'R' Tutorial/Practical
From OpenWetWare

Part 1: Random Walks

Below is a basic 'R' script to simulate a random walk along the X axis. Random walks are important to model as they relate, for example, to the phenomenon of diffusion.

• Copy and save the script into a file.
• Run this script within 'R'.

<syntax type='R'>
# Random Walk 1D / 1 Trace

# number of steps
nbSteps <- 1000

# sampling and path construction
# First method: looping
# draw from uniform distribution between [0,1]
x <- runif(n = nbSteps)
# initialise path variable
xSteps <- rep(0, nbSteps)  # as opposed to x1Steps <- 0
# build independent steps series
for (i in 1:nbSteps) {
  if (x[i] > 0.5) {
    xSteps[i] <- 1
  } else {
    xSteps[i] <- -1
  }
}
# build path from independent steps
xPath <- cumsum(xSteps)

# Second method: vectorisation (faster)
# x <- runif(n = nbSteps)
# xSteps <- ifelse(x > 0.5, 1, -1)
# xPath <- cumsum(xSteps)

# plot path
plot(xPath, 1:nbSteps, type = 'l', col = 'red', lwd = 2,
     main = 'Random Walk 1D', xlab = "X coordinate", ylab = "Steps")
</syntax>

• Modify this script so that you can overlay as many independent paths as you want (Tip: use the function capability of 'R').
• Write a function to only return the last X coordinate after nbSteps.
• Use the previous function to build the distribution of X coordinates from 1000 independent simulations, after nbSteps.
• Extend the first script to be able to simulate 2D random walks.

Part 2: Enzymatic reaction analysis

In this section, we will be looking at enzymatic reaction data, using a Michaelis-Menten model. The purpose of the 'R' script is to automatise enzyme characterisation from experimental data.

Part 2a: Linear regression analysis

• Copy this script and run it.

<syntax type='R'>
# Enzymatic analysis: Linear regression analysis
concentration <- c(0.3330, 0.1670, 0.0833, 0.0416, 0.0208, 0.0104, 0.0052)
velocity <- c(3.636, 3.636, 3.236, 2.666, 2.114, 1.466, 0.866)
df <- data.frame(concentration, velocity)
plot(df$concentration, df$velocity)
df$ytrans <- 1/df$velocity
df$xtrans <- 1/df$concentration
par(ask=T)
plot(df$xtrans, df$ytrans)
lmfit <- lm(ytrans~xtrans, data=df)
coefficients <- coef(lmfit)
print(coefficients)
par(ask=T)
plot(df$xtrans, df$ytrans)
abline(coefficients)
Vmax <- 1/coef(lmfit)[1]
Km <- Vmax*coef(lmfit)[2]
print(Vmax)
print(Km)
</syntax>

Part 2a: Questions

• Using the help functions in 'R', add comments to the script to detail each command. Also use 'plot' arguments to make the plot labels more explicit.
• Modify the script so that experimental data can be read from a file, and analytical results can be exported into a file.

Part 2b: Non-linear regression

In this section, we will use a non-linear regression method to estimate the Michaelis-Menten parameters from the data. The non-linear regression method explored here is the least-square method.

<syntax type='R'>
# Enzymatic analysis: Non-linear regression (least-square optimisation)
concentration <- c(0.3330, 0.1670, 0.0833, 0.0416, 0.0208, 0.0104, 0.0052)
velocity <- c(3.636, 3.636, 3.236, 2.666, 2.114, 1.466, 0.866)
df <- data.frame(concentration, velocity)
plot(df$concentration, df$velocity)
nlsfit <- nls(velocity~Vmax*concentration/(Km+concentration), data=df,
              start=list(Km=0.0166, Vmax=3.852))
par(ask=T)
plot(df$concentration, df$velocity, xlim=c(0,0.4), ylim=c(0,4))
x <- seq(0, 0.4, length=100)
y2 <- (coef(nlsfit)["Vmax"]*x)/(coef(nlsfit)["Km"]+x)
lines(x, y2)
</syntax>

Part 2b: Questions

• As before, using the 'R' help, comment this code.
• Modify the code so that the initial values used by the non-linear regression can be read from the parameter file you created in Part 2a.
• Plot on the same graph both fits (linear regression and non-linear regression).

Part 3: Constitutive Gene Expression Modelling

In this section, we will use 'R' to simulate a simple constitutive gene expression model.

$Gene \rightarrow mRNA \rightarrow Protein$

The model is given by a system of 2 differential equations to account for the expression of mRNA molecules and Protein molecules. Following the law of mass action, we can write:

\begin{alignat}{1}
\frac{d[mRNA]}{dt} & = k_{1} - d_{1}[mRNA] \\
\frac{d[Protein]}{dt} & = k_{2}[mRNA] - d_{2}[Protein] \\
\end{alignat}

• k_1 is the transcription rate. It is considered to be constant, and it represents the number of mRNA molecules produced per gene, and per unit of time.
• d_1 is the degradation rate of the mRNA molecule. The typical half-life of mRNAs in E. coli has been measured to be between 2 min and 8 min (average 5 min).
• k_2 is the translation rate. It represents the number of protein molecules produced per mRNA molecule, and per unit of time.
• d_2 is the protein degradation rate.

The script below uses this model to simulate mRNA and Protein time series. It uses the 'odesolve' package in 'R', and its 'lsoda' function. Check ?lsoda for more details.

<syntax type='R'>
# Gene Expression ODE model
library(odesolve)

params <- c(k1=0.1, d1=0.05, k2=0.1, d2=0.0025)
times <- seq(from=1, to=3600, by=10)

geneExpressionModel <- function(t, y, p) {
  dm <- p["k1"] - p["d1"]*y["m"]
  dp <- y["m"]*p["k2"] - p["d2"]*y["p"]
  res <- c(dm, dp)
  list(res)
}

results <- lsoda(c(m=0, p=0), times, geneExpressionModel, params)
plot(results[,1], results[,2], xlim=c(0,3800), ylim=c(0,100))
lines(results[,1], results[,3])
</syntax>

Part 3: Questions

• Modify this script so you can read the parameters (k1, k2, d1, d2) from a file, and store the results of the simulation into a file.
• Create a function to return the mRNA and protein steady-states from the parameters (k1, k2, d1, d2).
• Plot the steady-state level of mRNA and protein on top of the simulated results.
• As you can observe, the mRNA level reaches steady-state very quickly in comparison to the protein level. It is therefore possible to use a quasi-steady-state hypothesis on the mRNA, and assume that the level of mRNA is already at steady-state from the start of the simulation. It helps to simplify the model to:

$Gene \rightarrow mRNA \rightarrow Protein$

\begin{alignat}{1}
\frac{d[Protein]}{dt} = s - d[Protein] \\
\end{alignat}

Modify your script to take this simplification into account, and compare both outputs to evaluate how good the quasi-steady-state assumption is.

Part 4: Repressed Gene Expression

Very few genes are known to have a purely constitutive expression; most genes have their expression controlled by some outside signals (DNA-binding proteins, temperature, metabolites, RNA molecules, ...). In this section, we will particularly focus on the study of DNA-binding proteins, called transcription factors. These proteins, when binding to a promoter region, can have either an activation effect on the gene (positive control) or a repression effect (negative control). In prokaryotes, control of transcriptional initiation is considered to be the major point of regulation. In this part of the tutorial, we investigate one of the most common models used to describe this type of interaction.

Let's first consider the case of a transcription factor acting as a repressor. A repressor will bind to the DNA so that it prevents the initiation of transcription. Typically, we expect the transcription rate to decrease as the concentration of repressor increases. A very useful family of functions to describe this effect is the Hill function:

$f(R)=\frac{\beta.{K_m}^n}{{K_m}^n+R^n}$.

The Hill function can be derived by considering the transcription factor binding/unbinding to the promoter region to be at equilibrium (similar to the enzyme-substrate assumption in the Michaelis-Menten formula).
This function has 3 parameters: β, n, K_m:
• β is the maximal expression rate when there is no repressor, i.e. f(R = 0) = β.
• K_m is the repression coefficient (units of concentration); it is equal to the concentration of repressor needed to repress the overall expression by 50%, i.e. $f(K_m)=\frac{\beta}{2}$.
• n is the Hill coefficient. It controls the steepness of the switch between no repression and full repression.

\begin{align}
& Repressor \\
& \bot \\
Gene &\rightarrow mRNA \rightarrow Protein
\end{align}

Hill function for transcriptional repression:
• k_1: maximal transcription rate
• K_m: repression coefficient
• n: Hill coefficient

Following the law of mass action, we can write the following ODE model:

\begin{alignat}{1}
\frac{d[mRNA]}{dt} & = \frac{k_{1}.{K_m}^n}{{K_m}^n+R^n} - d_{1}[mRNA] \\
\frac{d[Protein]}{dt} & = k_{2}[mRNA] - d_{2}[Protein] \\
\end{alignat}

Part 4: Questions

• Write an 'R' script to simulate the mRNA and Protein expression as a function of the repressor concentration.
• We want to plot the transfer function between the repressor concentration [R] and the protein steady-state level.
• Suggest an application where this genetic circuit might be useful.
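The models of Parts 3 and 4 can also be cross-checked outside R. The sketch below is written in Python for convenience; it integrates the equations with a plain forward-Euler step and compares the result against the analytic steady states m* = k1/d1 and p* = (k2/d2) m*. The parameter values k1, d1, k2, d2 are those of the script above, while Km and n are hypothetical (the tutorial leaves them unspecified):

```python
# Forward-Euler integration of the constitutive model of Part 3, with the
# Hill repression of Part 4 available through the repressor level R
# (R = 0 recovers the constitutive model).  k1, d1, k2, d2 as above;
# Km and n are hypothetical placeholder values.
k1, d1, k2, d2 = 0.1, 0.05, 0.1, 0.0025
Km, n = 1.0, 2.0   # hypothetical Hill parameters

def simulate(R=0.0, t_end=3600.0, dt=0.1):
    """Integrate d[mRNA]/dt and d[Protein]/dt with forward Euler."""
    m = p = 0.0
    for _ in range(round(t_end / dt)):
        tx = k1 * Km**n / (Km**n + R**n)   # Hill-repressed transcription rate
        dm = tx - d1 * m
        dp = k2 * m - d2 * p
        m += dt * dm
        p += dt * dp
    return m, p

def steady_state(R=0.0):
    """Analytic steady states: m* = tx/d1, p* = k2*m*/d2."""
    tx = k1 * Km**n / (Km**n + R**n)
    m_ss = tx / d1
    return m_ss, k2 * m_ss / d2
```

For R = 0 the steady states are m* = 2 and p* = 80 molecules, and the simulated trajectory approaches them on the slow 1/d2 = 400 s timescale, which is the observation behind the quasi-steady-state exercise above.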
{"url":"http://www.openwetware.org/wiki/Imperial_College/Courses/Fall2009/Synthetic_Biology_(MRes_class)/'R'_Tutorial/Practical","timestamp":"2014-04-20T05:45:06Z","content_type":null,"content_length":"33772","record_id":"<urn:uuid:15b07c3f-a7c0-404b-afd0-1006c228c92a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Displacement-Wave Simulation

As discussed in [121], displacement waves are often preferred over force or velocity waves for guitar-string simulations, because such strings often hit obstacles such as frets or the neck. To obtain displacement from velocity at a given point on a digital waveguide string, the velocity there may be integrated with respect to time.

To convert our force-wave simulation to a displacement-wave simulation, we may first convert force to velocity using the Ohm's law relations for traveling waves, and then integrate the velocity signals with respect to time (in advance of the simulation). Since the Ohm's law relations are simply scalings of the traveling velocity waves by plus or minus the wave impedance, force and velocity waves scatter identically. In more general situations, we can go to the Laplace domain and replace each occurrence of a velocity signal by the corresponding time-integrated signal, i.e., follow it with an integrator filter. In an all-velocity-wave simulation, each signal then gets multiplied by the same integrator transfer function, so all filters in the diagram can be pushed through the wave-impedance scalings without changing the signal flow diagram, which remains a force-wave simulation up to minus signs, scalings, and initial conditions. In an impedance description, the integration constants obtained by time-integrating velocities to get displacements are all defined to be zero.

Additional considerations regarding the choice of displacement waves over velocity (or force) waves are given in §E.3.3. In particular, their initial conditions can be very different, and traveling-wave components tend not to be as well behaved for displacement waves.
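As a numerical aside (assumed for illustration, not part of the original text), the two conversions described above can be sketched in a few lines: dividing a force-wave signal by the wave impedance gives the corresponding velocity wave (up to a sign for the left-going component), and a running sum implements the discrete-time integration from velocity to displacement with the integration constant set to zero.

```python
# Minimal sketch of the two conversion steps described above.

def force_to_velocity(f, R):
    """Ohm's-law scaling for a right-going traveling wave: v = f / R.
    (The left-going component would carry an extra minus sign.)"""
    return [fn / R for fn in f]

def velocity_to_displacement(v, T=1.0):
    """Running sum y[n] = y[n-1] + T*v[n], a discrete-time stand-in for
    y(t) = integral of v dt, with zero initial state (integration
    constant defined to be zero, as in the text)."""
    y, acc = [], 0.0
    for vn in v:
        acc += T * vn
        y.append(acc)
    return y
```

A constant unit velocity, for instance, integrates to a linearly growing displacement ramp, as expected.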
{"url":"https://ccrma.stanford.edu/~jos/pasp/Displacement_Wave_Simulation.html","timestamp":"2014-04-18T10:27:54Z","content_type":null,"content_length":"14640","record_id":"<urn:uuid:bd3f6eb8-e190-4ac2-bd37-b451e81c49c8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
MOS Capacitor

Hello everyone, my question is about the MOS capacitor. We know that if we apply a positive gate voltage to a p-type MOS capacitor, holes are repelled from the semiconductor-oxide interface, leaving behind negative charge due to the ionized acceptor ions; this results in a depletion region. Now the question is: if we injected the required negative charges (electrons) into the semiconductor from its back contact, would the depletion region vanish (be neutralized)?
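As a side note (standard textbook material, not part of the thread), the size of the depletion region in question can be estimated with the depletion approximation; the sketch below assumes silicon and illustrative values for the surface potential and acceptor doping.

```python
import math

# Depletion-approximation estimate of the depletion width under the gate:
#     W = sqrt(2 * eps_si * psi_s / (q * Na))
# All numbers below are illustrative assumptions, not from the thread.

Q_E    = 1.602e-19          # elementary charge, C
EPS_SI = 11.7 * 8.854e-12   # silicon permittivity, F/m

def depletion_width(psi_s, Na_m3):
    """Depletion width (m) for surface potential psi_s (V) and
    acceptor doping Na (m^-3)."""
    return math.sqrt(2.0 * EPS_SI * psi_s / (Q_E * Na_m3))

# e.g. psi_s = 0.8 V and Na = 1e17 cm^-3 = 1e23 m^-3 gives roughly 100 nm
W_example = depletion_width(0.8, 1e23)
```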
{"url":"http://www.physicsforums.com/showthread.php?s=29fca60efd9a54db68b7a2fc424655d4&p=4437238","timestamp":"2014-04-16T13:53:50Z","content_type":null,"content_length":"23438","record_id":"<urn:uuid:d422c433-d237-4b9d-85fc-fc658811ea53>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Peterstown, NJ SAT Math Tutor Find a Peterstown, NJ SAT Math Tutor ...For students whose goal is to learn particular subjects, I make sure that the student understands the basics prior to delving into the details. In a nutshell, I provide tutoring based on the student's need. Thank you for your time reading this profile! 15 Subjects: including SAT math, chemistry, calculus, geometry ...I am familiar with all of the aspects of the admissions process, including alumni interviews, essay writing, test preparation, etc. I just completed AP Art History, and I have spent a great deal of time in art museums. I have completed college level criminal justice coursework, and have a signi... 43 Subjects: including SAT math, reading, English, algebra 1 ...I spend extra time getting to know each student beyond the numbers. By getting to know the whole person, I avoid bumps in the road and am able to smoothly navigate a pathway to success. My students obtain optimal outcomes from their efforts. 52 Subjects: including SAT math, English, reading, writing ...I take an interactive approach to tutoring, and encourage students to give lots of feedback, dictate the pace, and maintain a constant dialogue with me. I don't believe in lecturing too much--you will learn the material much better when you are talking out loud through examples with some guidanc... 10 Subjects: including SAT math, physics, calculus, geometry ...Even some certified math teachers are not fluent in this subject. I spent 36 years as a mathematics teacher and 22 of those years supervising a department of approximately 30 mathematics teachers. Teaching students how to study was always a priority in our professional development meetings. 
9 Subjects: including SAT math, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/Peterstown_NJ_SAT_math_tutors.php","timestamp":"2014-04-19T20:24:50Z","content_type":null,"content_length":"24163","record_id":"<urn:uuid:32258810-ace9-4c5f-b2a2-1e016d68be30>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Osmania University, B.E. (Mechanical), 1st Semester Main Examination, Nov/Dec 2009: Mechanics of Materials and Dynamics, model question paper.
Posted Date: 09 Jul 2012. Posted By: Rajesh (Member Level: Gold).
Course: B.Tech Mechanical Engineering. University/Board: Osmania University.

University Name: Osmania University
Paper name: MECHANICS OF MATERIALS AND DYNAMICS
Academic Year: 2009
Course: B.Tech Second Year, First Semester Examination

Answer all questions of Part A. Answer five questions from Part B.

1. State and explain Hooke's law.
2. Explain the effect of a change of temperature in a composite bar.
3. What do you understand by the term "point of contraflexure"?
4. Describe the procedure for finding out the slope and deflection of a cantilever beam of composite section.
5. Define flexural rigidity and torsional rigidity.
6. Sketch the shear stress distribution in a circular shaft.
7. Write the significance of Mohr's circle and its uses.
8. Distinguish between circumferential stress and longitudinal stress in a cylindrical shell subjected to an internal pressure.
9. What do you mean by the terms column and strut? Distinguish clearly between long columns and short columns.
10. Give the assumptions for determining the stresses in the bending of curved bars.
11.
A copper rod, 25 mm in diameter, is encased in a steel tube of 30 mm internal diameter and 35 mm external diameter. The ends are rigidly attached. The composite bar is 500 mm long and is subjected to an axial pull of 30 kN. Find the stress induced in the rod and the tube. Take E for steel = 2*10^5 N/mm^2 and E for copper = 1*10^5 N/mm^2.
12. A horizontal beam, 30 m long, carries a uniformly distributed load of 10 kN/m over the whole length and a concentrated load of 30 kN at the right end. If the beam is freely supported at the left end, find the position of the second support so that the bending moment on the beam is as small as possible. Draw the diagrams of shearing force and bending moment and insert the principal values.
13. At a point in an elastic material under strain, there are normal stresses of 50 N/mm^2 and 30 N/mm^2, respectively, at right angles to each other, with a shearing stress of 25 N/mm^2. Find the principal stresses and the positions of the principal planes if a) 50 N/mm^2 is tensile and 30 N/mm^2 is also tensile, b) 50 N/mm^2 is tensile and 30 N/mm^2 is compressive. Find also the maximum shear stress and its plane in both cases.
14. A 30 cm * 16 cm rolled steel joist of I-section has flanges 11 mm thick and a web 8 mm thick. Find the safe uniformly distributed load that this section will carry over a span of 5 m if the permissible skin stress is limited to 120 N/mm^2.
15. Derive the expression for the deflection of a simply supported beam subjected to a central point load by the double integration method.
16. Compare the crippling loads given by Euler's and Rankine's formulae for a tubular steel strut 2.3 m long having outer and inner diameters of 38 mm and 33 mm respectively, loaded through pin joints at each end. Take the yield stress as 335 N/mm^2, the Rankine constant = 1/7500 and E = 0.205*10^6 N/mm^2. For what length of strut of this cross-section does the Euler formula cease to apply?
17. A C.I.
pipe has 20 cm internal diameter and 50 mm metal thickness, and carries water under a pressure of 5 N/mm^2. Calculate the maximum and minimum intensities of circumferential stress, and sketch the distribution of circumferential stress intensity and the intensity of radial pressure across the section.
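As an illustration, Q.11 above can be checked numerically: equal strains in the rod and the tube plus force equilibrium give the two stresses directly. The sketch below (not part of the paper) encodes those two conditions.

```python
import math

# Q.11 check: copper rod d = 25 mm inside a steel tube 30/35 mm, ends
# rigidly attached, axial pull P = 30 kN.
# Compatibility (equal strain): sigma_s / E_s = sigma_c / E_c
# Equilibrium:                  sigma_c * A_c + sigma_s * A_s = P

E_s, E_c = 2.0e5, 1.0e5                # N/mm^2, steel and copper
A_c = math.pi / 4 * 25**2              # mm^2, copper rod area
A_s = math.pi / 4 * (35**2 - 30**2)    # mm^2, steel tube area
P = 30e3                               # N

m = E_s / E_c                          # modular ratio: sigma_s = m * sigma_c
sigma_c = P / (A_c + m * A_s)          # copper stress, N/mm^2
sigma_s = m * sigma_c                  # steel stress, N/mm^2
```

The stresses come out at roughly 30 N/mm^2 in the copper and twice that in the steel, the factor of two being exactly the modular ratio.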
{"url":"http://www.indiastudychannel.com/question-papers/127010-Osmania-university-b.e.-mechanical-1-sem.-main-examination-nov-dec-2009-mechanics.aspx","timestamp":"2014-04-16T04:13:52Z","content_type":null,"content_length":"27541","record_id":"<urn:uuid:7f9c7ce0-93c2-436e-ac83-b6f6b2537488>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello, My Teacher Assigned A Homework Problem About ... | Chegg.com

Hello, my teacher assigned a homework problem about a series-parallel circuit. I was wondering if you could help me find the total resistance, the total current, the current through each resistor, and the voltage drop across each resistor. I have no clue how to do these things. It would be a nice help if you showed me step by step how to get each answer.

Electrical Engineering
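The original post gives no component values, so the example below is purely hypothetical: a 12 V source driving R1 in series with the parallel pair R2 || R3 (all values invented). It walks the standard steps: combine the parallel pair, get the total resistance and current, then split the drops and branch currents.

```python
# Hypothetical series-parallel circuit (values invented, not from the post):
# V source -> R1 -> (R2 in parallel with R3) -> back to source.
V, R1, R2, R3 = 12.0, 100.0, 200.0, 300.0   # volts and ohms

R23 = 1.0 / (1.0 / R2 + 1.0 / R3)  # parallel combination of R2 and R3
R_total = R1 + R23                 # total resistance seen by the source
I_total = V / R_total              # total current (also the current in R1)
V1 = I_total * R1                  # voltage drop across R1
V23 = V - V1                       # drop across the parallel pair
I2, I3 = V23 / R2, V23 / R3        # branch currents through R2 and R3
```

Two sanity checks always hold: the two drops add back up to the supply voltage, and the two branch currents add back up to the total current.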
{"url":"http://www.chegg.com/homework-help/questions-and-answers/hello-teacher-assigned-homework-problem-series-parralel-circut-wonder-possible-could-help--q3158276","timestamp":"2014-04-17T14:58:11Z","content_type":null,"content_length":"21122","record_id":"<urn:uuid:ebcdacb7-b431-4f68-a539-5550a9db27e8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
Weibull Analysis of Solder Joint Failure Data II

Last time we introduced Weibull analysis. Let's derive the relationships needed to calculate the slope, beta, and characteristic life, eta. F(t) is the cumulative fraction of fails, from 0 to 1. Starting from the Weibull CDF

F(t) = 1 - e^(-(t/eta)^beta),

taking logarithms twice gives

ln ln[1/(1 - F(t))] = beta*ln(t) - beta*ln(eta).

So by choosing ln(t) as x and ln ln 1/(1 - F(t)) as y, we would expect a straight line, and plotting failure data confirms that this is so. Thus if we plot F(t) versus t on Weibull probability paper, the slope of the line will be beta. To determine eta, let t = eta in the CDF above. The result is F(eta) = 1 - e^(-1) = 0.632. So the time at which 63.2% of the parts have failed is eta, the characteristic life.

Let's consider some data comparing SAC305 and SACM (SAC105 with about 0.1% manganese) BGA solder balls in thermal cycle testing. The primary test vehicle employed was a TFBGA with NiAu finish mounted on a PCB with an OSP finish. SACM is a new breakthrough soldering alloy that has better drop-shock resistance than SAC105 and comparable thermal cycle performance to SAC305. The data follow. The first column is the sample number; the third and fifth columns are the number of thermal cycles to failure for SAC305 and SACM. The second and fourth columns are the rank of the sample number. One would think that the first number in the second column would be 100*(1/15) = 6.67%, as it represents the cumulative percent of samples failed, but a slight correction factor is needed. By plotting the log log of rank (ln ln 1/(1 - F(t))) versus the log of cycles at failure, we get the Weibull plot. The slope of the best-fit line is equal to beta, and the number of cycles at rank = 63.2% is eta. Fortunately, software like Minitab 16 does the plotting and the calculating of beta and eta automatically. The results are below: We see that the shape (beta) for SAC305 is 1.76 and that of SACM is 6.09; the scale or characteristic life (eta) is 1736.8 and 2016.8 cycles respectively. These results are a strong vote of confidence for SACM.
Its steep slope (high beta) suggests a tighter distribution, with more consistent solder joints, and its characteristic life (eta) is also slightly greater.

I plan on teaching detailed workshops on this topic. I will keep you posted.

Cheers,

Dr. Ron
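The median-rank regression described in the post can be reproduced without Minitab. The sketch below (an independent re-implementation, not Dr. Ron's worksheet) uses Benard's approximation for the ranks and a least-squares line through (ln t, ln ln 1/(1-F)) to recover beta and eta.

```python
import math

def weibull_mrr(failure_times):
    """Median-rank regression for a complete sample: fit
    y = ln ln[1/(1-F)] against x = ln t; the slope is beta and
    y = 0 occurs at t = eta."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        F = (i - 0.3) / (n + 0.4)          # Benard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(math.log(1.0 / (1.0 - F))))
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))   # slope = beta
    a = ybar - b * xbar                        # intercept = -beta*ln(eta)
    return b, math.exp(-a / b)                 # (beta, eta)
```

Feeding it exact Weibull quantiles at the median ranks returns the generating beta and eta essentially exactly, which is a convenient self-check before using it on real cycle counts.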
{"url":"http://circuitsassembly.com/blog/?p=3350","timestamp":"2014-04-18T05:37:55Z","content_type":null,"content_length":"29774","record_id":"<urn:uuid:75f37d62-9a00-416c-84a2-48a409d8477c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Euclidean geometry

You probably think you are sitting in Euclidean space right now. But how can you be certain of this? Isn't it possible that you are living in a geometry that looks exactly like Euclidean space inside a ball centred on the earth with some enormous radius (let's say a few light years) but is different outside that region? How could you tell? In fact there are non-Euclidean geometries that look like Euclidean space but are not Euclidean space. I would love to know how Euclid would react to this suggestion. But now the story is getting out of order. It all starts with Euclidean geometry. Euclid proposed various axioms or postulates in his Elements and, starting from those, deduced the familiar theorems of Euclidean geometry. One of those axioms, the parallel postulate, seemed less basic than the others, and many people tried (unsuccessfully) to show that it could be deduced from the others. Fast forward two thousand years and Gauss solved the problem. He showed that there was a very interesting geometry, called hyperbolic geometry or Lobachevski geometry, which satisfies all of Euclid's axioms except the parallel postulate, which he replaced by a variant. The reason Lobachevski gets a mention here (and also János Bolyai) is that Gauss didn't publish his results. Lobachevski published in 1826 and Bolyai in 1829, ten years or so after Gauss' discoveries. To go further than this we need to make a precise mathematical definition of what a geometry is, and we need to see some examples to convince us that the notion is worth pursuing. Let's just say for now that a geometry is a set equipped with a distance function that has to satisfy certain natural properties. From this point of view Euclidean space with the usual Pythagorean distance is just an example of a geometry. This is a modern way to think about geometry, where we have properly divorced mathematics from the real world.
That doesn't mean that we must stop using our intuition to guess what might be true, but we always remember that mathematics is what follows from the axioms of set theory, not something that we measure or observe. Of course this point of view doesn't stop mathematics from being useful for modelling the real world! Actually, it's worth pointing out that there are some problems with Euclid's axioms and his logic. Despite this the Elements is an incredible achievement and we shouldn't be too harsh. Euclid was a man of his time. Let's consider spherical geometry. First of all we have the natural distance function on the sphere: the distance between two points is the shortest distance travelled by walking from one point to the other along the sphere's surface. This is not some abstract nonsense, this is how we measure distance on the earth! Also, note that this is not the same thing as the Euclidean distance between these points (travelling along a straight line burrowing through the earth is not very congenial just to get from A to B). For example, if the sphere has diameter d then the distance between the two poles in spherical geometry is d·pi/2, whereas the Euclidean distance is d. Once you have a distance function that satisfies certain natural properties you are ready to do some geometry. What is the correct notion of line for this geometry? It turns out that great circles of the sphere are the analogues of lines. Think about it. This is a big difference from the usual Euclidean geometry. Suddenly lines have finite length! If you take two distinct points on the sphere that are not antipodal then there is a unique great circle passing through the points (just like Euclidean geometry). But if they are antipodal then there are infinitely many great circles through the two points! So this is quite different from Euclidean geometry, where two distinct points determine a unique straight line. Another interesting difference is that the sum of the angles in a triangle
turns out to be greater than pi.

Enough non-Euclidean weirdness, it's time for the definition now.

Definition. A geometry is a set X together with a function d : X x X --> R that satisfies the following axioms for all a, b, c in X.

1. d(a,b) >= 0, and d(a,b) = 0 iff a = b.
2. d(a,b) = d(b,a).
3. d(a,c) <= d(a,b) + d(b,c).
4. Given any two points a, b in X and any two positive real numbers δ, ε there exist points p1, ..., pn such that p1 = a and pn = b, for 1 <= i <= n-1 we have d(p_i, p_{i+1}) < δ, and 0 <= d(p1,p2) + ... + d(p_{n-1},p_n) - d(a,b) < ε.

The first three axioms are fairly natural for a notion of distance, so let's just discuss the last one. Intuitively, you should think of this as saying that there is a curve in our geometry joining a and b (approximated by the line segments of the definition) so that the distance along the curve is the distance between a and b.

Here's another example: a geometry on the torus. This is a nice one. We start with a square strip

  |     B     |
  |           |
  |A         A'|
  |           |
  |     B'    |

and we imagine that the points on opposite edges are identified. So, for example, we think of the edge points A and A' as being equal, and likewise B and B' are equal. Topologically what we get from this identification is a torus. Think about physically glueing the edge with A on it to the edge with A' on it, to make a cylinder. Then stretch and bend this cylinder around to glue its two ends together.

Now we want to think about a distance function on this set. This is defined as follows: for two points on the strip, the distance between them is just the usual Euclidean distance, except that if we get a shorter distance by using the edge identification to zip from one edge to another then we will always take that.

  |            |
  | a        b |
  |            |
  |            |
  | a'       b'|
  |            |

For example, the distance from a to a' for this geometry is the same as the usual Euclidean distance, but the distance between a and b would be the sum of the Euclidean distances of a to the left-most edge and of b to the right-most edge.
It is tempting to think that this example is quite similar to spherical geometry, we just replaced a sphere by a torus, but this is wrong. When we made the identifications to create the torus, the bending and stretching we did distorted distance, so the distance between two points on the torus for this geometry is not just the distance we get by walking along the surface of the torus. One of the important ideas to grasp here is that we are thinking about geometry from the perspective of an inhabitant of the geometry. We are not thinking about our set as sitting inside some Euclidean space. What we are doing here is intrinsic. This thinking, which is the modern viewpoint, goes back to Gauss.
Space Derivatives of the Flow of a vector field

Suppose I have a smooth vector field that has the form $$ X(y) = \sum_j \lambda_j y^j \partial_j + \text{higher order terms}$$ for $\lambda_j>0$. Let $\Phi_t$ be the flow of $X$. Then it follows that $\Phi_t(y) \longrightarrow 0$ for $y$ near $0$ as $t \longrightarrow - \infty$. I am now looking for estimates on the $y$-derivatives. Precisely, suppose that $K$ is a compact neighborhood of $0$ that lies in the unstable manifold near the point $0$. I would like to have a statement like "For every multiindex $\alpha$, there exists a constant $C>0$ such that $$ \sup_{y \in K} |D^\alpha_y \Phi_t(y)| \leq C e^{t\lambda}$$ for all $t<0$ and $y \in K$, where $\lambda$ is the smallest eigenvalue of the linearization of $X$ at $0$." Is some statement like this true? Where can I find it, or how do I prove it?

1 Answer

This is certainly true if you choose $\lambda$ to be strictly smaller than the smallest eigenvalue of $DX(0)$. You may prove it inductively, by noticing that for a given $y$ the function $t\mapsto D^{\alpha}_y \Phi_t(y)$ solves a linear equation. For instance, the first step goes as follows: the path of matrices $W(t):= D_y \Phi_t(y)$ solves the ODE $$ W'(t) = DX(\Phi_t(y)) W(t), \quad W(0)=I, $$ where $\|DX(\Phi_t(y)) - DX(0)\| \leq C_0 e^{\lambda_0 t}$ for all $t\leq 0$. Then for every $\lambda_1<\lambda_0$ you can find $C_1$ such that $\|DX(\Phi_t(y)) - DX(0)\| \leq C_1 e^{\lambda_1 t}$ for all $t\leq 0$.

A useful lemma for proving this and getting the uniformity you need is the following: given a continuous bounded path of matrices $t\mapsto A(t)$, $t\geq 0$, denote by $W_A(t)$ the solution of the linear Cauchy problem $$ W_A'(t) = A(t) W_A(t), \quad W_A(0) = I. $$ Assume that $\|W_A(t)W_A(s)^{-1}\|\leq c e^{\lambda (t-s)}$ for every $t\geq s\geq 0$.
Then for every continuous bounded path of matrices $t\mapsto H(t)$, $t\geq 0$, there holds $$ \| W_{A+H}(t)W_{A+H}(s)^{-1}\|\leq c e^{\mu (t-s)}, \quad \forall t\geq s\geq 0, $$ with $\mu := \lambda + c \|H\|_{\infty}$. (Sorry if here I switched to positive time, that's just because I am more used to working with stable manifolds.)

Do you have any references for this? It is quite hard to follow your comment. For example, what is $X_A(t)$? – Kofi May 25 '12 at 15:14

@Kofi. $X_A$ was the same thing as $W_A$, I just edited my answer fixing this (sorry for the confusion). Unfortunately I do not know a reference where your statement is explicitly proved. What I wrote should be enough for the case of first order derivatives; for higher derivatives you also need the formula of variation of arbitrary constants (higher order derivatives solve an inhomogeneous linear equation). If you find difficulties in proving it I can try to write more details. – Alberto Abbondandolo May 25 '12 at 17:15

Alberto, maybe Kofi just asks for a reference for the lemma, in which case we do have it (as Lemma 1.1 springerlink.com/content/kek5k4da1h33a444/fulltext.pdf ) – Pietro Majer May 25 '12 at 18:09
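The decay claimed in the question can be sanity-checked numerically in one dimension. The vector field, the constant C = 2, and the RK4 integrator below are our own illustration, not from the thread; with X(x) = x + x^2 the linearization at 0 has the single eigenvalue lambda = 1, and the finite-difference derivative of the flow map should be bounded by C e^{lambda t} for t < 0.

```python
import math

def X(x):
    # 1-D vector field with a hyperbolic fixed point at 0, DX(0) = 1
    return x + x * x

def flow(x0, t, steps=2000):
    """Approximate Phi_t(x0) with classical RK4 (t may be negative)."""
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = X(x)
        k2 = X(x + 0.5 * h * k1)
        k3 = X(x + 0.5 * h * k2)
        k4 = X(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

t = -5.0
lam = 1.0   # smallest (only) eigenvalue of DX(0)
eps = 1e-6  # finite-difference step for D_y Phi_t(y)
for y in [-0.2, -0.1, 0.0, 0.1, 0.2]:       # a compact patch K around 0
    D = (flow(y + eps, t) - flow(y - eps, t)) / (2 * eps)
    assert 0 < D <= 2.0 * math.exp(lam * t)  # |D Phi_t| <= C e^{lambda t}
```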
Partial differentiation

How do I do the partial differentiation of the following: F(x,y) = (1 - e^-x)(1 - e^-y)? The answer given is e^-(x+y).

The partial derivative with respect to what variable?

(del F)/(del x) = e^{-x} * (1 - e^{-y}) <--- This is the partial derivative of F(x,y) wrt x.
(del F)/(del y) = (1 - e^{-x}) * e^{-y} <--- This is the partial derivative of F(x,y) wrt y.

neither of which is e^-(x+y). I'm not sure what you are asking for. -Dan

Last edited by topsquark; September 22nd 2006 at 06:05 PM. Reason: Fixed a sign error

Ah! I didn't look at second derivatives. Okay, we can do this two ways:

(del F)/(del x) = e^{-x} * (1 - e^{-y})

(Apparently I missed a minus sign in my original response. I'll fix it.) So taking del/(del y) of this:

(del^2 F)/(del x del y) = e^{-x} * e^{-y} = e^{-x - y}

or

(del F)/(del y) = (1 - e^{-x}) * e^{-y}

So taking del/(del x) of this:

(del^2 F)/(del y del x) = e^{-x} * e^{-y} = e^{-x - y}

If you are having a problem doing the partial derivatives, remember that when you do a "del/(del x)" you hold all variables other than x constant. So, for example, in (del F)/(del x) we hold the y constant, so effectively we are taking the derivative of (1 - e^{-x}) * constant = e^{-x} * constant. If you need more help than that with the derivatives, just let me know and I'll work up a quick tutorial for you. -Dan
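Dan's computation can be checked numerically with a central finite-difference approximation of the mixed partial (the helper function below is our own, not from the thread):

```python
import math

def F(x, y):
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

def mixed_partial(f, x, y, h=1e-4):
    """Central finite-difference approximation of d^2 f / (dx dy)."""
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

x, y = 0.7, 1.3
approx = mixed_partial(F, x, y)
exact = math.exp(-(x + y))   # the claimed answer e^{-(x+y)}
assert abs(approx - exact) < 1e-6
```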
Mplus Discussion >> CFA v EFA

Anonymous posted on Wednesday, March 17, 2004 - 7:12 pm I have recently received responses from reviewers to a paper we have submitted. We have developed a new multi-dimensional instrument, consisting of 4 scales, one of which is pre-existing, one completely new and the other 2 adapted (with modifications and new items) from pre-existing scales. None of the 4 scales have appeared together before. We conducted an EFA to reveal our factor structure (although we obviously had a pretty good idea of what our factors should be). Both reviewers have responded saying that we should have conducted a CFA. From all our reading this would appear to us to be wrong advice, although we can't claim to be experts on factor analysis. I would be very interested in and grateful for your opinion.

Linda K. Muthen posted on Thursday, March 18, 2004 - 8:39 am I would start with an EFA to weed out bad items and factors, then do an EFA in a CFA framework to obtain standard errors for the factor loadings, and then do a simple structure CFA. These steps are outlined in the Day 1 handout from our short courses and can be purchased.

Mark A. Sembower posted on Thursday, February 09, 2006 - 8:29 am We are currently using an EFA to find and remove poor items followed by a CFA to confirm the structure (WLSMV estimator in both cases); however, we would also like to use an EFA in a CFA framework. Unfortunately, when we run an EFA in the CFA framework, our model does not converge. It may be due to the fact that our model is very complex (5 factors, 40 items, 2451 subjects), but is it possible that items on different scales may be causing the model to not converge (i.e., we have some items on 3-point, 4-point, and 5-point Likert scales)? One more question related to convergence: What was the specific reason for choosing the initial convergence criterion of 0.00005 (for WLSMV)? I have noticed that if I adjust it slightly, say to 0.00001, the model will converge.
I guess I'm just wondering if it is okay to adjust this value or if I need some theoretical justification for doing so. Also, we have a handful of items that look good in the EFA, but after removing bad items and running the initial CFA, they no longer have significant loadings. Does it make sense to remove these items and run a second CFA or should they be left in? Thanks for all your help.

bmuthen posted on Friday, February 10, 2006 - 7:37 pm Your EFA within CFA should work fine - perhaps you need to give more key items starting values of 1. You should follow the advice that we have in our new short course handout. Regarding convergence, your adjusted convergence criterion is stricter so I can't see why that would make it converge - probably something else is changed as well. Using the default is probably best. After removing the bad items, the EFA needs to be rerun to see that the model still holds up. Don't go straight to CFA.

Mark A. Sembower posted on Monday, February 13, 2006 - 12:25 pm Dr. Muthen, Thanks for your help. As you describe in your handouts, we have our EFA within a CFA framework set up for m=5 factors and m-squared=25 restrictions, but the model does not converge. When you said that we may need to give more key items starting values of 1, were you referring to adding restrictions (anchor terms, etc.) in addition to the 25 we already have? Regarding our CFA: We have come across a few items that load around .4 in our revised EFA, but do not load at all (.15-.17) in our CFA. The same items are included in the revised EFA and the CFA. Could this be due to "bad" items or is it some inherent difference between EFA and CFA methodologies? Thanks again,

Linda K. Muthen posted on Monday, February 13, 2006 - 3:31 pm The default starting value in Mplus for factor loadings is one. You need to start all small loadings at zero and start the key ones at one.
For example, verbal BY y1-y10*0 y2*1; This starts them all at zero and then overrides the zero by one for y2. If this does not help, send your input, data, output, and license number to support@statmodel.com.

Tor Neilands posted on Saturday, February 25, 2006 - 10:22 am My experience has been that many reviewers are uncomfortable with using CFA in the service of exploratory factor analysis work and related model development. I agree 100% with you, Bengt, that CFA models and associated features (e.g., modification indices; greater control over model specification) make CFAs a useful tool in conducting exploratory scale development work. Can you suggest references to cite in response to reviewer critiques of the practice of using CFA in the service of exploratory factor analysis and scale development? Also, how does one obtain the handout referenced in your post of February 10th? With many thanks and best wishes, Tor Neilands

bmuthen posted on Saturday, February 25, 2006 - 11:24 am I think of the "EFA within CFA" approach of Joreskog (1969) Psychometrika as an EFA. But you get the added advantages of CFA in that you have SEs, Modification Indices, and can correlate residuals. The Short Courses handouts can be ordered off the web site. This topic is covered in Day 1 of our Short Courses.

Tor Neilands posted on Saturday, February 25, 2006 - 5:51 pm When will the short course handouts reflect the syntax and features in version 4? Or perhaps they have already been updated?

bmuthen posted on Sunday, February 26, 2006 - 5:58 am Most is fine as is, but some simplifications, extensions, and new examples will be included gradually for the training sessions in May, June, and November, and will then be made available.

Shang-Min Liu posted on Thursday, January 31, 2008 - 9:35 am I have a problem running a CFA after deciding on the EFA structure. I got a warning from the CFA output. I did check the correlation matrix via OUTPUT: TECH4, and I think the problem is from factor2 as well.
However, I don't know how to fix it. Could negative factor loadings cause a problem? (My factor2 has a negative loading.) Or what other restrictions do I need to write when a factor in my CFA model has a negative loading?

Linda K. Muthen posted on Thursday, January 31, 2008 - 11:05 am The problem would not be a negative factor loading but a negative residual variance or one of the other problems noted in the message. If you can't figure this out, please send your input, data, output, and license number to support@statmodel.com.

Erich Studerus posted on Friday, April 25, 2008 - 4:36 am I'm trying to do an EFA within a CFA framework and I have the same problem as Mr. Sembower, that is, the parameter estimation process does not converge. I also have quite a complex model with 3 factors and 66 items. Sample size is about 580. I followed exactly the specifications that are outlined in the handout and in the book "Confirmatory Factor Analysis for Applied Research" by Timothy A. Brown. When I specify the same model with the same dataset in Amos, it converges without problems. I also tried to set all the starting values for factor loadings to zero and the key ones to one. Unfortunately, it didn't help. I also increased the number of key loadings with starting values of one up to five for each factor, but still to no avail. What else could I do to help the parameter estimation process to converge?

Linda K. Muthen posted on Friday, April 25, 2008 - 5:48 am It sounds like there are defaults that differ between Amos and Mplus. Check that you have the same number of and the same free parameters in your model. If this does not help, please send your input, data, output, and license number to support@statmodel.com. Include the output from the EFA.

Calvin D. Croy posted on Wednesday, June 23, 2010 - 2:38 pm A journal article published the factor loadings on two orthogonal factors found doing an EFA on the items of a scale using data from a minority population.
I want to see if the same factor loadings would be found in another minority population. I thought about just running a CFA to check the fit of a model where the loadings were constrained to be the same as those in the article. But the loadings from the article are correlations, whereas those from the CFA are regression coefficients, if my understanding is correct. If this is true, how do I constrain the CFA loadings to equal the published values? If the published EFA factor loading for variable X on the first factor was .654, I can't just say in Mplus: Factor1 by x@.654 since the .654 was a correlation and Mplus will set the regression coefficient for X at .654.

Linda K. Muthen posted on Wednesday, June 23, 2010 - 4:19 pm If you standardize your variables, free all factor loadings, and fix the factor variances to one, you will be in the EFA metric. That being said, I think this is too stringent a test. I would instead do an EFA on the new data and see if the factor solution is close but not exact.

Calvin D. Croy posted on Thursday, June 24, 2010 - 9:35 am Thank you, Linda. I appreciate your suggestion about doing an EFA on my data and seeing whether the factor solution is close to the published solution. I was hoping to take advantage of CFA to 1) get CFI, RMSEA, and ChiSquare values for the fit of the published structure to my data, and 2) formally test whether another factor solution in the population I'm studying has superior fit to the published structure. Is this often done with CFA? It seems like this type of confirmation would often arise when testing scales in new populations. All the literature I have on testing factorial invariance across groups assumes the analyst has the data for all the groups, not just published values. I'm confused by your suggestion to "free all factor loadings". I don't think I want them freed -- I want them to be fixed at values that correspond to the published EFA factor loadings (correlations).
That way I can assess the fit of the published structure to my data using the CFI and RMSEA values. If I follow your directions to "be in the EFA metric", it sounds like I will just be doing an EFA using CFA syntax. That would be interesting since I would be able to test the loadings for significance. However, wouldn't the CFI, RMSEA, and Chisquare just assess the fit of whatever structure was found in my data, rather than tell me specifically about the fit of the published structure?

Linda K. Muthen posted on Thursday, June 24, 2010 - 5:22 pm EFA gives Chi-Square, RMSEA, etc.

Calvin D. Croy posted on Friday, June 25, 2010 - 9:05 am Thank you Linda for this information. However, I don't understand how it answers the question in the last line of my previous post: "wouldn't the CFI, RMSEA, and Chisquare just assess the fit of whatever structure was found in my data, rather than tell me specifically about the fit of the published structure?". Say my EFA reveals Chisquare p = .0356, RMSEA = .032, CFI = .975. Wouldn't that just indicate that the factors found in my data (totally ignoring the published factor loadings) fit my data well? The stats would seem to say nothing about how well the published factor loadings fit my data. If the above is correct, it appears the only way to tell how well the factor structure of some scale items published for one population fits another population is to run an EFA on data from the new population and eyeball how closely the computed loadings match those published for the reference population. Is that right? If yes, this seems like a process that would have been available in 1980 or even 1970; I was hoping that CFA would offer a way to test whether the observed dissimilarity in loadings was beyond what might occur by chance.

Linda K. Muthen posted on Friday, June 25, 2010 - 10:08 am There are many ways to approach what you want to do. I don't think there is one right way and one wrong way.
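The relation between the two loading metrics raised above can be illustrated directly: an unstandardized loading is the regression coefficient of the indicator on the factor, and the standardized loading is that coefficient rescaled by sd(factor)/sd(indicator), i.e. a correlation. A simulation sketch (this is our own illustration, not Mplus output; the data-generating values are made up):

```python
import math
import random

random.seed(42)
n = 200_000
f = [random.gauss(0, 1) for _ in range(n)]          # factor, variance 1
lam = 0.8                                           # true loading
y = [lam * fi + random.gauss(0, 0.6) for fi in f]   # indicator

mf = sum(f) / n
my = sum(y) / n
cov = sum((yi - my) * (fi - mf) for yi, fi in zip(y, f)) / n
var_f = sum((fi - mf) ** 2 for fi in f) / n
var_y = sum((yi - my) ** 2 for yi in y) / n

b = cov / var_f                      # unstandardized (regression) loading
r = cov / math.sqrt(var_f * var_y)   # standardized loading = correlation

assert abs(b - lam) < 0.02
# the two metrics differ exactly by the ratio of standard deviations
assert abs(r - b * math.sqrt(var_f / var_y)) < 1e-12
```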
Robert Urban posted on Monday, November 08, 2010 - 9:29 pm Dear Dr. Muthén, I have divergent findings in CFA and EFA analyses. I started the analysis with a CFA with WLSMV estimation because we had a theoretical measurement model (ordinal data) which we wanted to test. The factors are correlating quite highly, around .60-.70. The level of fit is good. However, one reviewer asked for an EFA analysis, so we performed it. We have the same three-factor structure in both analyses, but some items seem to be represented strongly in another factor than they should be. Why do these two analyses have divergent findings? What would you suggest as a next step?

Linda K. Muthen posted on Tuesday, November 09, 2010 - 9:37 am Simple structure CFA is a much more restricted model than EFA. It seems you have fixed the cross-loading to zero in the CFA without compromising model fit. I would check to see if it is significant.

yangyang posted on Wednesday, March 14, 2012 - 8:57 pm How do I run a program in Mplus using covariance input? Thank you!

Linda K. Muthen posted on Thursday, March 15, 2012 - 6:47 am See Example 13.1 in the Mplus User's Guide.

yangyang posted on Thursday, March 15, 2012 - 6:31 pm Thank you! The following program cannot give the values of H1 and H2 in the output when using covariance input. Why? Thank you!

DATA: FILE IS cov.txt;
nobservations = 10000;
VARIABLE: NAMES ARE y1-y3;
MODEL: f1 BY y1-y3*(p1-p3);
y1-y3 (a1-a3);

Linda K. Muthen posted on Thursday, March 15, 2012 - 7:03 pm Please send the output and your license number to support@statmodel.com so I can understand the problem.
Anirban’s Angle: Top Inequalities for a PhD student

Contributing Editor Anirban DasGupta writes: It is the mark of an instructed mind, said Aristotle, not to seek exactness when only an approximation of the truth is possible. Delicate and classy, still, the nature of mathematics is such that quantities of intrinsic importance often cannot be evaluated in a simple or explicit form. So one opts for the next best thing. Bound it from above or below by something simpler and explicit. Inequalities form an integral part of the theory and practice of mathematical sciences. One we see in high school is that the irrational number $\pi<\frac{22}{7}$. There are countless inequalities; some are beautiful, some highly useful, some both. Which ones should a PhD student in mathematical statistics know? To get a finger on my colleagues’ pulse, I took a small poll. I asked Saugata Basu, Rabi Bhattacharya, Burgess Davis, Peter Hall, Iain Johnstone, B.V. Rao, Yosi Rinott, Philip Stark, Sara van de Geer, and Jon Wellner. Of course, the choices differed. As an experiment in innocuous merriment, I chose my favorites. My collection is embarrassingly biased by at least three factors: inequalities that I at least know, those I have personally seen being applied, and liked—either the application or the inequality itself. My one-page limit keeps me from stating all the inequalities, and so I only mention them by name or descriptively. Perhaps it would be useful to have them precisely stated, proved, each illustrated with one good application, and made publicly available in some platform. Here then, cerebrating, is a list of inequalities I would wish to know, if I were a graduate student working on statistical theory today. They are generally grouped by topics: analysis, matrices, probability, moments, limit theorems, statistics.
1. Cauchy–Schwarz
2. Jensen
3. Hölder and triangular
4. Fatou
5. Bessel
6. Hausdorff–Young
7. Basic Sobolev inequality in three dimensions only
8. Frobenius
9. Sylvester
10. Determinant bounds, e.g., Hadamard
11. Kantorovich
12. Courant–Fischer
13. Boole’s inequality, from both directions
14. Chebyshev and Markov
15. Bernstein
16. Hoeffding in the Rademacher case, 1963
17. Bounds on Mills ratio from both directions
18. Upper tail of Binomial and Poisson
19. Slepian’s lemma, 1962
20. Anderson’s inequality on probabilities of symmetric convex sets, 1955
21. Rosenthal, 1970
22. Kolmogorov’s basic maximal inequality
23. Basic Berry–Esseen in one dimension
24. Le Cam’s bound on Poisson approximations (Le Cam, 1960)
25. DKW with a mention of Massart’s constant (Massart, 1990)
26. Bounds on expectation of normal maximum from both directions
27. Comparison lemma on multinormal CDFs (Leadbetter, Lindgren, and Rootzén, 1983)
28. Talagrand (as in 1995, Springer)
29. Inequality between Hellinger and Kullback–Leibler distance
30. Cramér–Rao
31. Rao–Blackwell (which is an inequality)
32. Wald’s SPRT inequalities

Truly going back to my student days, I recall how useful matrix inequalities were in that period, when linear inference was such an elephant in the room. Inequalities on CLTs and metrics played pivotal roles in the sixties, and then again, as the bootstrap and later, MCMC, emerged. Concentration inequalities came to the forefront with the advent of empirical process theory, and then as high dimensional problems became important. It seems as though the potential of analytic inequalities in solving statistical and probabilistic problems hasn’t yet been efficiently tapped. The recent book by Peter Bühlmann and Sara van de Geer (2011) has many modern powerful inequalities. There are of course new editions of the classics, e.g., Hardy, Littlewood and Pólya (1988), Marshall, Olkin and Arnold (2011).
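Several entries on the list are easy to verify numerically, for instance Cauchy–Schwarz (item 1) and Jensen for the convex function exp (item 2). A small check, as our own illustration:

```python
import math
import random

random.seed(1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Cauchy-Schwarz: <u,v>^2 <= <u,u><v,v> for random vectors
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(5)]
    v = [random.uniform(-1, 1) for _ in range(5)]
    assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-12

# Jensen for exp (convex): exp(E[X]) <= E[exp(X)], here for the
# empirical distribution of a Gaussian sample
xs = [random.gauss(0, 1) for _ in range(10_000)]
mean = sum(xs) / len(xs)
assert math.exp(mean) <= sum(math.exp(x) for x in xs) / len(xs)
```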
Quite possibly, on another day I would include some other phenomenal inequalities, and drop some that I chose today. Can anyone vouch that Efron–Stein (1981), Gauss (for unimodal distributions), FKG (Fortuin, Kasteleyn, Ginibre, 1971), Chernoff ’s variance inequality (1981), or a basic prophet or log-Sobolev inequality, or even a basic Poincaré, need not be in the essential list? Defining what is the most useful or the most beautiful is about the most hopeless task one can have. Beauty and use are such indubitably personal choices. We have, in front of us, an ocean of remarkable inequalities. You can’t cross the sea, said Nobel Laureate Poet Tagore, merely by standing and staring at the water. I figure I need to jump!
Does Infinity Exist?

The position that this work (SMN / IST) takes is that quantities can be arbitrarily large but they must always have a definite finite value. This was the original mathematical position too, but over time the concept of infinity has been usefully applied in mathematics, so the question of its actual existence has been largely ignored in favour of a pragmatic approach. In the 19th century the consensus among mathematicians began to shift, and now in the 20th century there is wide-scale belief in the existence of actual infinities, although still no proof; indeed, all the concrete evidence indicates that they cannot exist, but it remains useful in mathematics, so mathematicians retain their belief in its existence. This would suggest that mathematicians should check their axioms, but such a reformation of modern mathematics is unlikely to occur unless there is a compelling need to do so, because the logistics and the upheaval of such a reformation would be vast and it would be disruptive to many careers and traditions.

The following comments are quoted from (Infinity: You Can't Get There From Here): "Since no sensible magnitude is infinite, it is impossible to exceed every assigned magnitude; for if it were possible there would be something bigger than the heavens." (Aristotle) Aristotle distinguished between the potential infinite and the actual infinite. The natural numbers, he would say, are potentially infinite because they have no greatest member. However, he would not allow that they are actually infinite, as he believed it impossible to imagine the entire collection of natural numbers as a completed thing.
He taught that only the potential infinite is permissible to thought, since any notion of the actual infinite is not "sensible." So great was Aristotle's influence that more than 2,000 years later we find the great mathematician Karl Friederich Gauss admonishing a colleague, "As to your proof, I must protest most vehemently against your use of the infinite as something consummated, as this is never permitted in mathematics. The infinite is but a figure of speech . . . ." (Karl Friederich Gauss) Nonetheless, long before Gauss's time, cracks had begun to appear in the Aristotelian doctrine. Galileo (b. 1564) had given the matter much thought, and noticed the following curious fact: if you take the set of natural numbers and remove exactly half of them, the remainder is as large a set as it was before. This can be seen, for example, by removing all the odd numbers from the set, so that only the even numbers remain. By then pairing every natural number n with the even number 2n, we see that the set of even numbers is equinumerous with the set of all natural numbers. Galileo had hit upon the very principle by which mathematicians in our day actually define the notion of infinite set, but to him it was too outlandish a result to warrant further study. He considered it a paradox, and "Galileo's Paradox" it has been called ever since. As the modern study of mathematics came into full bloom during the seventeenth and eighteenth centuries, more and more mathematicians began to sneak the notion of an actual infinity into their arguments, occasionally provoking a backlash from more rigorous colleagues (like Gauss). The English mathematician John Wallis (b. 1616) was the first to introduce the "love knot" or "lazy eight" symbol for infinity that we use today, in his treatise Arithmetica infinitorum, published in 1665.
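Galileo's pairing of n with 2n can be exhibited on any finite window of the naturals (a toy illustration of the one-to-one correspondence, not a proof about infinite sets):

```python
# pair each natural n with the even number 2n
pairs = {n: 2 * n for n in range(1, 1001)}
evens = set(pairs.values())

assert len(evens) == len(pairs)           # the map is injective
assert all(m % 2 == 0 for m in evens)     # every image is even
assert evens == set(range(2, 2001, 2))    # and every even in range is hit
```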
Ten years later, Isaac Newton in England and Gottfried Leibnitz in Germany (working independently) began their development of the calculus, which involved techniques that all but demanded the admission of actual infinities. Newton side-stepped the issue by introducing an obscure notion called “fluxions,” the precise nature of which was never made clear. Later he changed the terminology to “the ultimate ratio of evanescent increments”. The discovery of the calculus opened the way to the study of mathematical analysis, in which the issue of actual infinities becomes very difficult indeed to avoid. All through the nineteenth century, mathematicians struggled to preserve the Aristotelian doctrine, while still finding ways to justify the marvelous discoveries which their investigations forced upon them. Finally, in the early 1870's, an ambitious young Russian/German mathematician named Georg Cantor upset the applecart completely. He had been studying the nature of something called trigonometric series, and had already published two papers on the topic. His results, however, depended heavily on certain assumptions about the nature of real numbers. Cantor pursued these ideas further, publishing, in 1874, a paper titled, On a Property of the System of all the Real Algebraic Numbers. With this paper, the field of set theory was born, and mathematics was changed forever. Cantor completely contradicted the Aristotelian doctrine proscribing actual, “completed” infinities, and for his boldness he was rewarded with a lifetime of controversy, including condemnation by many of the most influential mathematicians of his time. This reaction stifled his career and may ultimately have destroyed his mental health. It also, however, gained him a prominent and respected place in the history of mathematics, for his ideas were ultimately vindicated, and they now form the very foundation of contemporary mathematics. 
"One can without qualification say that the transfinite numbers stand or fall with the infinite irrationals; their inmost essence is the same, for these are definitely laid out instances or modifications of the actual infinite." (Georg Cantor) Cantors definition of an infinite set was: "A set is infinite if we can remove some of its elements without reducing its size." End of quotes from (Infinity: You Can't Get There From Here) The article goes on to explain cantors set theory and cardinal numbers but the gist of it is "To a present day mathematician, infinity is both a tool for daily use in his or her work, and a vast and intricate landscape demanding to be explored." (Infinity: You Can't Get There From Here) Following comments quoted from (Infinity: The Encyclopedia of Astrobiology Astronomy and Spaceflight): By confining their attention to potential infinity, mathematicians were able to address and develop crucial concepts such as those of infinite series, limit, and infinitesimals, and so arrive at the calculus, without having to grant that infinity itself was a mathematical object. Yet as early as the Middle Ages certain paradoxes and puzzles arose, which suggested that actual infinity was not an issue to be easily dismissed. These puzzles stem from the principle that it is possible to pair off, or put in one-to-one correspondence, all the members of one collection of objects with all those of another of equal size. Applied to indefinitely large collections, however, this principle seemed to flout a commonsense idea first expressed by Euclid: the whole is always greater than any of its parts. For instance, it appeared possible to pair off all the positive integers with only those that are even: 1 with 2, 2 with 4, 3 with 6, and so on, despite the fact that positive integers also include odd numbers. 
Galileo, in considering such a problem, was the first to show a more enlightened attitude toward the infinite when he proposed that "infinity should obey a different arithmetic than finite numbers." Much later, David Hilbert offered a striking illustration of how weird the arithmetic of the endless can get. Imagine, said Hilbert, a hotel with an infinite number of rooms. In the usual kind of hotel, with finite accommodation, no more guests can be squeezed in once all the rooms are full. But "Hilbert's Grand Hotel" is dramatically different. If the guest occupying room 1 moves to room 2, the occupant of room 2 moves to room 3, and so on, all the way down the line, a newcomer can be placed in room 1. In fact, space can be made for an infinite number of new clients by moving the occupants of rooms 1, 2, 3, etc., to rooms 2, 4, 6, etc., thus freeing up all the odd-numbered rooms. Even if an infinite number of coaches were to arrive, each carrying an infinite number of passengers, no one would have to be turned away: first the odd-numbered rooms would be emptied as above, then the first coach's load would be put in rooms 3^n for n = 1, 2, 3, ..., the second coach's load in rooms 5^n for n = 1, 2, ..., and so on; in general, the people aboard coach number i would empty into rooms p^n, where p is the (i+1)th prime number. Such is the looking-glass world that opens up once the reality of sets of numbers with infinitely many elements is accepted. That was a crucial issue facing mathematicians in the late nineteenth century: were they prepared to embrace actual infinity as a number? Most were still aligned with Aristotle and Gauss in opposing the idea. But a few, including Richard Dedekind and, above all, Georg Cantor, realized that the time had come to put the concept of infinite sets on a firm logical foundation. Cantor accepted that the well-known pairing-off principle, used to determine whether two finite sets are equal in size, is just as applicable to infinite sets.
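The room reassignments in Hilbert's story are just functions on the positive integers, and a small sketch makes the bookkeeping concrete. Any code can of course only check a finite prefix of the rooms, so this is an illustration, not a proof; the function names are mine, not Hilbert's:

```python
# Sketch of the room-reassignment maps from Hilbert's Grand Hotel.
# Rooms are modeled as positive integers; we can only ever inspect a
# finite prefix of them.

def one_new_guest(room: int) -> int:
    """Every current guest moves from room n to room n + 1, freeing room 1."""
    return room + 1

def infinitely_many_new_guests(room: int) -> int:
    """Every current guest moves from room n to room 2n, freeing every odd room."""
    return 2 * room

# After the second shift, no two guests collide (the map is injective on the
# prefix we check) and every occupied room is even, so all odd rooms are free.
occupied = {infinitely_many_new_guests(n) for n in range(1, 1001)}
assert len(occupied) == 1000                     # no collisions
assert all(r % 2 == 0 for r in occupied)         # odd rooms are empty
```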
It followed that there really are just as many even positive integers as there are positive integers altogether. This was no paradox, he realized, but the defining property of infinite sets: the whole is no bigger than some of its parts. He went on to show that the set of all positive integers, 1, 2, 3, ..., contains precisely as many members – that is, has the same cardinal number, or cardinality – as the set of all rational numbers (numbers that can be written in the form p/q, where p and q are integers). He called this infinite cardinal number aleph-null, "aleph" being the first letter of the Hebrew alphabet. He then demonstrated, using what has become known as Cantor's theorem, that there is a hierarchy of infinities of which aleph-null is the smallest. Essentially, he proved that the cardinal number of the set of all subsets – all the different ways of selecting elements – of a set of size aleph-null is a strictly bigger form of infinity, which he called aleph-one. Similarly, the cardinality of the set of subsets of a set of size aleph-one is a still bigger infinity, known as aleph-two. And so on, indefinitely, leading to an infinite number of different infinities. Cantor believed that aleph-one was identical with the total number of mathematical points on a line, which, astonishingly, he found was the same as the number of points on a plane or in any higher n-dimensional space. This infinity of spatial points, known as the power of the continuum, c, is the cardinality of the set of all real numbers (all rational numbers plus all irrational numbers). Cantor's continuum hypothesis asserts that c = aleph-one, which is equivalent to saying that there is no infinite set with a cardinality between that of the integers and that of the reals. Yet, despite much effort, Cantor was never able to prove or disprove his continuum hypothesis. We now know why – and the reason strikes at the very foundations of mathematics.
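The claim that the rationals have the same cardinality as the positive integers rests on an explicit pairing. A small sketch of the usual diagonal enumeration (the function name and cut-off are mine) shows how each positive rational receives an integer index, 1 → 1/1, 2 → 1/2, 3 → 2/1, and so on:

```python
from fractions import Fraction

def rationals(limit: int):
    """Enumerate positive rationals p/q along the diagonals p + q = 2, 3, ...
    Duplicates such as 2/4 (already seen as 1/2) are skipped; at most
    `limit` distinct values are returned."""
    seen, out = set(), []
    s = 2                       # current diagonal: all p/q with p + q == s
    while len(out) < limit:
        for p in range(1, s):
            f = Fraction(p, s - p)
            if f not in seen:
                seen.add(f)
                out.append(f)
                if len(out) == limit:
                    break
        s += 1
    return out

# The pairing begins 1/1, 1/2, 2/1, 1/3, 3/1, ... and every positive
# rational eventually appears exactly once.
assert rationals(5) == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1),
                        Fraction(1, 3), Fraction(3, 1)]
```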
In the 1930s, Kurt Gödel showed that it is impossible to disprove the continuum hypothesis from the standard axioms of set theory. Three decades later, Paul Cohen showed that it cannot be proven from those same axioms either. Such a situation had been on the cards ever since the emergence of Gödel's incompleteness theorems. But the independence of the continuum hypothesis was still unsettling, because it was the first concrete example of an important question that provably could not be decided either way from the universally accepted system of axioms on which most of mathematics is built. Currently, the preference among many mathematicians is to regard the continuum hypothesis as false, simply because of the usefulness of the results that can be derived that way. As for the nature of the various types of infinities, and the very existence of infinite sets, these depend crucially on which number theory is being used. Different axioms and rules lead to different answers to the question of what lies beyond all the integers. This can make it difficult or even meaningless to compare the various types of infinities that arise and to determine their relative size, although within any given number system the infinities can usually be put into a clear order. Certain extended number systems, such as the surreal numbers, incorporate both the ordinary (finite) numbers and a diversity of infinite numbers. However, whatever number system is chosen, there will inevitably be inaccessible infinities – infinities that are larger than any of those the system is capable of describing. End of quotes from (Infinity: The Encyclopedia of Astrobiology, Astronomy and Spaceflight). What these quotes state is that there is a definite structure of different kinds of infinities, and that these are useful in mathematics. But does infinity actually exist as a realisable phenomenon, or is it just a useful mathematical tool?
Or is it that if certain axioms are accepted then certain types of infinity arise, but these are applicable only within those axiomatic contexts? Is mathematics just exploiting potential infinities and exploring their axiomatic structure, or are there actual infinities that exist in reality and that we might someday discover? So far no actual infinities have been found, and all the evidence suggests that an actual infinity is impossible. Quantum physics relies on things being quantised, so there is no infinite resolution or continuum; and relativity relies on finite maximum values for velocity, mass, energy and so on, so there are no objects with infinite velocity, mass, energy, etc. It seems likely that only in the realm of pure mathematics can the idea of infinity be entertained. In the context of actual, manifest, realisable quantities, things seem much more like the situation in a computer, where all phenomena have definite resolution and size. One can never create an infinitely large file, because that would require an infinite amount of time and infinite computational resources such as memory. In my own work, which uses computational concepts to model reality, I take the position that phenomena can be arbitrarily large and detailed, but they always have a definite, finite value. This allows for potential infinity but totally disallows actual infinity. Given that any set must be actually represented using data (e.g. binary data), no set can be infinitely large, and if one removes any members of the set then the cardinality (size) of the set is reduced. So any representable set cannot be an infinite set, and any infinite set cannot be actually represented. Furthermore, in the context of computational metaphysics, representation is equivalent to existence. If something is represented and takes part in the overall simulation of the universe then it exists in that universe, but if it cannot be represented then it cannot exist.
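The representability argument above can be made concrete: in any programming language, a set held in memory is a finite object, and removing a member strictly reduces its cardinality, which is exactly the failure of Cantor's criterion for infinite sets. A minimal sketch:

```python
# Any set we can actually represent in memory is finite, and removing a
# member always shrinks it -- the opposite of Cantor's criterion, under
# which an infinite set can lose elements without changing size.

s = set(range(1, 1001))   # a definite, finite prefix of the positive integers
n = len(s)

s.remove(500)
assert len(s) == n - 1    # cardinality strictly drops, so s is not infinite
```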
So if actual infinities exist then there cannot be any discrete computational foundation to reality; but so far no actual infinities have ever been discovered. Even within the domain of pure mathematics, infinities can only exist because they are symbolically represented, never actually represented. No one has ever written out an infinite number of integers, thereby actually representing the set of integers. It is only ever referred to, never fully represented. If one required sets to be fully represented then mathematics could not operate on actual infinite sets; it could only operate on potentially infinite sets, which always have finite representations (e.g. {1, 2, 3}) but which are unlimited in their length. Such sets are arbitrarily large but always have a definite, finite size. Modern mathematics is totally dependent upon the assumed existence of actual infinities, but the existence of these can neither be proven nor disproven by mathematics. This leaves modern mathematicians in the position of defending their belief in the existence of actual infinities and discrediting any opposing ideas. But besides all of this: do actual infinities exist? All the evidence seems to suggest that they exist only within the context of modern mathematics, which would seem to suggest that modern mathematics should return to its axioms and see where the problem lies. Other related links:
Matrices and consumption
March 18th 2008, 04:59 PM #1
Ok so I have this study question that really has me stumped. I can't find anything in the textbook to help me and I am really stuck. The question says there is an economy that has three sectors: agriculture, manufacturing, and labour.
$1 agriculture = 0.5 agriculture, 0.20 manufacturing, and 1.00 labour
$1 manufacturing = 0.8 manufacturing and 0.4 labour
$1 labour = 0.25 agriculture and 0.10 manufacturing
It first asks what the consumption matrix C is. I assume that it would simply be tabulating this out, so it would be something like this (I am not sure on this though):

      ag    ma    la
ag    0.5   0.2   1
ma    0     0.8   0.4
la    0.25  0.10  0

Then it says to find a production schedule that satisfies a demand of $100 for agriculture, $500 for manufacturing and $700 for labour. (This part I have no idea about. I think you multiply a (100, 500, 700) matrix by your consumption matrix, but I don't know what that means or does; I am probably wrong.) It then asks which industries are profitable and whether the economy is productive, etc. I assume that to find productivity I simply take (I - C)P = 0 and solve for P (C being the consumption matrix I made above, I being an identity matrix of the same size, etc.) and then figure it out from there, based on the values I get. Am I on the right track? I have spent a long time on this question but there is absolutely no similar example in my textbook and no outside help, so I really don't know if I am close or way off, or thinking in the right direction but doing something wrong. Could someone explain what needs to be done for a question like this? It would be greatly appreciated; I have spent many hours looking this over only to be stumped.
Follow Math Help Forum on Facebook and Google+
Math Forum: Teacher2Teacher - Q&A #7205
From: Kimberley (for Teacher2Teacher Service)
Date: Nov 01, 2001 at 22:04:15
Subject: Re: Who invented math?
Your message makes it sound as though you have a list of a group of people, but I didn't see one included with your message. Nevertheless, the answer to "Who invented math?" is both very simple and very complex. At the simplest level, no one invented math. This is because math has many branches, or areas within it. You may have heard of algebra and geometry, but they just scratch the surface. There are also trigonometry, calculus, topology, number theory, probability, statistics,... Many of these are related to one another. You may study history in school, but did you know there is also mathematics history? Most of the math you do in school was developed over long periods of time, during which mathematicians and others noticed patterns and worked on solving problems. Even something which seems simple to you, such as finding the area of a circle, wasn't just "invented" one day. Early on, various cultures had their own, different ways of approximating the area of the circle. Some of the methods, techniques, etc. we use in math are named for the people who worked with them but who didn't necessarily invent them. One of the most familiar of those is the Pythagorean Theorem. The Greek mathematician Pythagoras did not invent the formula relating the lengths of the legs of a right triangle to its hypotenuse, but he did work with it. Logarithms and calculus, on the other hand, were "invented". Read about John Napier, Isaac Newton, and Gottfried Leibniz. There are whole books written about the history of mathematics. There are websites devoted to the topic as well. Try a search using "math history".
-Kimberley, for the T2T service
Kids.Net.Au - Encyclopedia > Asymmetric key cryptography
Asymmetric-key cryptography, also known as public-key cryptography, is a form of cryptography in which asymmetric key algorithms are used for encryption, digital signatures, etc. In these algorithms, one key is used to encrypt a message and another is used to decrypt it, or one key is used to sign a message and another is used to verify the signature. The key used to decrypt or sign must be kept secret ('private') and cannot (so algorithm designers hope) be derived from the public key, which is used to encrypt or verify, and which may be known to anyone. Several asymmetric key algorithms have been developed, beginning in the 1970s. One widely-used algorithm is RSA. It uses exponentiation modulo a product of two large primes to encrypt and decrypt. The public key exponent differs from the private key exponent, and determining one exponent from the other is hard without knowing the primes. Another is ElGamal (developed by Taher ElGamal), which relies on the discrete logarithm problem. A third is a group of algorithms based on elliptic curves. Note that there is nothing special about asymmetric key algorithms. There are good ones, bad ones, insecure ones, etc. None have been proved 'secure' in the sense the one-time pad has, and some are known to be insecure (i.e., easily broken). Some have the public key / private key property in which one of the keys is not deducible from the other, or so it is believed by knowledgeable observers. Some do not, it having been demonstrated that knowledge of one key gives an attacker the other. As with all cryptographic algorithms, these must be chosen and used with care. Public-key cryptography can be used for authentication and privacy. A user can encrypt a message with their private key and send this message on. The fact that it can be decrypted by the public key provides assurance that the user sent it.
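As an illustration of the modular exponentiation described above, here is a toy RSA round-trip with deliberately tiny primes. The specific numbers are a standard textbook example, not from this article, and the scheme is insecure at this size by construction:

```python
# Toy RSA with tiny primes -- illustrative only, never secure at this size.
p, q = 61, 53
n = p * q                    # 3233: the public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: e * d = 1 (mod phi); Python 3.8+

m = 65                       # a message encoded as a number smaller than n
c = pow(m, e, n)             # encrypt with the public key (n, e)
assert pow(c, d, n) == m     # decrypt with the private key (n, d)

s = pow(m, d, n)             # "sign" with the private key
assert pow(s, e, n) == m     # anyone holding the public key can verify
```

The two assertions mirror the article's two uses: privacy (encrypt with the public key, decrypt with the private key) and authentication (transform with the private key, verify with the public one).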
Similarly, public-key cryptography can also be used to ensure privacy: a message which is encrypted with the public key can only be decrypted by a person in possession of the private key. Examples of well-regarded asymmetric key algorithms include RSA, ElGamal, and the elliptic-curve schemes mentioned above. See also: GNU Privacy Guard, Pretty Good Privacy, Secure Sockets Layer, Secure Shell, pseudonymity, Quantum cryptography, Key escrow, public key infrastructure (PKI).
All Wikipedia text is available under the terms of the GNU Free Documentation License
the first resource for mathematics
Wave propagation in cracked elastic slabs and half-space domains - TBEM and MFS approaches. (English) Zbl 1195.74090
Summary: In this paper, the traction boundary element method (TBEM) and the method of fundamental solutions (MFS), formulated in the frequency domain, are used to evaluate the 3D scattered wave field generated by 2D empty cracks embedded in an elastic slab and a half-space. Both models overcome the thin-body difficulty posed when the classical BEM is applied. The crack exhibits arbitrary cross-section geometry and null thickness. In neither model are the horizontal formation surfaces discretized, since appropriate fundamental solutions are used to take them into consideration. The TBEM models the crack as a single line. The singular and hypersingular integrals that arise during the TBEM model's implementation are computed analytically, which overcomes one of the drawbacks of this formulation. The results provided by the proposed TBEM model are verified against responses provided by the classical BEM models derived for the case of an empty cylindrical circular cavity. The MFS solution is approximated in terms of a linear combination of fundamental solutions, generated by a set of virtual sources simulating the scattered field produced by the crack, using a domain decomposition technique. To avoid singularities, these fictitious sources are not placed close to the crack, and the use of an enriched function to model the displacement jumps across the crack is required. The performances of the proposed models are compared and their limitations are shown by solving the case of a C-shaped crack embedded in an elastic slab and a half-space domain. The applicability of these formulations is illustrated by presenting snapshots from computer animations in the time domain for an elastic slab containing an S-shaped crack, after applying an inverse Fourier transformation to the frequency domain computations.
74J25 Inverse problems (waves in solid mechanics) 74S15 Boundary element methods in solid mechanics 74S30 Other numerical methods in solid mechanics 74R10 Brittle fracture 65N80 Fundamental solutions, Green’s function methods, etc. (BVP of PDE)
Data Analysis/Statistics
August 3rd 2009, 09:19 PM
The following shows how the value of a car depreciates each year. Find the trade-in value of a car for each of 5 yr. The percents given are based on the selling price of the new car...
a. What is the approximate trade-in value of a $12,000 car after 1 yr if the selling price of a new car is 70%?
b. How much has a $20,000 car depreciated after 5 yr if the selling price of a new car is 30%?
c. What is the approximate trade-in value of a $20,000 car after 4 yr if the selling price of a new car is 35%?
d. Dani wants to trade in her car before it loses half its value. When should she do this?
My answers are: a. $8,400; b. $6,000; c. $13,000; d. Within 2 years.
Do any of my answers appear to be correct? Thank you.

August 3rd 2009, 11:08 PM
Please reword these so that they make sense; the phrase "if the selling price of a new car is 70%" in this context makes no sense. Presumably the $12,000 was the price paid for the car when new, and the selling price was 70% of the new price, or $8,400.

August 3rd 2009, 11:15 PM
For (d), 2 years is a good guess if we are restricted to integer numbers of years, but in fact there is insufficient information to set up a model that would allow us to discriminate between 1, 2 or 3 years.

August 3rd 2009, 11:21 PM
Thank you. I apologize about the confusion, but the percentages represented the percent of the selling price of the new car, which was on a graph. When I previously graphed the information and posted it, it didn't come out right. I tried to word it the best way I could, but thanks for the advice. (Nod)
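For what it's worth, the flat-percentage reading of the question (trade-in value = percentage of the new price) can be sketched and checked directly. The percentages are taken from the question text as given, since the original graph did not survive:

```python
def trade_in(new_price: float, percent: float) -> float:
    """Trade-in value as a flat percentage of the new-car price."""
    return new_price * percent / 100

# (a) 70% of $12,000 is $8,400, matching the poster's answer.
assert trade_in(12_000, 70) == 8_400

# (b) at 30%, a $20,000 car is worth $6,000, so under this reading it has
# depreciated $14,000; the poster's $6,000 is the remaining value instead.
assert 20_000 - trade_in(20_000, 30) == 14_000

# (c) 35% of $20,000 is $7,000; the poster's $13,000 matches the
# depreciation, not the trade-in value, under this reading.
assert trade_in(20_000, 35) == 7_000
```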
money in an account earns interest at a continuous rate of 8 per year Number of results: 11,222 College level math Suppose you deposit $100 in an account that earns 0.5% each month. You make no withdrawals from the account and deposit no more money into the account. How much money will you have in the account after 4 years? Saturday, January 24, 2009 at 7:37pm by Steph a mother wants to $9000 for her son's future education. She invested a portion of the money in a bank certificate(CD account) which earns 4% and the reminder in a savings bond that earns 7%. If the total interest earned after one year is $540, how much money was invested in ... Monday, September 3, 2012 at 11:35pm by KaRon Kent invested $5000 in a retirement plan. He allocated x dollars of the money to a bond account that earns 4% interest per yr and the rest to a traditional account that earns 5% interest per yr. Write an expression that represents the amount of money invested in the ... Thursday, February 17, 2011 at 11:42pm by --- Kent invested $5000 in a retirement plan. He allocated x dollars of the money to a bond account that earns 4% interest per yr and the rest to a traditional account that earns 5% interest per yr. Write an expression that represents the amount of money invested in the ... Thursday, February 17, 2011 at 11:43pm by --- A mother wants to invest $9,000.00 for her son’s future education. She invents a portion of the money in a bank certificate of deposit (Cd account) which earns 4% and the remainder in a saving bond that earns 7%. If the total interest earned after one year is $540.00, how much... Wednesday, March 9, 2011 at 8:51pm by MARIAH Suppose you deposit $400 in an account that earns 0.75 percent each quarter. You make no withdrawals from the account and deposit no more money into the account. How much money will accumulate after 2.5 years? Monday, February 16, 2009 at 8:06pm by Naimy 6 grade algebra cindy earns 6% simple interest each year on her savings account. 
if she has $850 in her account and leaves the money in there for 5 years. how much interest will the money earn? what is the new balance in cind's account? Friday, November 12, 2010 at 5:41pm by sierra or this one tracey has 4,300 in his savings account. the money in the account earns 4.6% intrest each year assuming he makes no deposits, how much money will he have in his account atthe end of the year? i dont know how to figure that Monday, November 30, 2009 at 4:33pm by Chris00 An investor has 7000 to invest in two accounts. The first account earns 8% annual simple intrest, and the second account earns 13% annual simple intrest .How much money Should be invested in each account so that the simple intrest earned is 500? Sunday, August 11, 2013 at 4:28pm by Anonymous a mother wants to invest 5000 for her sons future education. She invests a portion of the money in a bank certificate of deposit which earns 4% and the remainder in a savings bond that earns 7%. If the total interest earned after one year is $300.00,how much money was invested... Sunday, January 20, 2013 at 4:50pm by Justin A mother wnats to invest $6,000 for her son's future education. She invests a portion of the money in a bank certificate of deposit which earns 4% and the remaainder is a savings bond that earns 7%. If the total interest earned after one year is $360.00, how much money was ... Saturday, November 17, 2012 at 6:23pm by Mark algebra II i just placed $1500 in an account which earns 8% per year compunded quartely. how much money will be in the account in 20 years? how long will it tke for this account ot have $7000 in it? Monday, March 18, 2013 at 10:42pm by Anonymous Kelly plans to put her graduation money into an account and leave it there for 4 years while she goes to college. She receives $750 in graduation money that she puts it into an account that earns 4.25% interest compounded semi-annually. How much will be in Kellys account at ... 
Sunday, September 29, 2013 at 6:53pm by joe Kelly plans to put her graduation money into an account and leave it there for 4 years while she goes to college. She receives $750 in graduation money and she puts it into an account that earns 4.25% interest compounded semi-annually. How much will be in kelly's account at ... Friday, February 24, 2012 at 12:47pm by Anonymous Kent invested $5,000 in a retirement plan.He allocated X dollars of the money to a bond account that earns 4% interest per year and the rest to a traditional account that earn 5% interest per year. 1.Write an expression that represents the amount of money invested in the ... Tuesday, May 11, 2010 at 10:26pm by Shadow An initial investment of $480 is invested for 4 years in an account that earns 16% interest, compounded quarterly. What is the amount of money in the account at the end of the period? Friday, May 7, 2010 at 10:14am by Cherie An initial investment of $1000 is appreciated for 8 years in an account that earns 9% interest, compounded annually. Find the amount of money in the account at the end of the period Wednesday, June 8, 2011 at 4:27pm by Lisa An initial investment of $1240 is appreciated for 17 years in an account that earns 8% interest, compounded continuously. Find the amount of money in the account at the end of the period. Friday, April 8, 2011 at 7:17pm by allison college math An initial investment of $1000 is appreciated for 4 years in an account that earns 6% interest, 2) compounded semiannually. Find the amount of money in the account at the end of the period. Sunday, March 31, 2013 at 4:45pm by lisa pre calc An initial investment of $12,000 is appreciated for 5 years in an account that earns 7% interest, compounded quarterly. Find the amount of money in the account at the end of the period Thursday, June 10, 2010 at 8:53pm by joanie math models Tanisha wants to have $1000 in her bank account in 5 years. 
How much money should she deposit if her account earns 6% interest which is compounded 2 times per year? Monday, December 12, 2011 at 5:11pm by jazmin You deposit 172 dollars in an account every year for 9 years that earns 9 percent annual interest. How much money is in your account 9 years from now? Friday, February 26, 2010 at 3:57pm by Anonymous A self –employed person deposits $3,000 annually in a retirement account(called a Keogh account)that earns 8 percent. a) How much will be in the account when the individual retires at the age of 65 if the savings program starts when the person is age 40? b) How much additional... Sunday, April 3, 2011 at 11:33am by Anonymous A self-employed person deposits $3,000 annually in a retirement account (called a Keogh account) that earns 8 percent. a. How much will be in the account when the individual retires at the age of 65 if the savings program starts when the person is age 40? b. How much ... Thursday, November 7, 2013 at 6:06pm by Anonymous v(t)= Ce^(k(square root(t)) Suppose that the dealer, who is 25 years old, decides to sell the card at time , sometime in the next 40 years: 0< or equal to t < or equal to 40. At that time , he’ll invest the money he gets for the sale of the card in a bank account that ... Wednesday, May 23, 2012 at 10:57am by victoria algebra 1 MArta has $6000 to invest. She puts x dollars of this money into a savings account that earns 2% interest per year. With the rest, she buys a certificate of deposit that earns 4% per year. i need 2 different equations. [using a=prt and/or a=p(1+r/n)^nt] Wednesday, February 27, 2008 at 8:37pm by Trixie At what constant, continuous annual rate should you deposit money into an account if you want to have $1,000,000 in 25 years? The account earns 5% interest, compounded continuously. Round to the nearest dollar. Monday, May 16, 2011 at 1:06pm by CJ MArta has $6000 to invest. She puts x dollars of this money into a savings account that earns 2% interest per year. 
With the rest, she buys a certificate of deposit that earns 4% per year. i need 2 different equations. [using a=prt and/or a=p(1+r/n)^nt] Wednesday, February 27, 2008 at 9:19pm by trixie In A Year, Seema Earns RS 1,50,000. Find The Ratio Of Money That Seema Earns To The Money She Saves And Money That She Saves To The Money She Spends? Saturday, December 19, 2009 at 12:56pm by naisha A self-employed person deposits $3000 annually in a retirement account (called a Keogh account) that earns 8 percent. a. How much will be in the account when the individual retires at the age of 65 if the savings program starts when the person is age 40? b. How much additional... Wednesday, January 27, 2010 at 11:42pm by Bella you deposit $ 900 in a savings account that earns 4%interest coumpounded once a year and has no service charges. you donot make any deposits or withdrawals to the account for two years. at the end of two years, after the second year's interest has been added to the account by ... Tuesday, October 28, 2008 at 3:15pm by isabel Bob has $4000 invested in an account that earns 4.75% simple interest. He has another account that earns 6.5% simple interest. How much is invested in the 6.5% account if it earned $425.10? Tuesday, October 5, 2010 at 5:49pm by Anonymous a bank account earns 7% annual interest compounded continuously. you deposit $10,000 in the account, and withdraw money continuously from the account at a rate of $1000 per year. a. write the differential equation for the balance, B, in the account after t years b. what is the... Monday, March 14, 2011 at 5:04pm by Anonymous Samantha opened a savings account and deposited some money into the account. The account pays an annual simple interest rate of 5%. After 9 years, the interest earned on the account was $1,800. How much money did Samantha deposit in the account? Sunday, December 30, 2012 at 2:37pm by Andrea Suppose Kevin and Jill both deposit $4000 into their personal accounts. 
If Kevin’s account earns 5% simple interest annually and Jill’s earns 5% interest compounded annually, how much will each account balance show at the end of 5 years? Calculate the difference between each ... Tuesday, April 24, 2012 at 1:13pm by Dee child is now 3 years old - dad opens account with $10,000 it earns 4.5% annual intrest. a) construct formula (A(t)=Ao(a)^t) b) how much money will be in account when he is 10 years old c) if dad wants account to grow to 100,000 when he is 18 then what should original amount to be Monday, April 5, 2010 at 1:54pm by cc A friend opens a savings account by depositing $1000. He deposits an additional $75 into the account each month. a. What is a rule that represents the amount of money in the account as an arithmetic sequence? b. How much money is in the account after 18 months? Show your work. Wednesday, January 8, 2014 at 10:08pm by Shakira A friend opens a savings account by depositing $1000. He deposits an additional $75 into the account each month. a. What is a rule that represents the amount of money in the account as an arithmetic sequence? b. How much money is in the account after 18 months? Show your work. Wednesday, January 8, 2014 at 10:23pm by Shakira A friend opens a savings account by depositing $1000. He deposits an additional $75 into the account each month. a. What is a rule that represents the amount of money in the account as an arithmetic sequence? b. How much money is in the account after 18 months? Show your work. Wednesday, January 8, 2014 at 10:23pm by Shakira A mother wants to invest $ 12,000.00 for her sons education. She invests a portion of the money in a bank certificate of deposit (CD Account) which earns 4% and the remain saving bond that earns 7%. If the total interest earned after a year is 720.00, How much invested in the ... 
Sunday, October 6, 2013 at 7:38am by Lisa v(t) = Ce^(k√t). Suppose that the dealer, who is 25 years old, decides to sell the card at time t, sometime in the next 40 years: 0 ≤ t ≤ 40. At that time, he'll invest the money he gets for the sale of the card in a bank account that ... Wednesday, May 23, 2012 at 4:34pm by allison Kim has money in a savings account that earns an annual interest rate of 4.1%, compounded monthly. What is the effective rate of interest on Kim's account? Round to the nearest hundredth of a Wednesday, April 2, 2014 at 7:42pm by Lynn Earl Watkins is ready to retire and has saved up $250,000 for that purpose. He places all of this money into an account which will pay him annual payments for 20 years. How large will these annual payments be if the account earns 17% compounded annually? Tuesday, December 3, 2013 at 5:34pm by Lynda How much money must you deposit into an account that earns 3% monthly to have $1000 after 4 years? Thursday, January 10, 2013 at 4:57pm by judy Marta has $6000 to invest. She puts x dollars of this money into a savings account that earns 2% interest per year. With the rest, she buys a certificate of deposit that earns 4% per year. I'm guessing it's monthly. I need to know the equation for the amount of money T Marta ...
Wednesday, February 27, 2008 at 9:19pm by trixie How much would you need to deposit in an account now in order to have $20,000 in the account in 4 years? Assume the account earns 5% interest. Tuesday, July 17, 2012 at 10:09pm by Anonymous How much money should be deposited today in an account that earns 6.5% compounded monthly so that it will accumulate to $8,000.00 in three years? Sunday, November 25, 2012 at 11:32am by V Delila has $1200 in a savings account and in a checking account. The ratio of money in savings to money in checking is 3 to 2. Use a system of equations to find how much money is in each account. Thursday, April 12, 2012 at 9:18pm by unknown Simple & Compounding Interest I am SO STUCK on this problem... PLEASE HELP ASAP!!! Suppose Kevin and Jill both deposit $4000 into their personal accounts. If Kevin's account earns 5% simple interest annually and Jill's earns 5% interest compounded annually, how much will each account balance show at the ... Friday, April 27, 2012 at 10:32am by Need Help!!! How much money must Andrea invest for 2 years in an account that earns an annual simple interest rate of 8% if she wants to earn $300 from the investment? Friday, January 31, 2014 at 4:37pm by sheri How much money will I need to have at retirement so I can withdraw $60,000 a year for 20 years from an account earning 8% compounded annually? a. How much do you need in your account at the beginning? b. How much total money will you pull out of the account? c. How much of that... Wednesday, July 18, 2012 at 1:25am by colin Suppose a young couple deposits $700 at the end of each quarter in an account that earns 7.1%, compounded quarterly, for a period of 6 years. After the 6 years, they start a family and find they can contribute only $200 per quarter.
If they leave the money from the first 6 ... Saturday, April 30, 2011 at 7:35pm by cant figure this out :( At the end of each year a self-employed person deposits $1,500 in a retirement account that earns 10 percent annually. a) How much will be in the account when the individual retires at the age of 65 if the contributions start when the person is 45 years old? b) How much ... Thursday, April 25, 2013 at 10:01pm by carolyn Suppose you deposit $400 in an account that earns 0.75 percent each quarter. You make no withdrawals from the account and deposit no more money into the account. How much money will accumulate after 2.5 years? Sunday, February 7, 2010 at 6:42pm by Anonymous I have been staring at this problem forever, and can't seem to dig it up in my book. Please help! Two competing banks are trying to attract customers. (a) Ally Bank has an account which earns 25% interest every 10 years. Assuming the interest is compounded weekly, find both the ... Tuesday, January 8, 2013 at 8:56pm by fawn I want to open a savings account that earns 1.6% simple interest yearly. I want to earn exactly $288 in interest after 3 years. How much money should I deposit? Tuesday, January 8, 2013 at 3:01pm by Liz GEOMETRY ??? Why would he pay interest on money that is in his own savings account? Did you mean he "earns" interest? I will assume that is what you meant. Amount = 500(1.05)^10 = 814.45 Monday, December 5, 2011 at 9:15pm by Reiny Suppose that the dealer, who is 25 years old, decides to sell the card at time t, sometime in the next 40 years: 0 ≤ t ≤ 40. At that time, he'll invest the money he gets for the sale of the card in a bank account that earns an interest rate of r, ... Wednesday, May 23, 2012 at 12:30pm by illy Bert is planning to open a savings account that earns 1.6% simple interest yearly. He wants to earn exactly $240 in interest after 3 years. How much money should he deposit?
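The recurring retirement-deposit questions (for instance $1,500 deposited at the end of each year at 10 percent, from age 45 to 65) are future-value-of-annuity problems: FV = P·((1+r)^n − 1)/r. A short Python sketch of that formula, with illustrative names; the specific figures are taken from the question above:

```python
def annuity_future_value(payment, rate, periods):
    # Future value of an ordinary annuity: equal payments made at the
    # END of each period, each then compounding until the horizon.
    return payment * ((1 + rate) ** periods - 1) / rate

# 20 end-of-year deposits of $1,500 at 10% (ages 45 through 65)
fv = annuity_future_value(1500, 0.10, 20)  # roughly $85,912.50
```

With a single period the formula collapses to one undiscounted payment, which is a handy sanity check.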
Saturday, March 27, 2010 at 8:34pm by melanie Bert is planning to open a savings account that earns 1.6% simple interest yearly. He wants to earn exactly $160 in interest after 2 years. How much money should he deposit? Wednesday, January 25, 2012 at 4:08pm by shannon Bert is planning to open a savings account that earns 1.6% simple interest yearly. He wants to earn exactly $384 in interest after 3 years. How much money should he deposit? Monday, November 5, 2012 at 8:49pm by Michael 5. Bert is planning to open a savings account that earns 1.6% simple interest yearly. He wants to earn exactly $192 in interest after 2 years. How much money should he deposit? Saturday, March 9, 2013 at 10:08am by jay For an account that earns interest compounded annually, find the balance on the account to the nearest cent. Monday, January 21, 2013 at 2:54pm by Anonymous Bert is planning to open a savings account that earns 1.6% simple interest yearly. He wants to earn exactly $128 interest after 2 years. How much money should he deposit? Pls help me! Tuesday, November 25, 2008 at 7:27pm by gio If $635 is invested in an account that earns 9.25%, compounded annually, what will the account balance be after 21 years? Sunday, July 24, 2011 at 10:06pm by Anonymous everest online 2. At the end of each year a self-employed person deposits $1,500 in a retirement account that earns 10 percent annually. a) How much will be in the account when the individual retires at the age of 65 if the contributions start when the person is 45 years old? b) How much ... Thursday, January 30, 2014 at 1:01pm by shunda John took all his money out of his savings account. He spent $50 on a radio and 3/5 of what remained on presents. Half of what was left he put back in his checking account, and the remaining $35 he donated to charity. How much money did John originally have in his savings ... Saturday, June 26, 2010 at 7:43am by jessie need help with "you have $15 in your bank account. you spend $11 on a hat.
then you mow 3 lawns for $20 each and deposit the money into your bank account, write and solve an expression (using order of operations) to determine how much money is in your account." I keep getting... Thursday, January 5, 2012 at 5:56pm by kb Redo problem 8 in section 6.3 of your textbook (page 288) assuming that the parents need $105000 in 9 years for college expenses, and that the bank account earns 9.25% compounded continuously. Round your answers to the nearest cent. (You may need to compute your answers to 4 ... Wednesday, December 1, 2010 at 1:42am by Brooke If $3000 is deposited at the end of each half year in an account that earns 6.2% compounded semiannually, how long will it be before the account contains Saturday, April 30, 2011 at 1:46pm by yeah If $795 is invested in an account that earns 11.75%, compounded annually, what will the account balance be after 27 years? Saturday, September 10, 2011 at 10:33pm by MARY If $870 is invested in an account that earns 24.25%, compounded annually, what will the account balance be after 30 years? Wednesday, June 13, 2012 at 11:13am by linda If $1000 is invested in an account that earns 11.75%, compounded annually, what will the account balance be after 12 years? Sunday, November 4, 2012 at 9:38am by sara If you deposit $700 a month in a savings account that earns 4% interest compounded monthly, how many months does it take to get $3500, and what is the balance in the account? Sunday, March 24, 2013 at 5:15pm by Anonymous If you deposit $700 a month in a savings account that earns 4% interest compounded monthly, how many months does it take to get $3500, and what is the balance in the account?
Sunday, March 24, 2013 at 6:42pm by Anonymous algebra 2 How much money will Betty need, to the nearest cent, to invest in a certificate of deposit to have $20,000 after 20 years if the account earns a simple annual interest rate of 3.5%? Thursday, May 26, 2011 at 11:45pm by sandiie 7th grade math Scott deposits $1,000 in an account that earns 5% simple interest. What will the account be worth after two years? Thursday, January 28, 2010 at 8:13pm by Anonymous Suppose you invest a certain amount of money in an account that pays 11% interest annually, and $4000 more than that in an account that pays 12% annually. How much money do you have in each account if the total interest for a year is $940? Tuesday, November 19, 2013 at 11:28am by Anonymous If $3000 is deposited at the end of each half year in an account that earns 6.2% compounded semiannually, how long will it be before the account contains $130,000? Saturday, April 30, 2011 at 1:47pm by yeah Anthony earns an allowance for doing work at home. The first day he earns $1, $2 for the next two days, $3 for the next three days, and so on. If Anthony keeps earning money this way, on what day will he earn $6? (2) Anthony saves all the money he earns to buy a $59.99 ... Tuesday, November 20, 2012 at 4:11pm by zack Danny invested $5,000 into his savings account for college when he was 13 years old. If the account earns 2.5% interest every year, how much interest will Danny have earned on his investment, and how much money will Danny have for college by the time that he is 18 years old? Tuesday, December 4, 2012 at 7:41pm by Mackenna Foundations Math 12 Mark wants to buy a car in 15 months, when he graduates. He estimates the car he wants will cost $12 500. Mark has just invested $7500 in a GIC earning 4% compounded quarterly. He also has a savings account that earns 2.45%, compounded monthly. How much should he deposit in ...
Tuesday, February 18, 2014 at 2:06am by Leah Sam deposits a total of $3500 every three months to the bank, which earns 6.5% p/a. After five years how much will he have in his account? Ren wants to buy a new car that costs $26255, so he would deposit money into his savings account with an interest rate of 3.9% p/a ... Wednesday, July 13, 2011 at 10:19am by Kate If $570 is invested in an account that earns 12.75%, compounded annually, what will the account balance be after 11 years? (Round to the nearest cent.) Monday, March 12, 2012 at 8:16pm by Anonymous If $695 is invested in an account that earns 21.75%, compounded annually, what will the account balance be after 15 years? (Round your answer to the nearest c Friday, May 18, 2012 at 11:24pm by Monic If $625 is invested in an account that earns annual interest of 5.5%, compounded semiannually, what will the account balance be after 8 years? (Round your answer to the nearest cent.) Monday, May 28, 2012 at 3:57pm by Diane Aidan has $7565 in his checking account. He invests $5000 of it in an account that earns 3.5% interest compounded continuously. What is the total amount of his investment after 3 years? Monday, December 3, 2012 at 11:47pm by Student09 Sam opened a money-market account that pays 2% simple interest. He started the account with $7,000 and made no further deposits. When he closed the account, he had earned $560 in interest. How long did he keep his account open? Thursday, October 31, 2013 at 7:39pm by Lexi Simple Interest The Johnsons have saved $45,000. They invest their money in a bank and their account earns 7.5% interest. How many years will it take to earn $74,000?
*PLEASE PROVIDE AN EXPLANATION* Monday, April 9, 2012 at 6:42pm by Marilyn If $690 is invested in an account that earns 20.75%, compounded annually, what will the account balance be after 25 years? (Round your answer to the nearest cent.) Tuesday, July 19, 2011 at 12:41am by Anonymous If $450 is invested in an account that earns annual interest of 3.5%, compounded semiannually, what will the account balance be after 15 years? (Round your answer to the nearest cent.) Saturday, November 26, 2011 at 11:12pm by Daniel algebra with application If $795 is invested in an account that earns annual interest of 5.5%, compounded semiannually, what will the account balance be after 5 years? (Round your answer to the nearest cent.) Sunday, November 27, 2011 at 2:29pm by Anonymous If $835 is invested in an account that earns annual interest of 4.5%, compounded semiannually, what will the account balance be after 13 years? (Round your answer to the nearest cent.) Sunday, April 29, 2012 at 9:31pm by Mike If $695 is invested in an account that earns 21.75%, compounded annually, what will the account balance be after 15 years? (Round your answer to the nearest cent.) Sunday, April 29, 2012 at 9:43pm by Mike
A polygon has an area of 100 square inches and one of its sides is 20 inches long. If a second similar polygon has an area of 36 square inches, what is the length of the corresponding side in the second polygon? 5 in 8 2/3 in 12 in 33 1/3 in
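For the similar-polygon question above, the key fact is that areas of similar figures scale with the square of the linear scale factor, so the corresponding side is 20·√(36/100) = 12 in. A one-line check in Python (the function name is just for illustration):

```python
import math

def corresponding_side(side1, area1, area2):
    # For similar polygons, area2/area1 = (side2/side1)**2,
    # so side2 = side1 * sqrt(area2/area1).
    return side1 * math.sqrt(area2 / area1)

side2 = corresponding_side(20, 100, 36)  # 12.0
```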
Portability GHC, Hugs (MPTC and FD) Stability stable Maintainer robdockins AT fastmail DOT fm Safe Haskell None This module defines a sequence adaptor Sized s. If s is a sequence type constructor, then Sized s is a sequence type constructor that is identical to s, except that it also keeps track of the current size of each sequence. All time complexities are determined by the underlying sequence, except that size becomes O( 1 ). Sized Sequence Type data Sized s a Source Sequence s => Monad (Sized s) Sequence s => Functor (Sized s) (Monad (Sized s), Sequence s) => MonadPlus (Sized s) (Functor (Sized s), MonadPlus (Sized s), Sequence s) => Sequence (Sized s) Eq (s a) => Eq (Sized s a) (Eq (Sized s a), Sequence s, Ord a, Eq (s a)) => Ord (Sized s a) (Sequence s, Read (s a)) => Read (Sized s a) (Sequence s, Show (s a)) => Show (Sized s a) (Sequence s, Arbitrary (s a)) => Arbitrary (Sized s a) (Sequence s, CoArbitrary (s a)) => CoArbitrary (Sized s a) Sequence s => Monoid (Sized s a) Sequence Operations Unit testing Other supported operations
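The idea behind the adaptor described above — spend O(1) extra space per sequence so that size becomes O(1) instead of O(n) — is language-agnostic. The toy class below is a sketch of that technique only; it is not Edison's actual API and the names are invented for illustration:

```python
class SizedSeq:
    """Wrap an underlying sequence, caching its length alongside it."""

    def __init__(self, items=()):
        self._items = list(items)
        self._size = len(self._items)  # the cached size

    def cons(self, x):
        # Every operation that changes the sequence also updates the
        # cached size, so size() never has to walk the structure.
        s = SizedSeq()
        s._items = [x] + self._items
        s._size = self._size + 1
        return s

    def size(self):
        return self._size  # O(1) by construction
```

All other costs are simply those of the underlying sequence, which is exactly the trade the Haddock text describes.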
Mathematical Movie Database MMDB−The Mathematical Movie Database via http://t.co/Csnhnk8GJn #mathchat — Vijay Krishnan (@bucharesttutor) December 22, 2013 I am always interested in finding mathematics in pop culture. The Mathematical Movie Database contains an extensive list of more than 800 movies and television shows that contain clips of mathematics. The site is maintained by Burkard Polster and Marty Ross. About ten years ago, on a whim, we began to collect movies containing mathematics. Now, as a consequence of that whim, we own a library of more than 800 movies on DVD, VHS, 16 mm, Laserdisc, and some strange thing called a CED video disc. The movies range from those expressly about mathematicians, to those that, for whatever reason, just happen to have a snippet of humorous mathematical dialogue. Over the years, we have found that it is not only professional mathematicians who find the fun in this cinematic mathematics. Just about everybody is charmed by Meg Ryan explaining Zeno’s paradox in I.Q., Danny Kaye singing about Pythagoras’s theorem in Merry Andrew, Lou Costello explaining to Bud Abbott why 7 x 13 =28 in In the Navy, and so on. They also give links to other pages about mathematical movies. For instance, Oliver Knill has a collection of movie clips containing mathematical content. They also link to the Mathematical Fiction page, mentioned on this blog by Maya Sharma (Sept 2012). Take some time to peruse these websites. Let me know what your favorite findings are. Enjoy! This entry was posted in Math, Math in Pop Culture, Mathematics in Society, Mathematics Online. Bookmark the permalink.
Mathematical hands

With MOOCs fast becoming teaching trend-du-jour in western universities, it is easy to imagine that all disciplines and all ways of thinking are equally amenable to information technology. This is simply not true, and mathematical thinking in particular requires hand-written drawing and symbolic manipulation. Nobody ever acquired skill in a mathematical discipline without doing exercises and problems him or herself, writing on paper or a board with his or her own hands. The physical manipulation by the hand holding the pen or pencil is necessary to gain facility in the mental manipulation of the mathematical concepts and their relationships. Keith Devlin recounts his recent experience teaching a MOOC course on mathematics, and the deleterious use by students of the typesetting package LaTeX for doing assignments: We have, it seems, become so accustomed to working on a keyboard, and generating nicely laid out pages, we are rapidly losing, if indeed we have not already lost, the habit—and love—of scribbling with paper and pencil. Our presentation technologies encourage form over substance. But if (free-form) scribbling goes away, then I think mathematics goes with it. You simply cannot do original mathematics at a keyboard. The cognitive load is too great. Why is this? A key reason is that current mathematics-producing software is clunky, cumbersome, finicky, and not WYSIWYG (What You See Is What You Get). The most widely used such software is LaTeX (and its relatives), which is a mark-up and command language; when compiled, these commands generate mathematical symbols. Using LaTeX does not involve direct manipulation of the symbols, but only their indirect manipulation. One has first to imagine (or indeed, draw by hand!) the desired symbols or mathematical notation, for which one then writes the appropriate generative LaTeX commands. Only when these commands are compiled can the user see the effects they intended to produce.
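The indirectness is easy to see in even a trivial example (the snippet below is mine, added for illustration, not Devlin's):

```latex
% What the user types -- commands, not symbols:
\[
  \int_0^1 x^2 \, dx = \frac{1}{3}
\]
% Only after compilation do the integral sign, the limits, and the
% built-up fraction appear; none of them are visible at the keyboard.
```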
Facility with pen-and-paper, by contrast, enables direct manipulation of symbols, with the pen-in-hand (eventually) being experienced as an extension of the user's physical body and mind, and not as something other. Expert musicians, archers, surgeons, jewellers, and craftsmen often have the same experience with their particular instruments, feeling them to be extensions of their own body and not external tools. Experienced writers too can feel this way about their use of a keyboard, but word-processing software is generally WYSIWYG (or close enough not to matter). Mathematics-making software is a long way from allowing the user to feel that they are directly manipulating the symbols in their head, as a pen-in-hand mathematician feels. Without direct manipulation, hand and mind are not doing the same thing at the same time, and thus – a fortiori – keyboard-in-hand is certainly not simultaneously manipulating concept-in-mind, nor is keyboard-in-hand simultaneously expressing or evoking ... I am sure that a major source of the problem here is that too many people – and especially most of the chattering classes – mistakenly believe the only form of thinking is verbal manipulation. Even worse, some philosophers believe that one can only think by means of words. Related posts on drawing-as-a-form-of-thinking here, and on music-as-a-form-of-thinking here. [HT: Normblog]
CS-551: Homework 1 -- Efficient CRCW PRAM algorithm
Efficient CRCW PRAM algorithm for finding the maximum of N numbers
• Suggest an O(log log N) CRCW PRAM algorithm to compute the maximum of N numbers using no more than O(N) processors.
• What is the best EREW PRAM algorithm you can come up with to compute the maximum of N numbers using no more than O(N) processors?
Hint: First, think about an efficient algorithm to find the maximum of n numbers using no more than O(n^2) processors, and then reduce the original problem to that one.
This document has been prepared by Professor Azer Bestavros <best@cs.bu.edu> as the WWW Home Page for CS-551, which is part of the NSF-funded undergraduate curriculum on parallel computing at BU. This page was created on October 3, 1994 and was last updated on October 4, 1994.
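The hint refers to the classic constant-time, O(n²)-processor CRCW algorithm: processor (i, j) compares a[i] with a[j] and marks the smaller element as a loser; after one parallel step, exactly the maximum remains unmarked. The Python below is a sequential simulation of that one parallel step, written only to illustrate the idea (the doubly nested loop stands in for the n² processors acting simultaneously):

```python
def crcw_max(a):
    n = len(a)
    loser = [False] * n
    # One parallel comparison step: "processor" (i, j) marks a[i] as a
    # loser if a[i] < a[j]. All concurrent writes store the same value
    # (True), which even the common-CRCW model permits.
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                loser[i] = True
    # Only maximal value(s) remain unmarked.
    return next(a[i] for i in range(n) if not loser[i])
```

The O(log log n) algorithm asked for in the exercise then follows by recursion: split the input into √n groups of size √n, find each group's maximum recursively in parallel, and combine the √n group maxima with this constant-time step using (√n)² = n processors, giving T(n) = T(√n) + O(1) = O(log log n).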
Simple math problem

September 16th 2012, 01:29 PM #1
Sep 2012
Simple math problem
Let's imagine I have different tonnages of products I am shipping: 15000 lbs to 40000 lbs in 5000 lbs increments. On average, shipping costs $2.81 per 100 lbs. How do I calculate how much each bracket is costing me? Is there a simple way to resolve this? I realize that there could be different combinations possible, but this is a real-life problem for me right now. Thanks for any help.

September 16th 2012, 01:37 PM #2
Re: Simple math problem
Each 5000 lb increment is 50 (100 lb) units, which costs 50($2.81) = $140.50 to ship. A 15000 lb shipment consists of three 5000 lb increments ... 3($140.50) = $421.50 to ship.

September 16th 2012, 03:40 PM #3
Sep 2012
Re: Simple math problem
Ok, I didn't phrase this correctly. The average cost is $2.81, but shipping 15000 or 40000 lbs costs the same, so the cost per 100 lbs is lower when shipping 40000 lbs than 15000 lbs. In other words, if I ship 15000 at $X per 100 lbs and 40000 at $Y per 100 lbs, on average I paid $2.81. So let me know if I have this right. I calculate the total cost of 40+15 (thousand lbs) at $2.81, then calculate the total tonnage. Then I find how many times I have the 15k lbs and 40k lbs in my total tonnage, and add those two numbers. Then I divide my total shipping cost by how many times I found 15k and 40k in my total tonnage and multiply by how many times I found 15k. So here is how it looks (in 100 lbs): 150+400 = 550. Cost is 2.81*550 = 1545.5. For the 150: (1545.5/5.04)*3.66 = 1124, or $7.49 per 100 lbs. The same calculation for the 400 gives $1.05 per 100 lbs. The average is $2.81 per cwt. Is this right?
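One way to sanity-check the arithmetic in this thread is the flat-fee reading of the problem: each shipment costs the same total amount, and the quoted $2.81/cwt is the blended average over 150 cwt + 400 cwt. Under that assumption, split the total cost evenly between the shipments and divide by each shipment's weight. This is a hedged sketch of that one interpretation only, not the only possible allocation, and the function name is illustrative:

```python
def per_cwt_costs(weights_cwt, avg_rate_per_cwt):
    # Assumption: every shipment costs the same flat fee, and the quoted
    # average rate equals total cost / total hundredweight.
    total_cwt = sum(weights_cwt)
    total_cost = avg_rate_per_cwt * total_cwt   # 2.81 * 550 = 1545.50
    flat_fee = total_cost / len(weights_cwt)    # 772.75 per shipment
    return [flat_fee / w for w in weights_cwt]

rates = per_cwt_costs([150, 400], 2.81)
# roughly $5.15/cwt for the 15,000 lb load and $1.93/cwt for the 40,000 lb load
```

Whatever split is used, the weight-weighted average of the per-cwt rates must come back to $2.81, which is a useful check on any candidate answer.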
st: R: RE: R: Merging observations of two variables

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

st: R: RE: R: Merging observations of two variables
From: "Carlo Lazzaro" <carlo.lazzaro@tiscalinet.it>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: R: RE: R: Merging observations of two variables
Date: Wed, 15 Oct 2008 09:57:49 +0200

Dear Martin,
I agree with you in full. I do not know whether codes 1 and 2 have some further useful meaning for Beatrice. Otherwise, I have probably missed some details of Beatrice's thread.
Kind Regards,

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Martin Weiss
Sent: Wednesday, October 15, 2008 9:27 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: R: Merging observations of two variables
Please try the following one: -------------begin example------------ set obs 10 g time=10*(uniform()) g code=1 in 1/10 replace code=2 in 3/6 g target_var=time if code==1 | code ==2 -------------end example------------ HTH and Kind Regards, -----Messaggio originale----- Da: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Per conto di Beatrice Crozza Inviato: mercoledì 15 ottobre 2008 0.39 A: statalist@hsphsun2.harvard.edu Oggetto: st: Merging obervations of two variables Dear all, maybe this is a simple question, but I don't know how to overcome my I have a variable time and I should generate another variables equal to time when another variable (code) is equal to 1 or 2. However, I don't know how to instruct stata for this. I tried with: gen time1= time if code==1 & code==2 but of course I will have all missing values. Thus, I created two variables: gen time1= time if code==1 gen time1= time if code==2 but I would like to merge the values of the two variables at this point. Any idea of how to do this? Thank you. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-10/msg00822.html","timestamp":"2014-04-17T09:53:37Z","content_type":null,"content_length":"8948","record_id":"<urn:uuid:00c81264-adaf-4ff3-a2a6-7c2c6e2a7a80>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching with the Internet K-12
MATH: THINKING MATHEMATICALLY ON THE INTERNET

Since 1989, when the National Council of Teachers of Mathematics (NCTM) first published Curriculum and Evaluation Standards for School Mathematics, a change in the way that we view mathematics education has been taking place. These changes continue today in the latest set of standards developed by the NCTM. The National Science Foundation has supported the development of several outstanding directories. In addition, there are sites with intriguing puzzles, software to download, weekly math challenges, biographies of famous women in math, mathematicians who answer your students' questions, lesson plans, a homework center for students, and even ol' Blue Dog who will answer any four-function math problem your primary grade students throw his way... by barking out the answer!

Teaching with the Internet: Elissa Morgan's Class
Numbers, a site located at Nottingham University in England
Math Counts and Math Word Problems are sites with new math problems each week that really challenged students to think.
Past Notable Women of Mathematics
History of Mathematics
What are the Chances
Helping Your Child Learn Math

Directories for Math Education
We encourage you to begin your explorations at one of the following directories:
• Eisenhower National Clearinghouse for Mathematics and Science Education
There are many great locations in this directory. We especially like Math Topics since it is neatly organized around the topics of math education. Other key locations include:
• Digital Dozen
• Classroom Calendar
• ENC Focus
• Lessons and Activities
The Internet provides many helpful resources that can assist you in developing an exciting and dynamic science program in your classroom, a program consistent with the National Science Education Standards' emphasis on thinking scientifically through inquiry.
The Math Forum maintains chat areas and listservs/mailing lists for students and teachers to share ideas and questions about math. Dr. Math is also on call to answer questions from you or your students. • Math Archives provides resources for mathematicians at all levels, not just K–12 educators. This site has an especially good collection of interactive math experiences and free software to download and use in your classroom. There is also a nice collection of links to web resources for math in the section Topics in Mathematics. • Math Section of Learning Resources This is a section of Canada's SchoolNet and is useful as you begin to explore links to math resources. At the present time, this list is not organized by topic or grade level but SchoolNet is quickly evolving and it looks like this will be an important resource. • Mathematics allows you to browse math resources by topic and grade level. All items are linked to the standards of Ohio, derived from the national standards. We especially like the grade browser, since it saves tremendous amounts of time by organizing links. All of the sites at this location contain resources you can use immediately in your classroom. • Math Virtual Library This site from Florida State University provides a collection of exceptional links to math resources. While it is neither topically nor developmentally organized, it contains highly useful resources for math educators. Keeping It Simple: Using Internet Workshop One location with weekly problems for students is Brain Teasers, sponsored by Houghton Mifflin Publishing. Each week, a new problem is presented by grade level. If students require it, they may click on a "Hint" or a "Solution" button. There is also an archive of problems used in the past. If you work at the middle school level, you may wish to use problems that appear on Japan's Junior High math tests to see how your students compare. Visit Japanese Math Challenge.
Or, pay a visit to Ole Miss Problems of the Week, a site featuring a weekly prize. Here are just a few ideas to get you started with your own Internet workshops: • Dr. FreeMath is an electronic mail project where one mathematics question per month is researched and answered by different classes. A great way to make math come alive in your classroom. • Biographies of Women Mathematicians contains a developing set of biographies. Invite students to read about one of these favorite women. Or, better yet, have them do research on a new person, share their work during Internet Workshop, and then send it to the manager of this site to be posted. • MacTutor History of Mathematics archive includes extensive links to sites with information about the history of math. A nice location to set up a weekly question related to math history. • The Fruit Game A simple interactive game, originally called Nim, with a hidden trick. See if your students can explain the trick in writing. Share your best guesses during Internet Workshop. • Interactive Mathematics Miscellany and Puzzles has an incredible list of links to games, activities, and puzzles that will keep your class busy all year with Internet Workshop! Set a bookmark! Using Internet Project In math, Internet Project is important because it encourages students to work together to develop the ability to think mathematically. Part of thinking mathematically is being able to communicate problem-solving strategies to others and to listen as others describe different approaches to proofs. │E-MAIL FOR YOU │ │ │ │From: Jodi Moore (jmoore@ms.spotsylvania.k12.va.us) │ │Subject: Using the Internet for Math │ │ │ │Brain Teasers │ │The Elementary Problem of the Week │ │MacTutor History of Mathematics Archive │ │ │ │The Internet is a powerful tool that will literally make all the difference in the world with students.
I can honestly say I am glad technology has arrived!│ │ │ │Jodi Moore, 7th grade │ │Freedom Middle School │ │Fredericksburg, VA 22407 │ Down the Drain is a project that connects both science and math. It has students measure the amount of water they use each day and then compare their use with others around the world. Graph Goodies is designed for K–2 students. It provides an early introduction into the power of numbers and analysis. Take a look and you will find many ideas that you can use right in your classroom. The Noon Day Project: Measuring the Circumference of the Earth is a project also used in science. Students recreate the classic experiment conducted by Eratosthenes over 2,200 years ago to determine the circumference of the Earth. Classes measure the length of a shadow cast by a meter stick, share this data electronically, use scale drawings and a spreadsheet to make comparisons, and use this information to estimate the circumference of the Earth. The Global Grocery List Project invites your students to enter grocery list data from their location and conduct a variety of analyses using a worldwide database of prices and foods contributed by other classes around the world. It is an outstanding way to integrate social studies with mathematics. Other projects may be joined by reviewing projects posted at the traditional locations on the Internet such as: Global SchoolNet's Internet Project Registry, Oz Projects, SchoolNet's Grassroots Project Gallery, and Intercultural Email Classroom Connections. Examples of projects that you may wish to post for others to join include: • Problems for Problem Solvers Invite other classrooms to join you in exchanging interesting math problems to solve together. Appoint one class each week to be the lead class on a rotating basis. The lead class is responsible for developing five problems or puzzles that are sent to participating classes who then have a week to return the answers.
Each week, another class becomes the lead class and circulates five new problems or puzzles for everyone to solve. • Heads or Tails? A simple probability project for younger students. Invite other classes to flip a coin from their country ten times and record the number of times that heads turn up. Repeat this ten times. Then have them send the results to your class. Record the data, write up the results, and send back a report with the percentage of times heads turns up during a coin toss. You may wish to invite participating schools to exchange the coins they flipped so that young children become familiar with different currency systems. • Graph your Favorite This project was completed by students in Grades 2, 4, and 6 classrooms in Michigan, Minnesota, Canada, Australia, and California. Classes voted each week on their favorite item in one category: pets, holidays, sports, school subjects, and food. Participating classes sent their data to the project coordinator who compiled the results each week and emailed them to everyone for further analysis. Students used the data in raw form to make their own spreadsheets, both manually and by computer. They also made computer bar graphs and pie graphs as well as manually drawn bar graphs. Then they analyzed the graphs and drew conclusions using the graphing website Create a Graph. Invite a group of participating classes to join you in working through the experiences at Statistics Every Writer Should Know and What Are the Odds. After completing these experiences, have each class develop group projects to analyze and report comparative statistics from their country, state, or nation on some category where numerical data is kept. Use the site Finding Data on the Internet to obtain these data. Share the reports and provide responses to each report. Using Internet Inquiry It is possible to organize Internet Inquiry around interesting sites that already exist on the Internet.
Examples include the very rich sites that exist for the following: • Kids Count Data Book lets students explore all types of demographic information. Their explorations will lead to important questions. The site includes exceptional tools for displaying results in graphs, maps, and rankings, as well as raw data files. Use this in your social studies classroom, too! • NationMaster provides students with important demographic statistics by nation, allowing them to compare countries around the world on a number of different variables (over 900!). It also provides graphing and presentation tools. Set a bookmark for Internet Inquiry! • Pi Mathematics provides a history of pi. Students can view a video, complete several different activities, calculate the best deal on several pizzas, and share their favorite pizza topping with students around the world. Have them write up a report on their experiences and share it with others. • A Fractals Lesson Make a fractal, learn how fractals are related to chopping broccoli, and view fractals on the Web. Have students prepare a poster session on fractals for the class including examples they printed out from sites on the Web. • Mega Mathematics From seemingly simple coloring problems that have perplexed cartographers for centuries, to the mathematics of knots, to issues of infinity, to graphs and games, this site has enough intriguing issues to keep any student thinking mathematically for a year. Another approach to Internet Inquiry is to encourage students to explore sites containing links to many different topics in mathematics. Direct students to any of the central sites described earlier in the chapter or explore some of these locations: • Knot a Braid of Links is a great math location for students. Each week a new site is selected in math. Previous links are available so that you can go down the list until you find something really interesting. • Interactive Mathematics Miscellany and Puzzles Have students explore this site.
Encourage them to report on the history behind the problem as well as the problem itself. They may wish to visit some of the history sites mentioned earlier to gather information. Using WebQuest If you teach middle grade math students, explore A Creative Encounter of A Numerical Kind. This humorous webquest will send your students on a voyage of number systems, determining which system would be best. Perhaps you would like your students to calculate the current cost of building a pyramid using modern materials and ancient methods. Take a look at Mr. Pitonyak's Pyramid Puzzle. If you teach math in Grades 4–8, explore Best Weather, a webquest for which you must develop a definition of good weather and then evaluate the weather statistics in several cities, making graphs for each, as you present the case for which city has the best weather. Student presentations are then displayed for Open House Night. If you teach Grades 6–12, complete World Shopping Spree. In this webquest, you find four common objects for sale in four different countries. Then, converting each cost into dollars, you determine which country has the best buy for each item. If you teach at the high school level, Baseball Prediction is useful. Students analyze statistical correlations between a team's winning percentage and several performance indicators in order to make a recommendation to management about which type of player to acquire: a home-run hitter, a high-average hitter, a hitter who bats in more runs, a base stealer, or a pitcher with a low earned-run average. If you have any baseball fans, this would be a big hit. A final example of a math webquest is Titanic: What Can Numbers Tell Us About Her Fatal Voyage. In this activity students evaluate several databases containing statistical information on survivors and deaths from this tragedy. Students use these data in the construction of spreadsheet tables, with appropriate graphics, to illustrate specific statistical conclusions.
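The kind of statistical conclusion the Titanic webquest asks for can be sketched in a few lines of code. The counts below are hypothetical placeholders, not the actual Titanic figures; students would substitute numbers from the databases the activity provides.

```python
# Illustrative sketch: survival rates by passenger class from a table of counts.
# The counts here are made-up stand-ins for the real data the webquest supplies.

counts = {
    # class: (survivors, deaths)
    "First": (200, 120),
    "Second": (115, 160),
    "Third": (175, 530),
}

def survival_rate(survivors, deaths):
    """Fraction of passengers in a group who survived."""
    return survivors / (survivors + deaths)

for pclass, (s, d) in counts.items():
    print(f"{pclass}: {survival_rate(s, d):.1%} survived")
```

The same rates could then be charted in a spreadsheet, which is how the activity has students present their conclusions.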
Visiting the Classroom: Rob Hetzel's Math Classes in Wisconsin "Life is good for only two things, discovering mathematics and teaching mathematics." You can see the true meaning of this quote by paying a visit to his excellent homepage. New Literacies in Math The Internet is an exceptional tool for helping our students to think mathematically as they develop the new literacies that are quickly becoming part of our evolving definition of mathematics in a world in which math, information, critical thinking, problem solving, communication, and the Internet are all converging. Additional Math Resources on the Internet 100th Day of School Celebration Here is a series of great activities to celebrate the magic behind the number 100. Send and receive a hundred emails, see how hundreds of jellybeans can make hundreds of thousands, and many more great, quick projects for your class. Additional Resources This is a teacher-friendly collection of great math resources for your classroom. Many useful links for teaching and learning. Set a bookmark! ArithmAttack How many basic math problems can you solve in one minute? Set a bookmark and see how much each student can improve his or her scores for addition, subtraction, multiplication, and division during the year. Arithmetic Software Do your students need new and fun ways to master basic arithmetic? Here is a central site for great freeware and shareware you can download right to your classroom computer. Set a bookmark! Blue Dog Can Count!! A classic! Blue dog answers all your basic math problems by barking out the answers. A fun site and especially useful in the primary grades for practicing basic math skills. Explorer: Mathematics The Explorer is a collection of educational resources including instructional software, lab activities, and lesson plans for K–12 mathematics and science education. A nice collection for busy teachers to obtain very useful resources. Set a bookmark!
Finding Data on the Internet Here is the place to get nearly every piece of statistical data on states, countries, cities, and other geographical and political units. A treasure trove for data snoopers and a great place for older students to explore during Internet Inquiry. Flashcards for Kids This location offers a set of flashcard experiences for your students for addition, subtraction, multiplication, and division at several different levels of difficulty. It also lets you run flashcards in a timed or untimed mode and keeps your score for you. A great resource for students learning their basic facts. Geometry Classroom Materials Are you looking for a range of Internet resources for your course in geometry? Here is your answer, a great collection of teaching tools from The Math Forum. Great Graph Match If you work with graphs, here is a great location for an Internet Workshop assignment. It may be set to make it harder or easier for your students as they work to solve the matching problems. Macalester College Problem of the Week If you are looking for math challenges for your high school classes, here is a wonderful site. Use each week's problem to run a brief Internet Workshop on Fridays to see if anyone has come up with the solution. Math Hunt Have your students complete a series of treasure hunts that help them to solve a math problem. A great Internet Workshop resource. Numbers in Search of a Problem Looking for real-world statistics for problems in your class? Here is a great site with statistics on everything from sports to population to the stock market. Practical Algebra Lessons from Purplemath A series of wonderful tutorials and then interactive Try-Its. A great set of assignments for Internet Workshop. Statistics Learn about central statistical concepts as you follow a fictional race between two candidates by reading news bulletins. Discover what a random sample is, what "margin of error" means, and why polls aren't always right.
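The "margin of error" idea that the Statistics site introduces can also be demonstrated directly. The sketch below uses the usual 95% formula for a sample proportion; the poll figures are made up for illustration and are not from the site.

```python
# Approximate 95% margin of error for a poll: 1.96 * sqrt(p * (1 - p) / n),
# where p is the sample proportion and n is the number of respondents.

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 1,000-person poll showing 52% support:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe:.1%}")  # roughly +/- 3 percentage points
```

Older students can vary n here and watch the margin shrink, which is exactly the intuition the site's fictional election race builds.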
Online Communities for Math Math Forum Newsletter An electronic newsletter from Drexel University. Homepage: http://mathforum.org/electronic.newsletter/ Subscription address: majordomo@mathforum.org Mailing Lists and Newsgroups list: http://mathforum.org/discussions/ Mathedcc This list is intended for anyone interested in technology in math education. Subscription address: listserv@vm1.mcgill.ca Archives: http://archives.math.utk.edu/hypermail/mathedcc/ Mathsed-L A discussion on mathematics in education. Subscription address: listserv@deakin.edu.au MATHWEB-L A general math discussion area. Subscription address: mailserv@hcca.ohio.gov NCTM-L National Council of Teachers of Mathematics. Discussion of mathematics teaching and the national standards. Subscription address: majordomo@mathforum.org Archives: http://mathforum.org/epigone/nctm-l Math and Science Board from ProTeacher Homepage: http://www.proteacher.net/cgi-bin/dcforum/dcboard.cgi?az=list&forum=science
Question: Please help! A bag has 4 green marbles, 3 red, and 2 yellow. What is the probability that you pick a red marble, do not replace it, then pick a yellow marble?
Reply: Choices are: 2/15, 1/15, 1/9, 1/10. 1/9 is NOT correct!
Reply: If you take out a red and don't replace it then you now have 4 green, 2 red, and 2 yellow. Probability is the part over the whole. So it's 2 yellow / 8 = 1/4. ... now I'm confused. That's not an answer choice.
Reply: Yeah! Now you can see why I need help lol.
Reply: I'm correct about the meaning of probability so I'm not sure.
Reply: Yeah! I have no idea.
Reply: A bag has 4 green marbles, 3 red, and 2 yellow. What is the probability that you pick a red marble, do not replace it, then pick a yellow marble? First find the total: 9 marbles. There's 3 reds. Chance of picking a red is thus 3/9. Since we don't replace it, there's now only 8 remaining marbles. There are 2 yellow, so the chance of picking a yellow is 2/8. To find the probability of picking a red and then a yellow, multiply those two probabilities together: \[\frac{ 3 }{ 9 } \times \frac{ 2 }{ 8 } = \frac{ 6 }{ 72 } = \frac{ 1 }{ 12 }\]
Reply: OH!!! That makes sense! xD
Reply: It's the probability of doing the two actions in succession. That's why I was wrong.
Reply: But that gives 1/12, which also isn't an answer choice...
even though I'm pretty sure that's the correct solution.
Reply: Unless there's an error in the question.
Reply: Thanks, guys!
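The thread's final answer is easy to check independently, both with exact fractions and with a quick simulation. This sketch is not from the thread; it just verifies the arithmetic.

```python
# Verify: P(red first, then yellow, without replacement)
# from a bag of 4 green, 3 red, and 2 yellow marbles.

from fractions import Fraction
import random

def p_red_then_yellow(green=4, red=3, yellow=2):
    total = green + red + yellow
    # P(red first) * P(yellow second, given a red was removed)
    return Fraction(red, total) * Fraction(yellow, total - 1)

print(p_red_then_yellow())  # 1/12 -- not among the listed choices,
                            # which supports the "error in the question" guess

def simulate(trials=100_000, seed=1):
    """Monte Carlo cross-check of the same probability."""
    bag = ["G"] * 4 + ["R"] * 3 + ["Y"] * 2
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        draw = rng.sample(bag, 2)  # two draws without replacement
        hits += (draw[0] == "R" and draw[1] == "Y")
    return hits / trials

print(simulate())  # close to 1/12, about 0.083
```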
Research Presentations Department of Mathematics Student Research Presentations Nathan Bradford, Morehouse College Exploring Fuzzy Sets The Twentieth Annual F.E. Mapp Science and Mathematics Symposium, April 8, 2008 Sean Ewing, Morehouse College Sign Pattern Matrices and Determinants The Twentieth Annual F.E. Mapp Science and Mathematics Symposium, April 8, 2008 Raphiel Murden, Morehouse College Fractional Calculus and Applications The Twentieth Annual F.E. Mapp Science and Mathematics Symposium, April 8, 2008 William Shropshire, Morehouse College Construction Of Hybrid Real Number System And Its Analytic Properties The Twentieth Annual F.E. Mapp Science and Mathematics Symposium, April 8, 2008 Jason Smith, Morehouse College Fuzzy sets and its applications The Twentieth Annual F.E. Mapp Science and Mathematics Symposium, April 8, 2008 Raphiel Murden, Morehouse College Fractional Calculus The Sixth Annual Harriett J. Walton Symposium on Undergraduate Mathematics Research, April 5, 2008 Samuel Ivy, Morehouse College Unfair Dice The Sixth Annual Harriett J. Walton Symposium on Undergraduate Mathematics Research, April 5, 2008 Jason Smith, Morehouse College Fuzzy sets and its applications The Sixth Annual Harriett J. Walton Symposium on Undergraduate Mathematics Research, April 5, 2008 Sean Ewing, Morehouse College Sign Pattern Matrices and Determinants The Sixth Annual Harriett J. Walton Symposium on Undergraduate Mathematics Research, April 5, 2008 William Shropshire, Morehouse College Construction Of Hybrid Real Number System And Its Analytic Properties The Sixth Annual Harriett J. Walton Symposium on Undergraduate Mathematics Research, April 5, 2008
New Carrollton, MD Precalculus Tutor Find a New Carrollton, MD Precalculus Tutor ...This is where most students start to struggle, and it continues into the later years. Algebra 2 is my favorite subject. This course allows you to truly explore your understanding of functions. 24 Subjects: including precalculus, reading, calculus, geometry ...I have taught every aspect of reading from sound and word recognition through the skills necessary to read great literature. For emergent readers, selection of books that will capture their attention is paramount. I am familiar with children's literature and can help with recommendations - even for boys. 32 Subjects: including precalculus, reading, English, chemistry ...Also, I have taught higher level math subjects and understand what comes up in Precalculus and Calculus, to better get the student ready for these classes. My college minor was Economics, and I took many college courses in American history. Also, I am interested in what happens and has happened throughout my lifetime and earlier. 21 Subjects: including precalculus, calculus, world history, statistics ...I graduated from Williams College with a BA in Economics in 2009. I scored a 170 out of 180 on my LSAT, and a 1500 (710 math, 790 verbal) out of 1600 on the SAT. While at Williams College I tutored freshmen in Economics. 21 Subjects: including precalculus, geometry, accounting, statistics I am currently a math instructor at the University of the District of Columbia. I have been teaching math since 1980. I have a bachelor's and master's degree in Math Secondary Education.
12 Subjects: including precalculus, calculus, geometry, algebra 1
Measure Your Portfolio's Performance Many investors mistakenly base the success of their portfolios on returns alone. Few consider the risk that they took to achieve those returns. Since the 1960s, investors have known how to quantify and measure risk with the variability of returns, but no single measure actually looked at both risk and return together. Today, we have three sets of performance measurement tools to assist us with our portfolio evaluations. The Treynor, Sharpe and Jensen ratios combine risk and return performance into a single value, but each is slightly different. Which one is best for you? Why should you care? Let's find out. Treynor Measure Jack L. Treynor was the first to provide investors with a composite measure of portfolio performance that also included risk. Treynor's objective was to find a performance measure that could apply to all investors, regardless of their personal risk preferences. He suggested that there were really two components of risk: the risk produced by fluctuations in the market and the risk arising from the fluctuations of individual securities. Treynor introduced the concept of the security market line, which defines the relationship between portfolio returns and market rates of return, whereby the slope of the line measures the relative volatility between the portfolio and the market (as represented by beta). The beta coefficient is simply a measure of a portfolio's volatility relative to the market itself. The greater the line's slope, the better the risk-return tradeoff. The Treynor measure, also known as the reward-to-volatility ratio, can be easily defined as: (Portfolio Return – Risk-Free Rate) / Beta The numerator identifies the risk premium and the denominator corresponds with the risk of the portfolio. The resulting value represents the portfolio's return per unit risk.
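The computation is a one-liner; this sketch uses illustrative values (a 10% market return and a 5% risk-free rate) and is not investment advice.

```python
# Treynor measure: excess return per unit of systematic (beta) risk.

def treynor(portfolio_return, risk_free_rate, beta):
    return (portfolio_return - risk_free_rate) / beta

# Illustrative: the market itself (beta = 1) versus a portfolio
# that returned 14% with a beta of 1.03.
print(treynor(0.10, 0.05, 1.0))   # market: 0.05
print(treynor(0.14, 0.05, 1.03))  # portfolio: about 0.087
```

A higher value means more excess return was earned per unit of market risk taken.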
To better understand how this works, suppose that the 10-year annual return for the S&P 500 (market portfolio) is 10%, while the average annual return on Treasury bills (a good proxy for the risk-free rate) is 5%. Then assume you are evaluating three distinct portfolio managers with the following 10-year results: │ Managers │ Average Annual Return │ Beta │ │ Manager A │ 10% │ 0.90 │ │ Manager B │ 14% │ 1.03 │ │ Manager C │ 15% │ 1.20 │ Now, you can compute the Treynor value for each: T(market) = (.10-.05)/1 = .05 T(manager A) = (.10-.05)/0.90 = .056 T(manager B) = (.14-.05)/1.03 = .087 T(manager C) = (.15-.05)/1.20 = .083 The higher the Treynor measure, the better the portfolio. If you had been evaluating the portfolio manager (or portfolio) on performance alone, you may have inadvertently identified manager C as having yielded the best results. However, when considering the risks that each manager took to attain their respective returns, Manager B demonstrated the better outcome. In this case, all three managers performed better than the aggregate market. Because this measure only uses systematic risk, it assumes that the investor already has an adequately diversified portfolio and, therefore, unsystematic risk (also known as diversifiable risk) is not considered. As a result, this performance measure should really only be used by investors who hold diversified portfolios. Sharpe Ratio The Sharpe ratio is almost identical to the Treynor measure, except that the risk measure is the standard deviation of the portfolio instead of considering only the systematic risk, as represented by beta. Conceived by Bill Sharpe, this measure closely follows his work on the capital asset pricing model (CAPM) and by extension uses total risk to compare portfolios to the capital market line. 
The Sharpe ratio can be easily defined as: (Portfolio Return – Risk-Free Rate) / Standard Deviation Using the Treynor example from above, and assuming that the S&P 500 had a standard deviation of 18% over a 10-year period, let's determine the Sharpe ratios for the following portfolio managers: │ Manager │ Annual Return │ Portfolio Standard Deviation │ │ Manager X │ 14% │ 0.11 │ │ Manager Y │ 17% │ 0.20 │ │ Manager Z │ 19% │ 0.27 │ S(market) = (.10-.05)/.18 = .278 S(manager X) = (.14-.05)/.11 = .818 S(manager Y) = (.17-.05)/.20 = .600 S(manager Z) = (.19-.05)/.27 = .519 Once again, we find that the best portfolio is not necessarily the one with the highest return. Instead, it's the one with the superior risk-adjusted return, or in this case the fund headed by manager X. Unlike the Treynor measure, the Sharpe ratio evaluates the portfolio manager on the basis of both rate of return and diversification (as it considers total portfolio risk as measured by standard deviation in its denominator). Therefore, the Sharpe ratio is more appropriate for well diversified portfolios, because it more accurately takes into account the risks of the portfolio. Jensen Measure Like the previous performance measures discussed, the Jensen measure is also based on CAPM. Named after its creator, Michael C. Jensen, the Jensen measure calculates the excess return that a portfolio generates over its expected return. This measure of return is also known as alpha. The Jensen ratio measures how much of the portfolio's rate of return is attributable to the manager's ability to deliver above-average returns, adjusted for market risk. The higher the ratio, the better the risk-adjusted returns.
A portfolio with a consistently positive excess return will have a positive alpha, while a portfolio with a consistently negative excess return will have a negative alpha. The formula is broken down as follows: Jensen's Alpha = Portfolio Return – Benchmark Portfolio Return Where: Benchmark Return (CAPM) = Risk-Free Rate of Return + Beta (Return of Market – Risk-Free Rate of Return) So, if we once again assume a risk-free rate of 5% and a market return of 10%, what is the alpha for the following funds? │ Manager │ Average Annual Return │ Beta │ │ Manager D │ 11% │ 0.90 │ │ Manager E │ 15% │ 1.10 │ │ Manager F │ 15% │ 1.20 │ First, we calculate the portfolio's expected return: ER(D) = .05 + 0.90 (.10-.05) = .0950 or 9.5% return ER(E) = .05 + 1.10 (.10-.05) = .1050 or 10.50% return ER(F) = .05 + 1.20 (.10-.05) = .1100 or 11% return Then, we calculate the portfolio's alpha by subtracting the expected return of the portfolio from the actual return: Alpha D = 11% - 9.5% = 1.5% Alpha E = 15% - 10.5% = 4.5% Alpha F = 15% - 11% = 4.0% Which manager did best? Manager E did best because, although manager F had the same annual return, it was expected that manager E would yield a lower return because the portfolio's beta was significantly lower than that of portfolio F. Of course, both rate of return and risk for securities (or portfolios) will vary by time period. The Jensen measure requires the use of a different risk-free rate of return for each time interval considered. So, let's say you wanted to evaluate the performance of a fund manager for a five-year period using annual intervals; you would also have to examine the fund's annual returns minus the risk-free return for each year and relate it to the annual return on the market portfolio, minus the same risk-free rate. Conversely, the Treynor and Sharpe ratios examine average returns for the total period under consideration for all variables in the formula (the portfolio, market and risk-free asset).
Like the Treynor measure, however, Jensen's alpha calculates risk premiums in terms of beta (systematic, undiversifiable risk) and therefore assumes the portfolio is already adequately diversified. As a result, this ratio is best applied with diversified portfolios, like mutual funds.

The Bottom Line

Portfolio performance measures should be a key aspect of the investment decision process. These tools provide the necessary information for investors to assess how effectively their money has been invested (or may be invested). Remember, portfolio returns are only part of the story. Without evaluating risk-adjusted returns, an investor cannot possibly see the whole investment picture, which may inadvertently lead to clouded investment decisions.
Source: https://sg.finance.yahoo.com/news/measure-portfolios-performance-222226627.html (retrieved 2014-04-20)
Predator/Prey: Questions

Predator/Prey with Functional Response

The Homework has the assignment to hand in. These Questions are basically drill to build an intuitive grasp of functional response and isoclines, so that you expect the answer you get.

1. On Scenario | Functional Response, look at Types I and II. With the preset parameter values (killrate 0.05 and handling time 0.02), the Type I and II responses seem colinear for a little ways. At what prey population do they appear to diverge?

2. Using the same Scenario, set the Type II killrate to 0.1. What happens?

3. The difference between the predator/prey Scenario | Linear Codependent and Scenario | Saturated Predator is their functional responses (Type I vs. Type II). To simplify your comparisons, the preset parameters match (i.e., both have killrate 0.05). Now look at (in sequence):
   a. "Linear Codependent" with its default killrate 0.05.
   b. "Saturated Predator" with its default killrate 0.05. (How many cycles are there on the time series?)
   c. "Saturated Predator" with killrate 0.1. (What happens to the phase plot (top) trajectory shape? How many cycles on the time series?)
   d. Back to "Linear Codependent". Tell yourself a story about what this means. Based on your story, predict what will happen with a Saturated Predator killrate of 0.01 and of 0.2. Test your theories.

4. Play the Question 2/Question 3 game again, this time varying only the handling time. (Set parameter "predator saturation" to killrate times handling time. See Details.) Again, what does this mean? Does the handling time affect the number of cycles on the time series?

5. Pick Scenario | Saturated Predator again, to make sure you've got the default parameters. Now try doubling and halving the prey birthrate. What happens?

6. Now try doubling and halving the predator deathrate. What happens?

7. The number of cycles is determined by the period, or length of each cycle. Are the cycles of equal length in the Saturated Predator system? Which parameters affect the period? Why would that make sense?

8. The quantities Prey and Predator are population densities. Let's say you wanted to scale these. For instance, you have data from Idaho on a one-meter-square map and data from Wyoming on a ten-meter-square map. How do you adjust your parameters to scale the one-meter data down two orders of magnitude? (0.01 per 1m^2 = 1.0 per 100m^2 = 1.0 per (10m)^2.)

9. In the natural world, there are extremely few predators who depend on a single sort of prey. What aspects of these models do you think would be relevant in practice? Which are probably less relevant?

10. Take a predator/prey ecosystem you're familiar with, like cats catching mice and baby moles in Connecticut, but include the cat's food bowl. Pick one of the models here and describe its fitness to describe your sample ecosystem. What ought to happen when you add more prey types? What do you think the equations suggest about adding prey types?

Ginger Booth, revised April 2005, orig. December 1998, for oswald.schmitz@yale.edu
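For Questions 1 through 4, the simulation's Type I and Type II responses are presumably the standard linear and Holling disc forms. A sketch in Python (an assumption about the simulator's internals; `a` is the killrate and `h` the handling time, matching the preset values 0.05 and 0.02):

```python
def type_i(N, a=0.05):
    """Type I: kills per predator per unit time grow linearly with prey density N."""
    return a * N

def type_ii(N, a=0.05, h=0.02):
    """Type II (Holling disc equation): saturates once handling time matters."""
    return a * N / (1 + a * h * N)

# The two curves agree while a*h*N << 1 and diverge as prey density grows;
# scanning N shows roughly where they stop looking colinear (Question 1).
for N in (10, 50, 100, 500, 1000):
    print(N, round(type_i(N), 3), round(type_ii(N), 3))
```

At N = 1000 with the preset parameters, a*h*N = 1, so the Type II kill rate is exactly half the Type I value: the saturation the "Saturated Predator" scenario is named for.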
Source: http://gingerbooth.com/coursewareCBC/neweco/twospecies/readers.ppr/questions.html (retrieved 2014-04-17)
{-# LANGUAGE BangPatterns #-}

{- |
This module provides the 'Str' data type, which is used by
the underlying 'uniplate' and 'biplate' methods. It should
not be used directly under normal circumstances.
-}
module Data.Generics.Str where

import Data.Generics.Uniplate.Internal.Utils
import Control.Applicative
import Control.Monad
import Data.Foldable
import Data.Monoid
import Data.Traversable

-- * The Data Type

data Str a = Zero | One a | Two (Str a) (Str a)
             deriving Show

instance Eq a => Eq (Str a) where
    Zero == Zero = True
    One x == One y = x == y
    Two x1 x2 == Two y1 y2 = x1 == y1 && x2 == y2
    _ == _ = False

{-# INLINE strMap #-}
strMap :: (a -> b) -> Str a -> Str b
strMap f x = g SPEC x
    where
        g !spec Zero = Zero
        g !spec (One x) = One $ f x
        g !spec (Two x y) = Two (g spec x) (g spec y)

{-# INLINE strMapM #-}
strMapM :: Monad m => (a -> m b) -> Str a -> m (Str b)
strMapM f x = g SPEC x
    where
        g !spec Zero = return Zero
        g !spec (One x) = liftM One $ f x
        g !spec (Two x y) = liftM2 Two (g spec x) (g spec y)

instance Functor Str where
    fmap f Zero = Zero
    fmap f (One x) = One (f x)
    fmap f (Two x y) = Two (fmap f x) (fmap f y)

instance Foldable Str where
    foldMap m Zero = mempty
    foldMap m (One x) = m x
    foldMap m (Two l r) = foldMap m l `mappend` foldMap m r

instance Traversable Str where
    traverse f Zero = pure Zero
    traverse f (One x) = One <$> f x
    traverse f (Two x y) = Two <$> traverse f x <*> traverse f y

-- | Take the type of the method, will crash if called
strType :: Str a -> a
strType = error "Data.Generics.Str.strType: Cannot be called"

-- | Convert a 'Str' to a list, assumes the value was created
--   with 'listStr'
strList :: Str a -> [a]
strList x = builder (f x)
    where
        f (Two (One x) xs) cons nil = x `cons` f xs cons nil
        f Zero cons nil = nil

-- | Convert a list to a 'Str'
listStr :: [a] -> Str a
listStr (x:xs) = Two (One x) (listStr xs)
listStr [] = Zero

-- | Transform a 'Str' to a list, and back again, in a structure
--   preserving way. The output and input lists must be equal in
--   length.
strStructure :: Str a -> ([a], [a] -> Str a)
strStructure x = (g x [], fst . f x)
    where
        g :: Str a -> [a] -> [a]
        g Zero xs = xs
        g (One x) xs = x:xs
        g (Two a b) xs = g a (g b xs)

        f :: Str a -> [a] -> (Str a, [a])
        f Zero rs = (Zero, rs)
        f (One _) (r:rs) = (One r, rs)
        f (Two a b) rs1 = (Two a2 b2, rs3)
            where
                (a2,rs2) = f a rs1
                (b2,rs3) = f b rs2
Source: http://hackage.haskell.org/package/uniplate-1.6.9/docs/src/Data-Generics-Str.html (retrieved 2014-04-18)
Naperville Science Tutor

Find a Naperville Science Tutor

...I was also a graduate research assistant for four years and a graduate teaching assistant for an undergraduate course on nuclear and particle physics during grad school. I have a Ph.D. in experimental nuclear physics. I have completed undergraduate coursework in the following math subjects - dif...
10 Subjects: including physics, algebra 2, calculus, geometry

...I have a Master's degree in Biology and a Bachelor's in Genetics and Development. I am passionate about what I teach and will help your son or daughter master the material they need to learn. I believe in understanding the concepts of biology and genetics, not just memorizing terms.
10 Subjects: including genetics, biology, biochemistry, geometry

...I won the Botany award for my genetic research on plants as an undergraduate, and I have done extensive research in Computational Biology for my Ph.D. dissertation. I was a teaching assistant for both undergraduate and graduate students for a variety of Biology classes. I am fluent in a range of Science and History disciplines.
41 Subjects: including ACT Science, precalculus, trigonometry, finance

...I am also a certified lifeguard. I have taught students ages 6 months to adult. I have taught mommy/daddy-and-me classes all the way up to adults who were too scared to put their face in the water but who were swimming laps by the time I finished teaching them after 6 weeks.
25 Subjects: including psychology, English, sociology, ESL/ESOL

...I led workshops for a year at the University. My job was to make sure students understood the lecture material and answer any questions they had. In addition, I led the students through exercises that were created by the Biology Department.
27 Subjects: including ACT Science, chemistry, ecology, reading
Source: http://www.purplemath.com/naperville_science_tutors.php (retrieved 2014-04-21)
Lone Tree, CO Prealgebra Tutor

Find a Lone Tree, CO Prealgebra Tutor

...I'm patient, friendly, and easy-going while being goal-oriented, practical, and encouraging. My mission is to provide my clients with the best tools possible to solve their own problems and succeed on their own. I graduated in May of 2013 with a degree in Physics and a minor in Mathematics.
13 Subjects: including prealgebra, reading, physics, calculus

Students are fueled by their own achievements. My goal is to guide students to experience accomplishments that spark motivation. I establish high but realistic expectations; use positive reinforcement; match teaching technique to learning style; and ensure that learning experiences enhance self-esteem.
29 Subjects: including prealgebra, reading, English, writing

...However, once I developed my study skills, I realized that homework, essays, and studying for exams didn't take as long as they used to. Even though I was in the gifted and talented program from elementary through high school, I did not fully develop my study skills until I was a college student. A...
31 Subjects: including prealgebra, reading, writing, English

...I relate math concepts to physical realities to help develop an intuitive approach to problem solving. In addition to having a degree which requires college math up to Differential Equations, including Algebra, Algebra 2, Calculus 1-3 and other degree-specific items, I have recently tutored my s...
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2

...I strongly believe that ANYONE can be "good" at math. Through tutoring, I seek to remove the intimidation of math and physics and help students build up confidence and understanding. Throughout my high school and college careers I was constantly assisting friends, peers, and my younger sister with math and physics homework.
13 Subjects: including prealgebra, calculus, physics, geometry
Source: http://www.purplemath.com/Lone_Tree_CO_prealgebra_tutors.php (retrieved 2014-04-18)
Sam H. Lawson Middle, Cupertino, CA

San Jose, CA 95131

Patient and Caring Tutoring in Study Skills, SAT/ACT Prep, and Math

...I am having more fun now than I have in 20 years. I have worked with many students, ranging in age from K through adult. Subjects handled have been SAT and ACT test prep, math from pre through trigonometry, study skills, reading, and writing. I am a patient and caring...

Offering 10+ subjects including algebra 1 and algebra 2
Source: http://www.wyzant.com/Sam_H_Lawson_Middle_Cupertino_CA_algebra_tutors.aspx (retrieved 2014-04-19)
The Mid-Winters Eve Blog Hop is hosted by I Am A Reader, Not A Writer and Oasis For Ya. This hop runs from 12-21-12 thru midnight 12-27-12. The winner will be chosen and announced on 12-28-12.

For this hop I have up for grabs a book of your choice, up to $10, from The Book Depository. Just make sure The Book Depo ships free to your address.

This giveaway is open to all followers of this blog. Just leave a comment with a valid email addy and you are entered. Be sure to check out all the blogs participating in this Hop. Good Luck!
Source: http://michellesramblins.blogspot.com/2012/12/mid-winters-eve-blog-hop-int.html (retrieved 2014-04-21)
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

Accession: N20120013787
Publication: Aug 2012
Media Count: 26p
Personal Authors: N. S. Liu; T. H. Shih

Abstract: In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
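The abstract's definition can be written out in the standard density-weighted (Favre) PDF notation. This is a sketch of the usual form, not the report's own equations; the symbols ($\rho$ for mass density, $\phi$ for the composition vector, $\psi$ for its sample-space value) are assumptions:

```latex
% Fine-grained PDF (FG-PDF): a delta function pinned to the local state
%   \mathcal{P}(\psi; x, t) = \delta\big(\phi(x,t) - \psi\big)
% APDF: ensemble average of the FG-PDF with mass-density weighting
\tilde{P}(\psi; x, t)
  = \frac{\left\langle \rho(x,t)\,\delta\big(\phi(x,t)-\psi\big) \right\rangle}
         {\left\langle \rho(x,t) \right\rangle}
% Density-weighted ensemble means are then recovered exactly:
\widetilde{Q}(x,t) = \int Q(\psi)\, \tilde{P}(\psi; x, t)\, \mathrm{d}\psi
```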
Keywords: large eddy simulation; Navier-Stokes equation; probability theory; reacting flow; Reynolds averaging; spatial filtering; steady flow; turbulent flow
Source: National Aeronautics and Space Administration
Subject: 72B - Algebra, Analysis, Geometry, & Mathematical Logic
Corporate Author: National Aeronautics and Space Administration, Cleveland, OH. NASA John H. Glenn Research Center at Lewis Field.
Document Type: Technical report
NTIS Issue: 1306
Source: http://www.ntis.gov/search/product.aspx?ABBR=N20120013787 (retrieved 2014-04-19)
Describe how to change the equation f(x) = x^2 + 4x - 2 in order to reflect the function over the y-axis. Also, how do you reflect it over the x-axis?

To reflect over the x-axis, you would consider "instead of going up 4, go down 4", or "instead of y = 4, y = -4". That means we would need to multiply the entire function (or y) by -1. So, for the reflection over the x-axis:

f(x) = x^2 + 4x - 2  --->  f(x) = -(x^2 + 4x - 2)

For reflection over the y-axis, you would consider "instead of moving right, move left", or "instead of x = 4, x = -4". Therefore, we would multiply the x by -1. So, we would have:

f(x) = x^2 + 4x - 2  --->  f(x) = (-x)^2 + 4(-x) - 2
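A quick numeric sanity check of the two reflections (a sketch in Python):

```python
def f(x):
    """The original function."""
    return x**2 + 4*x - 2

def reflect_over_x(x):
    """Reflection over the x-axis: flip every output, y -> -y."""
    return -f(x)

def reflect_over_y(x):
    """Reflection over the y-axis: flip every input, x -> -x."""
    return f(-x)

print(f(3), reflect_over_x(3), reflect_over_y(3))  # prints: 19 -19 -5
```

At x = 3 the original gives 19, the x-axis reflection gives -19 (same x, negated y), and the y-axis reflection gives f(-3) = -5 (the value the original takes at the mirrored x).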
Source: http://www.enotes.com/homework-help/describe-how-change-equation-f-x0-x-2-4x-2-order-461092 (retrieved 2014-04-19)
st: How to implement FGLS on estimated regression coefficients?

From: Michael Boehm <michael.boehm1@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: How to implement FGLS on estimated regression coefficients?
Date: Fri, 17 Feb 2012 00:00:39 +0000

Dear all,

I want to run a FGLS estimation on the coefficients from a seemingly unrelated regression, i.e. I first regress

    sureg (dd i.decade##(c.t1 c.t2)) (w i.decade##(c.t1 c.t2)), coeflegend

with output

                 |      Coef.   Legend
    dd           |
        1.decade |   .1909903   _b[dd:1.decade]
              t1 |   .1976521   _b[dd:t1]
              t2 |  -.2013332   _b[dd:t2]
    decade#c.t1  |
               1 |  -.0220626   _b[dd:1.decade#c.t1]
    decade#c.t2  |
               1 |   .0245381   _b[dd:1.decade#c.t2]
           _cons |   .5002711   _b[dd:_cons]
    w            |
        1.decade |   .5814188   _b[w:1.decade]
              t1 |    1.50046   _b[w:t1]
              t2 |   1.497409   _b[w:t2]
    decade#c.t1  |
               1 |   .1770365   _b[w:1.decade#c.t1]
    decade#c.t2  |
               1 |  -.1983975   _b[w:1.decade#c.t2]
           _cons |   1.809371   _b[w:_cons]

Then I want to stack y = (_b[w:1.decade#c.t1], _b[w:1.decade#c.t2])' and FGLS-regress it on x = (_b[dd:t1], _b[dd:t2])' (a regression with two observations), with the weighting matrix being the covariance matrix of these parameter estimates from the previous SUR regression.

Ideally, I would also like to report the value of the objective function that FGLS minimizes in optimum, because this is supposed to be chi-squared distributed with 2 degrees of freedom under the H0 that my model is correct.

Sorry for this long explanation, but does anyone know how to implement this procedure nicely in Stata?

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
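Setting the Stata question aside, the second-stage algebra is small enough to spell out. A sketch in Python for the two-observation, one-regressor case; the `y` and `x` vectors below are the coefficients quoted in the post, while `W` is made up, standing in for the covariance block one would pull from the SUR estimate's VCE:

```python
def fgls_one_param(y, x, W):
    """GLS slope b and minimized objective J for y = b*x + e, Var(e) = W,
    with two stacked observations and a 2x2 weighting matrix W."""
    # Invert the 2x2 covariance matrix by hand.
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    Wi = [[ W[1][1] / det, -W[0][1] / det],
          [-W[1][0] / det,  W[0][0] / det]]
    quad = lambda u, v: sum(u[i] * Wi[i][j] * v[j]
                            for i in range(2) for j in range(2))
    b = quad(x, y) / quad(x, x)      # b = (x' W^-1 x)^-1 x' W^-1 y
    r = [y[i] - b * x[i] for i in range(2)]
    J = quad(r, r)                   # minimized objective: the candidate
    return b, J                      # chi-squared statistic under the null

# y and x from the posted sureg output; W is a hypothetical covariance block.
y = [0.1770365, -0.1983975]   # _b[w:1.decade#c.t1], _b[w:1.decade#c.t2]
x = [0.1976521, -0.2013332]   # _b[dd:t1], _b[dd:t2]
W = [[0.010, 0.002],
     [0.002, 0.015]]
b, J = fgls_one_param(y, x, W)
print(round(b, 4), round(J, 4))
```

With W equal to the identity the formula collapses to OLS, which is an easy way to check the implementation.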
Source: http://www.stata.com/statalist/archive/2012-02/msg00815.html (retrieved 2014-04-19)
Zeros of a combination of exponentials

Is there any known result about necessary and sufficient conditions for the existence of zeros of a function $f(x)=\sum_{n=1}^{N} a_n e^{b_n x}$, where $a_n,b_n \in \mathbb{R}$ for all $n=1,2,\cdots,N$, $a_1,a_N >0$, $b_1 < b_2 < \cdots < b_N$, and $x \in \mathbb{R}$?

It is known (see "Problems and Theorems in Analysis II" by Pólya and Szegő) that, using a generalization of Descartes' rule of signs, it is possible to say the following: denoting by $Z$ the number of changes of sign in the sequence of the $a_n$ and by $Z_0$ the number of zeros of $f(x)$, $Z-Z_0 \geq 0$ is an even integer. The number $Z-Z_0$ must be even since for $x \rightarrow -\infty$ the dominant term of $f(x)$ is $a_1 e^{b_1 x}>0$, and for $x \rightarrow +\infty$ the dominant term is $a_N e^{b_N x}>0$.

This gives an upper limit for the number of zeros, but is there any way to say "$f(x)$ must have at least $M$ zeros", with $0 < M \leq Z$?

Thanks in advance,

Tags: exponential-polynomials, nt.number-theory

Comments:
- Lower bounds on real zeros (of anything) are usually a lot harder to get than upper bounds. In the Descartes rule of signs, you get a lower bound of Z (mod 2) because the number of zeros has the same parity as Z. (BTW, are you sure about your Z should be even in your statement?) I can't think of any interesting lower bounds. There might be something, but I doubt anything spectacular. – Thierry Zell Nov 1 '10 at 14:48
- Ciao Thierry, thanks for your interest. You're right, my exposition of the result in Pólya-Szegő is unclear; I'm changing the text of my question to clarify it. – nicodds Nov 1 '10 at 14:56

1 Answer

Note that we can assume wlog that $b_n\geq 0$. In the case they are rationals, writing $b_n=p_n/q$, with $p_n\in\mathbb{N}$, $q\in\mathbb{N}_+$, and $t:=e^{x/q}$, puts everything into the case of positive roots of a real polynomial, with no more and no less generality. The book by Pólya and Szegő has a section on the location and number of positive roots of a polynomial; in any case, whatever you can say for it can clearly be translated for your exponential equation. Then, the case of real $b_n$ can certainly be treated by approximation.

Comments:
- @Pietro: Why is it necessary to consider the irrational case separately? If there are two $b$'s that are linearly independent over $\mathbb{Q}$, then the equation is equivalent to a system of two similar equations, each with fewer terms than the original. Right? – Mark Sapir Nov 1 '10 at 18:25
- Would you give more details on this? It seems very reasonable to me, but I'm not sure I get how to do it. – Pietro Majer Nov 4 '10 at 7:32
- Thanks Pietro, your answer put me in a good direction – nicodds Nov 8 '10 at 0:19
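The bound discussed above is easy to check numerically for a concrete example. The sketch below (pure Python, grid-based zero counting, so it only detects simple zeros) uses $a=(1,-3,1)$, $b=(0,1,2)$; with $t=e^x$ this is the polynomial $t^2-3t+1$, which has two positive roots, so here $Z=Z_0=2$:

```python
import math

# Example: f(x) = e^{0x} - 3 e^{1x} + e^{2x}
a = [1.0, -3.0, 1.0]
b = [0.0, 1.0, 2.0]

def f(x):
    return sum(ai * math.exp(bi * x) for ai, bi in zip(a, b))

# Z: number of sign changes in the coefficient sequence a_1, ..., a_N
signs = [1 if ai > 0 else -1 for ai in a if ai != 0]
Z = sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# Z0: sign changes of f on a fine grid; each corresponds to a simple zero
xs = [-10 + 20 * i / 20000 for i in range(20001)]
vals = [f(x) for x in xs]
Z0 = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

print(Z, Z0)
# Generalized Descartes rule: Z - Z0 is a nonnegative even integer
assert Z >= Z0 and (Z - Z0) % 2 == 0
```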
Selected Publications

Below is a list of selected publications of our group. Most are available electronically, in PostScript or PDF format. For copies of other publications, please contact the author(s) directly.

"Self-Timed Carry-Lookahead Adders" [pdf]
F.-C. Cheng, S.H. Unger and M. Theobald
IEEE Transactions on Computers, Special Issue on Computer Arithmetic, Volume 49, Issue 7, July 2000, Pp. 659-672.

"A Low-Latency FIFO for Mixed-Clock Systems" [pdf] [ps]
T. Chelcea and S.M. Nowick
Proceedings of the IEEE Workshop on VLSI (WVLSI), 2000, Pp. 119-126.

"Low-Latency Asynchronous FIFO's Using Token Rings" [pdf] [ps]
T. Chelcea and S.M. Nowick
Proceedings of the Sixth International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC'00), 2000, Pp. 210-220.

"Synthesis for Logical Initializability of Synchronous Finite-State Machines" [pdf] [ps]
M. Singh and S.M. Nowick
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 8, Oct. 2000, Pp. 542-557.

"Fine-Grain Pipelined Asynchronous Adders for High-Speed DSP Applications" [pdf] [ps]
M. Singh and S.M. Nowick
Proceedings of the IEEE Computer Society Workshop on VLSI (WVLSI), 2000, Pp. 111-118.

"High-Throughput Asynchronous Pipelines for Fine-Grain Dynamic Datapaths" [pdf] [ps]
M. Singh and S.M. Nowick
Proceedings of the Sixth International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC'00), 2000, Pp. 198-209. Best Paper Award.

"A Power-Efficient Duplex Communication System" [pdf] [ps]
S.B. Furber, A. Efthymiou, and M. Singh
Workshop on Asynchronous Interfaces: Tools, Techniques and Implementations (AINT-2000), Delft, The Netherlands, July 2000.

"Scanning the Technology: Applications of Asynchronous Circuits" [pdf] [ps]
C.H. Van Berkel, M.B. Josephs, and S.M. Nowick
Proceedings of the IEEE, Volume 87, Issue 2, Feb. 1999, Pp. 223-233.

"OPTIMISTA: State Minimization of Asynchronous FSMs for Optimum Output Logic" [pdf] [ps]
R.M. Fuhrer and S.M. Nowick
Proceedings of the International Conference on Computer-Aided Design, 1999, Pp. 7-13.

"Sequential Optimization of Asynchronous and Synchronous Finite-State Machines: Algorithms and Tools" [pdf] [ps]
R.M. Fuhrer
PhD Thesis, Computer Science Department, Columbia University, 1999.

"MINIMALIST: An Environment for the Synthesis, Verification and Testability of Burst-Mode Asynchronous Machines" [pdf] [ps]
R.M. Fuhrer, S.M. Nowick, M. Theobald, N.K. Jha, B. Lin, and L. Plana
Technical Report CUCS-020-99, Computer Science Department, Columbia University, July 1999.

"Fast Heuristic and Exact Algorithms for Two-Level Hazard-Free Logic Minimization" [pdf]
M. Theobald and S. Nowick
IEEE Transactions on Computer-Aided Design, Volume 11, Nov. 1998, Pp. 1130-1147.

"An Implicit Method for Hazard-Free Two-Level Logic Minimization" [pdf]
M. Theobald and S. Nowick
IEEE International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC'98), March 1998, Pp. 58-69. Best Paper Finalist.

"A Fast Asynchronous Huffman Decoder for Compressed-Code Embedded Processors" [pdf] [ps]
R. Benes, S.M. Nowick, and A. Wolfe
Proceedings of the Fourth International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC'98), March 1998, Pp. 43-56.

"Synthesis of Low-Power Asynchronous Circuits in a Specified Environment" [pdf]
S.M. Nowick and M. Theobald
International Symposium on Low Power Electronics and Design, August 1997, Pp. 92-95.

"A High-Speed Asynchronous Decompression Circuit for Embedded Processors" [pdf] [ps]
M. Benes, A. Wolfe, and S.M. Nowick
Proceedings of the Seventeenth Conference on Advanced Research in VLSI, 1997, Pp. 219-236.

"Synthesis of Asynchronous Circuits for Stuck-at and Robust Path Delay Fault Testability" [pdf] [ps]
S.M. Nowick, N.K. Jha, and F.-C. Cheng
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Volume 16, Issue 12, Dec. 1997, Pp. 1514-1521.

"Speculative Completion for the Design of High-Performance Asynchronous Dynamic Adders" [pdf] [ps]
S.M. Nowick, K.Y. Yun, P.A. Beerel, and A.E. Dooply
Proceedings of the Third International Symposium on Advanced Research in Asynchronous Circuits and Systems (ASYNC'97), 1997, Pp. 210-223.

"Synthesis for Logical Initializability of Synchronous Finite State Machines" [pdf] [ps]
M. Singh and S.M. Nowick
Proceedings of the Tenth International Conference on VLSI Design, 1997, Pp. 76-80.

"An Introduction to Asynchronous Circuit Design" [pdf] [ps]
A. Davis and S.M. Nowick
Technical Report UUCS-97-013, Computer Science Department, University of Utah, Sep. 1997.

"Fast OFDD-Based Minimization of Fixed Polarity Reed-Muller Expressions" [pdf]
R. Drechsler, M. Theobald, and B. Becker
IEEE Transactions on Computers, Volume 45, Issue 11, November 1996, Pp. 1294-1299.

"Espresso-HF: A Heuristic Hazard-Free Minimizer for Two-Level Logic" [pdf]
M. Theobald, S.M. Nowick, T. Wu
Proceedings of the 33rd Annual Design Automation Conference (DAC), 1996, Pp. 71-76.

"Synthesis-for-Initializability of Asynchronous Sequential Machines" [pdf] [ps]
M. Singh and S.M. Nowick
Proceedings of the International Test Conference, 1996, Pp. 232-241.

"State Assignment for Initializability of Synchronous Finite State Machines" [pdf] [ps]
M. Singh and S.M. Nowick
International Test Synthesis Workshop, Santa Barbara, CA, May 1996.

"Symbolic Hazard-Free Minimization and Encoding of Asynchronous Finite State Machines" [pdf] [ps]
R.M. Fuhrer, B. Lin, and S.M. Nowick
Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD-95), 1995, Pp. 604-611.

"Exact Two-Level Minimization of Hazard-Free Logic with Multiple-Input Changes" [pdf] [ps]
S.M. Nowick and D.L. Dill
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Volume 14, Issue 8, Aug. 1995, Pp. 986-997.

"Automatic Synthesis of Burst-Mode Asynchronous Controllers" [pdf] [ps]
S.M. Nowick
Technical Report CSL-TR-95-686, Computer Science Department, Stanford University, Dec. 1995.

"Fast OFDD based Minimization of Fixed Polarity Reed-Muller Expressions" [pdf]
R. Drechsler, M. Theobald, and B. Becker
European Design Automation Conference (Euro-DAC), Sep. 1994, Pp. 2-7.
Blawenburg SAT Math Tutor

...One of my most recent students in SAT prep earned 800CR/780M, and another earned a combined score of 2290! I'd love to teach you in any of my listed academic subjects. I favor a dual approach, focused on both understanding concepts and going through practice problems.
26 Subjects: including SAT math, English, calculus, writing

...My education is rooted deeply in Physics, as I most recently received a Master's in Physics from the University of Connecticut. I taught introductory physics courses at UConn and enjoyed seeing my students grow both in academics and critical thinking, validated through both testing and laborator...
9 Subjects: including SAT math, calculus, physics, algebra 1

...This allows the students to gain a deeper understanding of concepts so that the ideas can be applied to a larger range of questions. In my experience, this is what works and I have seen tremendous growth in every student I've tutored. I find these breakthroughs very rewarding and look forward t...
12 Subjects: including SAT math, calculus, geometry, algebra 1

...My students have seen an average total score improvement of over 400 points. I scored a perfect 800 on the SAT Critical Reading section. My students have seen an average total score improvement of over 400 points.
10 Subjects: including SAT math, GMAT, SAT reading, SAT writing

...If you choose to work with me and utilize my effective learning methods, I guarantee that you will find extraordinary success in an efficient and enjoyable way. I am fluent in Mandarin Chinese and have taught it at the secondary level for a number of years. My minor during my university studies w...
37 Subjects: including SAT math, English, algebra 1, Chinese
College Mathematics Journal Contents—September 2013

The September issue of The College Mathematics Journal is devoted to articles about puzzles and games. The games discussed include Set, Mancala, and Chomp. Puzzles attacked include chess on a triangular, honeycomb board; Instant Insanity II; and Boggle Logic Puzzles. Problems and Solutions challenge readers and Media Highlights keep them well-informed, and, finally, there is a Sudoku to solve: a Tetris Sudoku courtesy of Philip Riley and Laura Taalman.—Michael

Vol. 44, No. 4, pp. 258-344.

Journal subscribers and MAA members: Please click 'Login' in the upper right corner and access your journal through your member portal (My Subscriptions).

Sets, Planets, and Comets
Mark Baker, Jane Beltran, Jason Buell, Brian Conrey, Tom Davis, Brianna Donaldson, Jeanne Detorre-Ozeki, Leila Dibble, Tom Freeman, Robert Hammie, Julie Montgomery, Avery Pickford, and Justine Wong
Sets in the game Set are lines in a certain four-dimensional space. Here we introduce planes into the game, leading to interesting mathematical questions, some of which we solve, and to a wonderful variation on the game Set, in which every tableau of nine cards must contain at least one configuration for a player to pick up.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.258

Instant Insanity II
Tom Richmond and Aaron Young
Instant Insanity II is a sliding mechanical puzzle whose solution requires the special alignment of 16 colored tiles. We count the number of solutions of the puzzle's classic challenge and show that the more difficult ultimate challenge has, up to row permutation, exactly two solutions, and further show that no similarly-constructed puzzle can have a unique ultimate solution.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.265

Mancala Matrices
L. Taalman, A. Tongen, B. Warren, F. Wyrick-Flax, and I. Yoon
This paper introduces a new matrix tool for the sowing game Tchoukaillon, which establishes a relationship between board vectors and move vectors that does not depend on actually playing the game. This allows for simpler proofs than currently appear in the literature for two key theorems, as well as a new method for constructing move vectors. We also explore extensions to Mancala, a popular two-player sowing game.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.273

Proof Without Words: Squares Modulo 3
Roger B. Nelsen
Using the fact that the sum of the first n odd numbers is n^2, we show visually that n^2 ≡ 0 (mod 3) when n ≡ 0 (mod 3), and n^2 ≡ 1 (mod 3) when n ≡ ±1 (mod 3).
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.283

Chomp in Disguise
Andrew MacLaughlin and Alex Meadows
We investigate Chomp, a game popular with chocolate lovers, and various other combinatorial games associated with it.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.284

Tetris Sudoku
Philip Riley and Laura Taalman
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.292

Boggle Logic Puzzles: Minimal Solutions
Jonathan Needleman
Boggle logic puzzles are based on the popular word game Boggle played backwards. Given a list of words, the problem is to recreate the board. We explore these puzzles on a
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.293

Counting Knights and Knaves
Oscar Levin and Gerri M. Roberts
To understand better some of the classic knights and knaves puzzles, we count them. Doing so reveals a surprising connection between puzzles and solutions, and highlights some beautiful combinatorial
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.300

Domination and Independence on a Triangular Honeycomb Chessboard
Joe DeMaio and Hong Lien Tran
We define moves for king, queen, rook, bishop, and knight on a triangular honeycomb chessboard. Domination and independence numbers on this board for each piece are analyzed.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.307

Are Stupid Dice Necessary?
Frank Bermudez, Anthony Medina, Amber Rosin, and Eren Scott
A pair of 6-sided dice cannot be relabeled to make the sums 2, 3, . . ., 12 equally likely. It is possible to label seven 10-sided dice so that the sums 7, 8, . . ., 70 occur equally often. We investigate such relabelings for pq-sided dice, where p and q are distinct primes, and show that these relabelings usually involve stupid dice, that is, dice with the same label on every face.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.315

Proof Without Words: The Area of an Inner Square
Marc Chamberland
What is the area of the (inner) square obtained by slicing the corners off a larger square? This visual proof avoids algebra by considering the area of a parallelogram.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.322

A Power Rule Proof without Limits
Colin Day
Without using limits, we prove that the integral of x^n from 0 to L is L^(n+1)/(n + 1) by exploiting the symmetry of an n-dimensional cube.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.323

Problems 1006-1010
Solutions 981-985
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.325

Encyclopedia of Mathematics and Society, Sarah J. Greenwald and Jill E. Thomley eds., Salem Press, 2011, 1191 pp., ISBN 9781587658440. $395.
Reviewed by Gizem Karaali
To purchase the article from JSTOR: http://dx.doi.org/10.4169/college.math.j.44.4.332
Discrete Mathematics/Naive set theory

From Wikibooks, open books for an open world

When we talk of set theory, we generally talk about collections of certain mathematical objects. In this sense, a set can be likened to a bag, holding a finite (or conceivably infinite) number of things. Sets can be sets of sets as well (bags with bags in them). However, a set cannot contain duplicates -- a set can contain only one copy of a particular item.

When we look at sets of certain types of numbers, for example the natural numbers or the rational numbers, we may want to speak only of these sets. These collections of numbers are, of course, very important, so we write special symbols to signify them.

We write sets in curly brackets -- { and }. We write all of the elements, or what the set contains, in the brackets, separated by commas. We generally denote sets using capital letters. For example, we write the set containing the number 0 and the number 1 as {0,1}. If we wish to give it a name, we can say B={0,1}.

Special sets

The aforementioned collections of numbers, the naturals, rationals, etc., are notated as follows:

• the natural numbers are written $\mathbb{N}$
• the integers are written $\mathbb{Z}$
• the rational numbers are written $\mathbb{Q}$
• the real numbers are written $\mathbb{R}$

Here we will generally write these in standard face bold instead of the doublestruck bold you see above. So we write here N instead of $\mathbb{N}$ (NB following Wikipedia conventions). We can write some special relations involving sets using some symbols.

Containment relations

To say that an element is in a set, for example that 3 is in the set {1,2,3}, we write:

$3 \in \{1,2,3\}$

We can also express this relationship in another way: we say that 3 is a member of the set {1,2,3}. Also, we can say the set {1,2,3} contains 3, but this usage is not recommended as it is also used to refer to subsets (see following).
We can say that two sets are equal if they contain exactly the same elements. For example, the sets {2,3,1} and {3,1,2} both contain the numbers 1, 2 and 3. We write:

$\{2,3,1\} = \{3,1,2\}$

We write the set with no elements as $\emptyset$, or {}. Here, we use the notation {} for the empty set (NB following Wikipedia conventions).

The concept of the subset

A very important concept in set theory and other mathematical areas is the concept of the subset. Say we have two sets A={0,1,2,3,4,5,6,7,8,9} and B={0,1,2,3,4,5}. Now, B contains some elements of A, but not all. We express this relationship between the sets A and B by saying B is a subset of A. We write this

$B\subseteq A$

If B is a subset of A, but A is not a subset of B, B is said to be a proper subset of A. We write this

$B\subset A$

Note that if $B\subset A$, then $B\subseteq A$.

Intersections and unions

There are two notable and fundamental special operations on sets, the intersection and the union. These are somewhat analogous to multiplication and addition.

The intersection of two sets A and B is the set of elements common to both sets. For example, if A={1,3,5,7,9} and B={0,1,3}, their intersection, written $A\cap B$, is the set {1,3}. If the intersection of two sets is empty, we say these sets are disjoint.

The union of two sets A and B is the set of all elements that are in either set. For example, if A={1,3,5,7,9} and B={0,2,4,6,8}, their union, written $A\cup B$, is the set {0,1,2,3,4,5,6,7,8,9}.

Set comprehensions

When we write a set, we can do so by writing all the elements in that set, as above. However, if we wish to write an infinite set, then writing out the elements can be too unwieldy. We can solve this problem by writing sets in set comprehension notation. We do this by writing these sets including a rule and a relationship to an index set, say I. That is,

$S=\{x \in I \mid rule\}$

where the rule can be something like $x^2$, or $x=3x$.
For example, this set forms the set of all even numbers:

$\{x \in \mathbb{N}\mid x \bmod 2 = 0\}$

This set forms the set of all solutions to the general quadratic:

$\{x \in \mathbb{C}\mid ax^2+bx+c = 0\}$

Universal sets and complements

Universal sets

When we work with sets, it is useful to think of a larger set in which to work. For example, if we are talking about the sets {-1,0,1} and {-3,-1,1,3}, we may want to work in Z in this circumstance. When we talk about working in such a larger set, such as Z in that instance, we say that Z is a universal set, and we take all sets to be subsets of this universal set. We write the universal set as $\mathcal{E}$; however, it may be simpler to denote this as E.

Given a set A in a larger universal set E, we define the complement of A to be all elements of E that are not in A; that is, the complement of A is

$\{x\in \mathcal{E}\mid x \notin A\}$

We write the complement as A' or A^c. In this document we will use A'.

Problem set

Based on the above information, write the answers to the following questions. (Answers follow to even-numbered questions.)

1. Is $3/4\in\mathbb{Q}$?
2. Is $\sqrt{2}\in\mathbb{Q}$?
3. Is $\{x\in\mathbb{N}\mid 2x\}=\{x\in\mathbb{N}\mid {x \over 2}\in \mathbb{N}\}$?
4. True or false? If false, give an example of an element in the first set which is not in the second.
   1. $\mathbb{N} \subset \mathbb{Z}$
   2. $\mathbb{Q} \subset \mathbb{Z}$
5. True or false? If false, give an example of an element in the first set which is not in the second.
   1. $\mathbb{R} \subset \mathbb{Q}$
   2. $\mathbb{Z} \subset \mathbb{R}$
6. Is $\{1,2,3\} \subset \{1,2,3,4\}$?
7. Is $\{1,2,3,5\} \subseteq \{1,2,3,4\}$?
8. Write 5 elements of $\{x\in\mathbb{Z}\mid x-3 \bmod 2 = 0\}$
9. Write the elements of
10. Find a universal set such that these sets are subsets thereof: $\{x \in \mathbb{Z^+}\mid a=x^2 \text{ and } \sqrt{a}\in\mathbb{N}\},\{x \in \mathbb{N}\mid x/3\}$
11. Given $\mathcal{E}=\{0,1,2,3,4,5,6,7,8,9\}$, find A' given $A=\{1,4,7,9\}$

Answers:

2. No, the square root of 2 is irrational, not a rational number.
4.1. Yes
4.2. No
6. Yes.
8. 5 elements could be {3,5,7,9,11}.
10. $\mathcal{E}=\mathbb{Q}$

Further ideas

These mentioned concepts are not the only ones we can give to set theory. Key ideas that are not necessarily given much detail in this elementary course in set theory come up later in abstract algebra and other fields, so it is important to get a grasp on these ideas now. These may be skipped.

Power set

The power set, denoted P(S), is the set of all subsets of S. NB: The empty set is a subset of all sets. For example, P({0,1})={{},{0},{1},{0,1}}.

The cardinality of a set, denoted |S|, is the number of elements the set has. So |{a,b,c,d}|=4, and so on. The cardinality of a set need not be finite: some sets have infinite cardinality.

The cardinality of the power set

If P(S)=T, then |T|=2^|S|.

Problem set

Based on the above information, write the answers to the following questions. (Answers follow to even-numbered questions.)

1. |{1,2,3,4,5,6,7,8,9,0}|
2. |P({1,2,3})|
3. P({0,1,2})
4. P({1})

Answers:

2. 2^3=8
4. {{},{1}}

Set identities

When we spoke of the two fundamental operators on sets before, the union and the intersection, we had a set of rules which we can use to simplify expressions involving sets. For example, how can we simplify $(A\cup B)'\cap B'\cap A$? Several of the following set identities are similar to those in standard mathematics.

This is incomplete and a draft; additional information is to be added.
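The operations in this chapter map directly onto Python's built-in `set` type, which makes it easy to check the claims above experimentally. A short sketch (the universal set and A are taken from problem 11; the simplification of $(A\cup B)'\cap B'\cap A$ to the empty set follows because $(A\cup B)' \subseteq A'$):

```python
from itertools import combinations

E = set(range(10))                 # universal set {0, 1, ..., 9}
A, B = {1, 4, 7, 9}, {0, 1, 3}

print(A | B)                       # union
print(A & B)                       # intersection
print(B <= A)                      # subset test (False: 0 is in B but not A)
Ac = E - A                         # complement of A in E
print(Ac)

# Power set of S has 2**|S| subsets, including the empty set
S = {0, 1, 2}
power = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
assert len(power) == 2 ** len(S)

# (A ∪ B)' ∩ B' ∩ A is the empty set, since (A ∪ B)' is disjoint from A
assert (E - (A | B)) & (E - B) & A == set()
```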
Bounds on a partition theorem with ambivalent colors

I've been running into the following type of partition problem. Given positive integers h, r, k, and a real number ε ∈ (0,1), find n such that if every (unordered) r-tuple from an n-element set X is assigned a set of at least εk 'valid' colors out of a total of k possible colors, then you can find H ⊆ X of size h and a single color which is 'valid' for all r-tuples from H.

Lower bounds on the smallest such n can be obtained from lower bounds for Ramsey's Theorem. If k is sufficiently large, then partition the set of colors into [1/ε] pairwise disjoint sets of approximately equal size to emulate a proper [1/ε]-coloring of r-tuples. A simple pigeonhole argument shows that this is essentially sharp when r = 1 and k is large enough, i.e. one color must be 'valid' for at least nε points.

Is the Ramsey bound more or less sharp for r > 1, or are there better lower bounds? The interesting case is when k is large, since the proposed Ramsey lower bound is (surprisingly?) independent of k.

Tags: co.combinatorics, ramsey-theory

2 Answers

Answer (accepted): I do not think that the lower bound could depend only on epsilon. Below is a sketch of my argument.

Fix h=3, r=2, eps=1/4; thus we color the edges of a graph, each with 25% of all the colors, and we are looking for a "monochromatic" triangle. Let us take k random bipartitions of the vertices and, for each, color the corresponding edges of the bipartite graph with one color. Using Hoeffding's or some similar inequality, we get that for big enough k, with some positive probability, every edge is colored at least k/4 times, provided n is at most exp(ck), where c is some fixed constant. Therefore the bound must depend on k and not only on epsilon.
Dorais♦ Mar 25 '10 at 14:15 After more coffee, I successfully generalized your trick to arbitrary h, r; I will post the details separately. Thank you very much for your answer! – François G. Dorais♦ Mar 25 '10 at 14:49 Thx, you're welcome! – domotorp Mar 25 '10 at 16:25 add comment Here is a generalization of domotorp's answer to arbitrary h > r > 1. Independently for each color i ∈ {1,2,...,k}, pick a random H[i] from a family H of r-hypergraphs that don't contain any complete r-hypergraph of size h. Declare color i to be 'valid' for the r-tuple t = {t[1],...,t[r]} iff t ∈ H[i]. Let Y[t] be the number of 'valid' colors for t. Note that Y[t] is binomial with parameters (k, p) for some 0 < p ≤ 1/2 which is independent of k and also independent of t when H is closed under isomorphism. Hoeffding's Inequality then gives up vote 3 Prob[Y[t] ≤ εk] ≤ exp(-2k(p-ε)^2) down vote for 0 < ε < p. So the probability that Y[t] ≥ εk for all i is positive whenever n ≤ exp(2k(p-ε)^2/r) (not optimal). This is not enough since p implicitly depends on n. However, for fixed h > r > 1, p can be bounded away from 0. This can be seen by using for H the family of r-partite hypergraphs as domotorp did, but different choices of H give better bounds. This answer is community wiki because domtorp deserves all the credit. – François G. Dorais♦ Mar 25 '10 at 16:07 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics ramsey-theory or ask your own question.
• Erdős, P., I. Joó, L. A. Székely (1987) Remarks on infinite series, Studia Sci. Math. Hung., 22: 395-400
• Hyunju Kim, Z. Toroczkai, I. Miklós, P. L. Erdős, L. A. Székely (2009) Degree-based graph construction, J. Phys. A., to appear.

(And please also note that my Erdős number of the second kind is one... :) )

On the other hand, I do not have a joint publication with Paul Erdős, so my Erdős number is greater than 1; since L. A. Székely, a coauthor of Paul Erdős, is among my coauthors, it is exactly 2.
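The Erdős number is simply graph distance from Paul Erdős in the coauthorship graph, so it can be computed with a breadth-first search. The sketch below uses a toy graph built only from the two papers listed above (it is not a complete coauthorship record):

```python
from collections import deque

# Toy coauthorship graph: paper 1 links Erdős, Joó, Székely pairwise;
# paper 2 links Székely and Miklós (other coauthors omitted for brevity).
coauthors = {
    "Erdős": {"Joó", "Székely"},
    "Joó": {"Erdős", "Székely"},
    "Székely": {"Erdős", "Joó", "Miklós"},
    "Miklós": {"Székely"},
}

def erdos_number(author):
    # BFS from Erdős gives the shortest coauthorship distance to each author.
    dist = {"Erdős": 0}
    q = deque(["Erdős"])
    while q:
        u = q.popleft()
        for v in coauthors.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.get(author)

print(erdos_number("Miklós"))  # no joint paper with Erdős, but a distance-2 path
```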
DRI Dietary Reference Intakes: Applications in Dietary Assessment

C  Assessing Prevalence of Inadequate Intakes for Groups: Statistical Foundations

This appendix provides the formal statistical justification for the methods for assessing the prevalence of inadequate intakes that were described in Chapter 4. Additional details can be found in Carriquiry (1999).

Let Y_ij denote the observed intake of a dietary component on the jth day for the ith individual in the sample, and define y_i = E{Y_ij | i} to be that individual's usual intake of the component. Further, let r_i denote the requirement of the dietary component for the ith individual. Conceptually, because day-to-day variability in requirements is typically present, r_i is defined as r_i = E{R_ij | i}, where, as in the case of intakes, R_ij denotes the (often unobserved) daily requirement of the dietary component for the ith individual on the jth day. In the remainder of this appendix, usual intakes and usual requirements are simply referred to as intakes and requirements, respectively.

The problem of interest is assessing the proportion of individuals in the group with inadequate intake of the dietary component. The term inadequate means that the individual's usual intake does not meet that individual's requirement.

THE JOINT DISTRIBUTION OF INTAKE AND REQUIREMENT

Let F_{Y,R}(y, r) denote the joint distribution of intakes and requirements, and let f_{Y,R}(y, r) be the corresponding density. If f_{Y,R}(y, r) (or a reliable density estimate) is available, then
THE JOINT DISTRIBUTION OF INTAKE AND REQUIREMENT

Let F_{Y,R}(y, r) denote the joint distribution of intakes and requirements, and let f_{Y,R}(y, r) be the corresponding density. If f_{Y,R}(y, r) (or a reliable density estimate) is available, then the prevalence of inadequacy is

prevalence = Pr{y < r} = ∫∫_{y < r} f_{Y,R}(y, r) dy dr   (1)

For a given estimate of the joint distribution f_{Y,R}, obtaining equation 1 is trivial. The problem is not the actual probability calculation but rather the estimation of the joint distribution of intakes and requirements in the population. To reduce the data burden for estimating f_{Y,R}, approaches such as the probability approach proposed by the National Research Council (NRC, 1986) and the Estimated Average Requirement (EAR) cut-point method proposed by Beaton (1994) make an implicit assumption that intakes and requirements are independent random variables, that is, that what an individual consumes of a nutrient is not correlated with that individual's requirement for the nutrient. If the assumption of independence holds, then the joint distribution of intakes and requirements can be factorized into the product of the two marginal densities as follows:

f_{Y,R}(y, r) = f_Y(y) f_R(r)   (2)

where f_Y(y) and f_R(r) are the marginal densities of usual intakes of the nutrient and of requirements, respectively, in the population of interest. Note that under the formulation in equation 2, the problem of assessing prevalence of nutrient inadequacy becomes tractable. Indeed, methods for reliable estimation of f_Y(y) have been proposed (e.g., Guenther et al., 1997; Nusser et al., 1996) and data are abundant. Estimating f_R(r) is still problematic because requirement data are scarce for most nutrients, but the mean (or perhaps the median) and the variance of f_R(r) can often be computed with some degree of reliability (Beaton, 1999; Beaton and Chery, 1988; Dewey et al., 1996; FAO/WHO, 1988; FAO/WHO/UNU, 1985).
Approaches for combining f_R(r) and f_Y(y) for prevalence assessments that require different amounts of information (and assumptions) about the unknown requirement density f_R(r) and the joint distribution F_{Y,R}(y, r) are discussed next.

THE PROBABILITY APPROACH

The probability approach to estimating the prevalence of nutrient inadequacy was proposed by the National Research Council (NRC, 1986). The idea is simple. For a given distribution of requirements in the population, the first step is to compute a risk curve that associates intake levels with risk levels under the assumed requirement distribution. Formally, the risk curve[1] is obtained from the cumulative distribution function (cdf) of requirements. If we let F_R(.) denote the cdf of the requirements of a dietary component in the population, then F_R(a) = Pr(requirements ≤ a) for any positive value a. Thus, the cdf F_R takes on values between 0 and 1. The risk curve ρ(.) is defined as

ρ(a) = 1 − F_R(a) = 1 − Pr(requirements ≤ a)

A simulated example of a risk curve is given in Figure 4-3. This risk curve is easy to read: on the x-axis the values correspond to intake levels, and on the y-axis the values correspond to the risk of nutrient inadequacy given a certain intake level. Rougher assessments are also possible; for a given range of intake values, the associated risk can be estimated as the risk value that corresponds to the midpoint of the range. Given an assumed requirement distribution and a usual intake distribution estimated from dietary survey data, how should the risk curve and the intake distribution be combined? It seems intuitively appealing to argue as follows. Consider again the simulated risk curve in Figure 4-3 and suppose the usual intake distribution for this simulated nutrient in a population has been estimated.
If that estimated usual intake distribution places a very high probability on intake values less than 90, then one would conclude that most individuals in the group are likely to have inadequate intake of the nutrient. If, on the other hand, the usual nutrient intake distribution places a very high probability on intakes above 90, then one would be confident that only a small fraction of the population is likely to have inadequate intake. Illustrations of these two extreme cases are given in Figure 4-4 and Figure 4-5. In general, one would expect that the usual intake distribution and the risk curve for a nutrient show some overlap, as in Figure 4-6. In this case, estimating the proportion of individuals likely to have inadequate intakes is equivalent to computing a weighted average of risk, as explained below. The quantity of interest is not the risk associated with a certain intake level but rather the expected risk of inadequacy in the population. This expectation is based on the usual intake distribution for the nutrient in the population. In other words, prevalence of nutrient inadequacy is defined as the expected risk under the distribution of intakes in the population.

[1] When the distribution of requirements is approximately normal, the cdf can be easily evaluated in the usual way for any intake level a. Let z represent the standardized intake, computed as z = (a − mean requirement) / SD, where SD denotes the standard deviation of requirement. Values of F_R(z) can be found in most statistical textbooks or, more practically, are given by most, if not all, statistical software packages. For example, in SAS, the function probnorm(b) evaluates the standard normal cdf at a value b. Thus, "drawing the risk curve" is a conceptualization rather than a practical necessity.
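The footnote's normal-cdf recipe is one line of code. A minimal sketch in Python; the requirement mean and SD below are illustrative values chosen for this example, not figures from the report:

```python
from statistics import NormalDist

def risk(intake, mean_req, sd_req):
    """Risk curve under a normal requirement distribution: rho(a) = 1 - F_R(a)."""
    return 1.0 - NormalDist(mean_req, sd_req).cdf(intake)

# Illustrative requirement distribution: mean 1200, SD 200.
print(risk(1200, 1200, 200))   # 0.5: at the mean requirement, risk is one half
print(risk(1600, 1200, 200))   # small: intake two SDs above the mean requirement
```

The risk curve decreases from near 1 at very low intakes toward 0 at high intakes, exactly as Figure 4-3 is described.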
To derive the estimate of prevalence, we first define p(y) as the probability, under the usual intake distribution, associated with each intake level y, and ρ(y) as the risk calculated from the requirement distribution. The calculation of prevalence is then simply

prevalence = Σ_y ρ(y) p(y)   (3)

where, in practice, the sum is carried out only up to intake levels where the risk of inadequacy becomes about zero. Notice that equation 3 is simply a weighted average of risk values, where the weights are given by the probabilities of observing the intakes associated with those risks. Formally, the expected risk is given by

E_F[ρ(y)] = ∫ ρ(y) f(y) dy

where ρ(y) denotes the risk value for an intake level y, F is the usual intake distribution, and f(y) is the value of the usual intake density at intake level y. When the NRC proposed the probability approach in 1986, statistical software and personal computers were not as commonplace as they are today. The NRC included a program in the report that could be used to estimate the prevalence of nutrient inadequacy using the probability approach. As an illustration, the NRC also mentioned a simple computational method: rather than adding up many products ρ(y)p(y) associated with different values of intakes, intakes are grouped by constructing m bins. The estimated probabilities associated with each bin are simply the frequencies of intakes in the population that "fall into" each bin. (These frequencies are determined by the usual intake distribution in the population.) The average risk associated with intakes in a bin is approximated as the risk associated with the midpoint of the bin. An example of this computation is given on page 28, Table 5-1, of the NRC report (1986). Currently, implementation of the probability approach can be carried out with standard software (such as BMDP, SAS, Splus, SPSS, etc.).
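The binned shortcut described above can be sketched directly. Everything below is illustrative rather than the NRC's own program: a hypothetical normal requirement distribution and simulated normal intakes, with prevalence computed as the frequency-weighted average of the risk at each bin midpoint:

```python
import random
from statistics import NormalDist

def prevalence_probability_approach(intakes, mean_req, sd_req, m=20):
    """Binned probability approach: frequency-weighted average of bin-midpoint risk."""
    risk = lambda a: 1.0 - NormalDist(mean_req, sd_req).cdf(a)
    lo, hi = min(intakes), max(intakes)
    width = (hi - lo) / m
    prevalence = 0.0
    for k in range(m):
        left = lo + k * width
        if k == m - 1:
            count = sum(left <= y for y in intakes)        # last bin keeps the max
        else:
            count = sum(left <= y < left + width for y in intakes)
        prevalence += risk(left + width / 2) * count / len(intakes)
    return prevalence

random.seed(0)
intakes = [random.gauss(1600, 400) for _ in range(5000)]   # illustrative intakes
print(round(prevalence_probability_approach(intakes, 1200, 200), 2))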
In general, researchers assume that requirement distributions are normal, with mean and variance as estimated from experimental data. Even under normality, however, an error in the estimation of either the mean or the variance (or both) of the requirement distribution may lead to biased prevalence estimates. NRC (1986) provides various examples of the effect of changing the mean and the variance of the requirement distribution on prevalence estimates. Although the probability approach was highly sensitive to specification of the mean requirement, it appeared to be relatively insensitive to other parameters of the distribution as long as the final distribution approximated symmetry. Thus, although the shape of the requirement distribution is clearly an important component when using the probability approach to estimate the prevalence of nutrient inadequacy, the method appears to be robust to errors in shape specifications. The NRC report discusses the effect of incorrectly specifying the form of the requirement distribution on the performance of the probability approach to assess prevalence (see pages 32–33 of the 1986 NRC report), but more research is needed in this area, particularly on nonsymmetrical distributions. Statistical theory dictates that the use of an incorrect probability model is likely to result in an inaccurate estimate of prevalence except in special cases. The pioneering efforts of the 1986 NRC committee need to be continued to assess the extent to which an incorrect model specification may affect the properties of prevalence estimates.

THE EAR CUT-POINT METHOD

The probability approach described in the previous section is simple to apply and provides unbiased and consistent estimates of the prevalence of nutrient inadequacy under relatively mild conditions (i.e., intake and requirement are independent, and the distribution of requirement is known).
In fact, if intakes and requirements are independent and if the distributions of intakes and requirements are known, the probability approach results in optimal (in the sense of mean squared error) estimates of the prevalence of nutrient inadequacy in a group. However, application of the probability approach requires the user to choose a probability model (a probability distribution) for requirements in the group. Estimating a density is a challenging problem in the best of cases; when data are scarce, it may be difficult to decide, for example, whether a normal model or a t model is a more appropriate representation of the distribution of requirements in the group. The difference between these two probability models lies in the tails of the distribution; both models may be centered at the same median and both reflect symmetry around the median, but in the case of a t with few degrees of freedom the tails are heavier, and thus one would expect to see more extreme values under the t model than under the normal model. Would using the normal model to construct the risk curve affect the estimated prevalence of inadequacy when requirements are really distributed as t random variables? This is a difficult question to answer. When it is not clear whether a certain probability model best represents the requirements in the population, a good alternative might be to use a method that is less parametric, that is, one that places milder assumptions on the probability model itself. The Estimated Average Requirement (EAR) cut-point method, a less parametric version of the probability approach, may sometimes provide a simple, effective way to estimate the prevalence of nutrient inadequacy in the group even when the underlying probability model is difficult to determine precisely.
The only feature of the shape of the underlying model that is required for good performance of the cut-point method is symmetry; in the example above, both the normal and the t models would satisfy the less demanding symmetry requirement, and therefore choosing between one and the other becomes an unnecessary step. The cut-point method is very simple: estimate the prevalence of inadequate intakes as the proportion of the population with usual intakes below the median requirement (EAR). To understand how the cut-point method works, the reader is referred to Chapter 4, where the joint distribution of intakes and requirements is defined. Figure 4-8 shows a simulated joint distribution of intakes and requirements. To generate the joint distribution, usual intakes and requirements for 3,000 individuals were simulated from a χ2 distribution with 7 degrees of freedom and a normal distribution, respectively. Intakes and requirements were generated as independent random variables. The usual intake distribution was rescaled to have a mean of 1,600 and a standard deviation of 400. The normal distribution used to represent requirements had a mean of 1,200 and a standard deviation of 200. Note that intakes and requirements are uncorrelated (and in this example, independent) and that the usual intake distribution is skewed. An individual whose intake is below the mean requirement does not necessarily have an inadequate intake. Because inferences are based on the joint rather than the univariate distributions, an individual consuming a nutrient at a level below the mean of the population requirement may be satisfying the individual's own requirements. That is the case for all the individuals represented in Figure 4-8 by points that appear below the 45° line and to the left of the vertical EAR reference line, in triangular area B.
To estimate prevalence, proceed as in equation 1, or equivalently, count the points that appear above the 45° line (the shaded area), because for them y < r. This is not a practical method because typically the information needed for estimating the joint distribution is not available. Can this proportion be approximated in some other way? The probability approach in the previous section is one such approximation. The EAR cut-point method is a shortcut to the probability approach and provides another approximation to the true prevalence of inadequacy. When certain assumptions hold, the number of individuals with intakes to the left of the vertical intake = EAR line is more or less the same as the number of individuals above the 45° line. That is,

Pr{y ≤ r} ≈ F_Y(a)

where F_Y(a) = Pr{y ≤ a} is the cdf of intakes evaluated at a, for a = EAR. In fact, it is easy to show that when E(r) = E(y):

Pr{y ≤ r} = F_Y(EAR)

The prevalence of inadequate intakes can thus be assessed as long as one has an estimate of the usual nutrient intake distribution (which is almost always available) and of the median requirement in the population, or EAR, which can be obtained reliably from relatively small experiments. The quantile F_Y(EAR) is an approximately unbiased estimator of Pr{y ≤ r} if:

1. f_{Y,R}(y, r) = f_Y(y) f_R(r), that is, intakes and requirements are independent random variables;
2. Pr{r ≤ μ_r − α} = Pr{r ≥ μ_r + α} for any α > 0, that is, the distribution of requirements is symmetrical around its mean μ_r; and
3. σ²_Y > σ²_R, where σ²_R and σ²_Y denote the variance of the distribution of requirements and of intakes, respectively.

When any of the conditions above is not satisfied, F_Y(EAR) ≠ Pr{y ≤ r}, in general. Whether F_Y(EAR) is biased upward or downward depends on factors such as the relative sizes of the mean intake and the EAR.
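The simulated example in the text (χ² intakes rescaled to mean 1,600 and SD 400; independent normal requirements with mean 1,200 and SD 200) is easy to replay, which makes it possible to compare the true prevalence Pr{y < r} with the cut-point estimate F_Y(EAR). The sample size and random seed below are my own choices, not the report's:

```python
import random

random.seed(42)
N = 100_000

# Usual intakes: chi-squared with 7 df (sum of 7 squared standard normals),
# rescaled to mean 1,600 and standard deviation 400, as in the text.
def chi2_7():
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(7))

mean7, sd7 = 7.0, 14.0 ** 0.5        # chi-squared(7): mean 7, variance 14
intakes = [1600 + 400 * (chi2_7() - mean7) / sd7 for _ in range(N)]

# Requirements: independent normal with mean 1,200 (the EAR) and SD 200.
EAR = 1200.0
reqs = [random.gauss(EAR, 200.0) for _ in range(N)]

true_prev = sum(y < r for y, r in zip(intakes, reqs)) / N   # Pr{y < r}, eq. 1
cut_point = sum(y < EAR for y in intakes) / N               # F_Y(EAR)
print(round(true_prev, 2), round(cut_point, 2))
```

Both figures land in the same neighborhood, illustrating the approximation; they do not coincide exactly here because the intake distribution is skewed.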
The Non-Pythagorean Music Scale
By Robert Schneider

A Non-Pythagorean Musical Scale (Including A Twelve Tone Octave Which May Be Associated With Traditional Musical Notation And With The Keys Of A Re-Tuned Piano)

A new musical scale is created by letting successive pitches have the ratios to one another of the natural logarithms of successive whole numbers. Most of these pitches bear no correspondence to the twelve tones generated by the circle of fifths, attributed to Pythagoras, or to the twelve tones of equal temperament tuning. Successive notes in this scale grow closer and closer together, and the number of discrete tones in each octave increases nearly exponentially with each successive octave. Listening to lower octaves, the ear strains to bend the tones to traditional pitches lying near them. However, most of the intervals of this scale are irrational and do not correspond to any of the traditional twelve tones. As successive tones grow closer together they become almost indistinguishable, and the resemblance to a traditional scale disappears. To begin the scale on middle C we define a fundamental pitch K = C / (2 ln 2) = 264 Hz / 1.3863 = 190.44 Hz, where 264 Hz is a typical frequency for middle C and 1.3863 = 2 ln 2 (the notation ln M refers to the natural logarithm of M). Of course, any frequency may be substituted for C = 264 Hz in the equation for K to establish the scale in another key. Then the Mth tone in the scale is the pitch K ln M. For M = 1 the first tone is silence, as ln 1 = 0. The second tone is middle C, the fourth tone is C one octave above middle C, and the sixteenth tone is two octaves above middle C. The eighth tone is G when the scale begins on C. The rest of the intervals M < 16 are not rational tones: their ratios are irrational numbers, and they do not fall into the traditional scale or correspond to any pitches on a piano keyboard.
Successive tones grow closer together as M increases. The C an octave above M = 16 is M = 64, so there are 48 tones in that octave. The next C is M = 256, so there are 192 tones in that octave. The next octave of C is M = 1024, and so on. The construction of a piano with such a great number of keys in the higher octaves seems impossible. It is likely that the electronic medium is better suited to composing with this scale. To create a 12-note scale suitable for playing upon a re-tuned traditional piano keyboard, and suitable for composing using traditional notation, we associate Tone 1 = K ln 4 with middle C on the keyboard. The equation for the Nth tone in the octave above middle C is Tone N = K ln (3 + N), for N = 1 through 13. We then associate the successive tones with successive piano keys in the octave above middle C, up to Tone 12 = K ln 15. Tone 13 falls on C one octave above Tone 1 (again, K = 264 / (2 ln 2) Hz = 190.44 Hz, so that Tone 1 = 264 Hz, or middle C). Higher and lower octaves of these pitches may be associated with the other octaves of the keyboard by multiplying and dividing the pitches by powers of 2.

Tone 1 = K ln 4 = 264 Hz (middle C)
Tone 2 = K ln 5 = 306.24 Hz
Tone 3 = K ln 6 = 340.56 Hz
Tone 4 = K ln 7 = 369.6 Hz
Tone 5 = K ln 8 = 396 Hz
Tone 6 = K ln 9 = 417.12 Hz
Tone 7 = K ln 10 = 438.24 Hz
Tone 8 = K ln 11 = 456.72 Hz
Tone 9 = K ln 12 = 472.56 Hz
Tone 10 = K ln 13 = 488.4 Hz
Tone 11 = K ln 14 = 501.6 Hz
Tone 12 = K ln 15 = 514.8 Hz
Tone 13 = K ln 16 = 528 Hz (C one octave above middle C)

It would be desirable to play a piano tuned to these pitches, and to hear compositions based on this sequence of tones. The melodies and chords possible in this twelve-tone scale have a completely different nature from those produced by the traditional scale, as logarithms add according to a different algebra from whole and rational numbers, allowing for novel avenues of musical expression.
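The tone table can be regenerated from the two defining equations, K = 264 / (2 ln 2) and Tone N = K ln(3 + N). A quick sketch; exact recomputation gives values within about 1 Hz of some of the rounded figures printed above, while the octave endpoints come out exact:

```python
import math

K = 264 / (2 * math.log(2))      # fundamental pitch, about 190.44 Hz

def tone(n):
    """Nth tone of the 12-note octave above middle C: K * ln(3 + N)."""
    return K * math.log(3 + n)

for n in range(1, 14):
    print(f"Tone {n:2d} = K ln {3 + n:2d} = {tone(n):7.2f} Hz")
```

Tone 1 is exactly 264 Hz, Tone 5 is exactly 396 Hz (ln 8 = 3 ln 2 cancels against the 2 ln 2 in K), and Tone 13 is exactly 528 Hz, and the gaps between successive tones shrink as N grows.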
Non-Pythagorean chords have distinct "textures" resulting from beats produced by the superposition of frequencies in logarithmic intervals, which differ in feeling from chords in the traditional scale. There is an alien beauty to the non-Pythagorean musical scale when the listener becomes accustomed to the strange intervals.
Proof for Poincaré? May 2002 Martin Dunwoody, a Professor at the University of Southampton, may have cracked one of the most difficult unsolved problems of mathematics. The problem is known as the Poincaré Conjecture, after the French scientist Henri Poincaré, who formulated the problem in 1904. Poincaré was working in topology - the study of those properties of mathematical objects that remain unaffected by smooth deformations - when he conjectured that a certain property of the 2-dimensional surface of a sphere also held for higher-dimensional analogues. Though relatively simple to state, the conjecture has proved extraordinarily hard to prove (or disprove!) - the 1982 proof for 4-dimensional "surfaces" won its author the Fields medal - and the remaining 3-dimensional case has resisted all attack for nearly 100 years. Dunwoody's approach to publicising his work has caused some controversy. He is refusing all interviews, and publishing his proof as a work in progress on the University of Southampton's website (you can check on his progress by going to his department's preprint site). The current posting - version number 8 - is described as an "outline of an attack on the Poincaré conjecture", and thanks five different people for pointing out problems in earlier versions. Dunwoody says at the beginning that there is one problem in particular that he does not see how to rectify, but that he hopes a certain approach will work. Topologically speaking, a coffee cup and a doughnut are the same So just what is the conjecture? Well, the 2-dimensional surface of a 3-dimensional sphere - the familiar surface of a ball - is, in topological terms, completely defined by the fact that it is simply connected. In other words, any 2-dimensional surface that is simply connected could be turned into the surface of a sphere just by stretching and distorting - no cutting or tearing would be needed. 
A surface is said to be simply connected if any closed loop on it can be slid off without cutting. You can see that the surface of a ball is simply connected by imagining a rubber band stretched round it - it can be slipped off smoothly. By contrast, a doughnut is not simply connected - if you cut a rubber band and refastened it around the doughnut as shown above, no amount of sliding would allow you to slip the rubber band off. In essence, Poincaré conjectured that an analogous result held in higher dimensions. Only the 3-dimensional case ("surfaces" of 4-dimensional objects) is still unknown, and this is what Dunwoody is working on. A doughnut is not the same as a ball Even if Dunwoody's proof ultimately turns out to be flawed, he will be in august company. Poincaré himself thought he had a proof, and realising his error led him to some important new mathematics. In fact, the history of all the great unsolved problems, and not just the Poincaré Conjecture, is littered with flawed proofs, many of which contain more fine mathematics than correct but pedestrian proofs of less important results. More than just fame depends on Dunwoody's success - there is also a fortune waiting. The Clay Institute, an organisation dedicated to increasing and disseminating mathematical knowledge, based in Cambridge, Massachusetts, has put up seven $1,000,000 prizes for answers to some of the most important outstanding problems in mathematics - and the Poincaré Conjecture is one of them. Dunwoody must be wishing he could phone a friend or ask the audience...
Dimension of fractal

For a Cantor middle-half set K, I think it is one-dimensional because it lies on the real line, but the book says that it is 1/2-dimensional. Consider KxK: I think it is 2-dimensional because it is a square (containing many small squares) on a plane, but it is a 1-dimensional set. Why?

As a matter of fact, "dimension" is a precise (and delicate) notion that has a mathematical definition (in fact, several). Maybe you know what the dimension of a vector space is (which is an integer). For Cantor subsets, one obviously needs a different definition, one that would account for their different self-similarity. Maybe your book says how it defines dimension. Anyway, you can find lots of resources on the internet: look for "fractal dimension". In the case of your subset, you can cut it into 2 parts that look like the initial subset up to a scale $\frac{1}{4}$. Similarly, if we cut a line segment (d = 1) in 4 we get parts that look like the initial segment to a scale $\frac{1}{4}$; if we cut a square (d = 2) in 16 we get smaller squares like the first one to scale $\frac{1}{4}$; if we cut a cube (d = 3) in $4^3$ we get smaller cubes like the first one to scale $\frac{1}{4}$; and in dimension $d$, we can cut a cube into $4^{d}$ smaller cubes that look like the initial cube up to a scale $\frac{1}{4}$. So for the middle-half Cantor subset, you would have $2=4^d$, hence $d=1/2$. This is not a proof, since I gave no definition of dimension, but it gives a reason why it should be $1/2$.

Last edited by Laurent; April 5th 2010 at 10:04 AM.
{"url":"http://mathhelpforum.com/advanced-math-topics/137359-dimension-fractal.html","timestamp":"2014-04-18T21:33:40Z","content_type":null,"content_length":"37496","record_id":"<urn:uuid:cc038ae0-c03d-4284-a89c-f5bc709119d7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Factorising Quadratic Equations

Hello, trying to factorise $x^2-5x-6$. So far I've got: AC = -6, so x^2+x-6x-6 = x(x+1)-6(x+1). Now getting to that point all makes sense to me, however in my textbook it says 'x+1 is a factor of both terms, so take that outside the bracket', leaving the answer as (x+1)(x-6). I don't understand how they got from x(x+1)-6(x+1) to (x+1)(x-6). Any help would be appreciated, thanks.

Can you factorise xA - 6A?

Since both terms are multiplied by (x+1), you can just take (x+1) outside the brackets to give $x^2-5x-6=(x-6)(x+1)$.

Yeah, A(x-6).

Yeah, that's what it says in the book. Maybe I'm overcomplicating it, but I don't understand what you do when you take it out of the brackets.

Aha. I see, great. Cheers.
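The factoring-by-grouping step in this thread can also be checked numerically: if the original quadratic and the factored form agree at more than two points, they are the same polynomial. A quick sketch:

```python
def original(x):
    return x * x - 5 * x - 6

def grouped(x):
    # x^2 + x - 6x - 6 = x(x + 1) - 6(x + 1) = (x + 1)(x - 6)
    return (x + 1) * (x - 6)

print(all(original(x) == grouped(x) for x in range(-10, 11)))   # True
print(original(-1), original(6))                                # 0 0: the roots
```

Pulling (x + 1) outside the brackets is just the distributive law run in reverse: xA - 6A = A(x - 6) with A = (x + 1).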
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula She says that after the tutoring she wants to watch Good Will Hunting with me in her room, and that it's her favourite movie... ...to be honest I'm surprised she still wants to meet. She basically revealed to me every intricate detail of her past physically intimate encounters. Supposedly she's never told anyone this stuff

Re: Linear Interpolation FP1 Formula How did that make you feel? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula Weird, perhaps also a little guilty and more cautious. Her boyfriend is in Cyprus so no chance of running into him, and she's going to break up with him anyway.

Re: Linear Interpolation FP1 Formula So she says! How accurate is anything she is saying?

Re: Linear Interpolation FP1 Formula I don't know; I don't believe she will break up with him, but I have learned a bit about her sexuality and it's unusual for her to be confiding so much in me despite not knowing me too well. She sure is a mixed bag.

Re: Linear Interpolation FP1 Formula You could be heading right into a buzzsaw. A really big one.

Re: Linear Interpolation FP1 Formula What do you mean?

Re: Linear Interpolation FP1 Formula Can we presume anything she is saying is true? Where was this bf and gf a couple of days ago? No mention of them.
Closer she gets to the date, the weirder her revelations are becoming. So now she is supposedly dropping the bf who is in Cyprus? Hmmmm, where is the gf? Is there an extraterrestrial in her past too?

Re: Linear Interpolation FP1 Formula No, we can't... I've only met her once and she just looked like an ordinary shy girl. I still haven't made up my mind about whether to meet her or not. What on earth could she be planning? The girlfriend is supposedly one she had a long time ago. There is definitely something fishy going on here; if she is different, why is she into me?

Re: Linear Interpolation FP1 Formula Has she given you her address?

Re: Linear Interpolation FP1 Formula No, she's given me her road and says she'll meet me at the bus stop.

Re: Linear Interpolation FP1 Formula Hmmmm, supposing she does not show?

Re: Linear Interpolation FP1 Formula Then I forget about her and move on. Or maybe I'll arrange to meet again but not show up myself.

Re: Linear Interpolation FP1 Formula This keeps getting weirder. Apparently she has therapy twice a week.

Re: Linear Interpolation FP1 Formula And what is this comment, "we are only going to break up?" Didn't even start yet and she has got a breakup planned? Do you know what she meant?
Re: Linear Interpolation FP1 Formula I don't know what she meant, but she said she'll meet up with him during the half-term (mid-Feb) and have a long talk with him. So she might not end up breaking up with him at all. But she keeps telling me she's different and that her whole school, including her parents, know.

Re: Linear Interpolation FP1 Formula Hmmmm, she is different, I will have to agree with that. What kind of bf lives thousands of miles away? Is Hannah the significant other?

Re: Linear Interpolation FP1 Formula I have been wondering that. They are best friends or something. At this rate she'll probably tell me her name. She told me that she 'performs' for him on Skype weekly. Scary thing is she is still going at full speed and it's 1 AM.

Re: Linear Interpolation FP1 Formula Yikes! Have you asked her some questions of a personal nature that concern you?

Re: Linear Interpolation FP1 Formula What do you mean? What sorts of questions?

Re: Linear Interpolation FP1 Formula She has gone to bed now. What a crazy day. She even told me she is sleeping without clothes tonight... why would she tell me that? I am baffled.

Re: Linear Interpolation FP1 Formula Okay, I am going to go eat. You will have to just wait and see if you decide to go. That will answer all questions. Talk to you tomorrow, see ya.
I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Okay, see you later. Who knows what awaits... this almost reminds me of H except without the awkwardness. Re: Linear Interpolation FP1 Formula Give it time, the best might be yet to come. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Or the worst...
Sorting Algorithm Help
10-30-2007 #1
Registered User
Join Date Apr 2007

I'm writing my own sorting algorithm and needed some help. I want to write it in C++ but I'm not too sure how. Here is an example of how I want it to work.
Step 1: 56712983 -> comparisons are made between the first element and the last, then the 2nd element and the 2nd-to-last, and so on.
Step 2: 36712985 -> now the elements are divided in two and both sides are compared like in step 1: 3671 | 2985
Step 3: 1673 | 2895 -> now I want to divide both sides in two again and sort: 16 | 73 | 28 | 95
Step 4: 16372859 -> now the elements are back together and I want to do the first step again.
Step 5: 15327869 -> now I finish off the sorting process with a bubble sort.
*note: If the array has an odd number of elements I want the middle element to drop down. Also in step 3 I want to divide down to 2 or 3 elements depending on whether it's odd or even.
I know this sorting algorithm isn't really that great, but I wanted to try and write my own, so if I could get any ideas that'd be cool... if not, that's cool too! haha. Below is my bubble sort, which I wrote in C++ and which is my last step... but I'm still not sure how to write the code to compare the first and last element as I showed in my steps and then divide.
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

const int size = 1000;   // this is the array size
int sortarray[size];     // this is the array which I'll use later
int key = 0;

int main() {
    srand(time(0));                          // seed the random number generator
    for (int i = 0; i < size; i++) {         // fill the array
        sortarray[i] = (rand() % 1000) + 1;  // numbers between 1 and 1000 (% returns the remainder)
        cout << sortarray[i] << ", ";
    }
    for (int i = 0; i < size; i++) {         // starts the bubble sort
        for (int j = size - 1; j > i; j--) { // starts the comparisons at the end of the array
            if (sortarray[j] < sortarray[j - 1]) { // checks the current element against the previous one
                key = sortarray[j];                // swaps the elements if the condition is met
                sortarray[j] = sortarray[j - 1];
                sortarray[j - 1] = key;
            }
        }
    }
    cout << endl;                            // formats the output
    for (int i = 0; i < size; i++) {         // prints the sorted array
        cout << sortarray[i] << ", ";
    }
    return 0;
}

The first step should be quite easy:

for (int i = 0, j = size - 1; i < j; ++i, --j)

To do the same thing on both halves, make the whole function recursive. After you are done with step 1, make two recursive calls (for example by passing a range as start and end pointer):

my_sort(array, array + middle);
my_sort(array + middle, array + end);

So altogether the whole thing might look like this:

void special_sort(int* start, int* end);
void presort(int* start, int* end);
void bubble_sort(int* start, int* end);

void presort(int* start, int* end)
{
    // do the first step and call recursively:
    presort(start, middle);
    presort(middle, end);
}

void special_sort(int* start, int* end)
{
    presort(start, end);
    bubble_sort(start, end);
}

However, I doubt this algorithm is very practical. If you want to explore more practical sorting algorithms, look into selection sort or insertion sort (one of the fastest O(n*n) sorts and very useful for keeping incoming data sorted). I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away. Quoted more than 1000 times (I hope).
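Putting that skeleton together, a minimal compilable version might look like the following. This is only a sketch of the scheme described in the thread, not the original poster's final code; the helper names `end_to_end`, `presort`, `bubble_sort` and `special_sort` follow the hypothetical skeleton above:

```cpp
#include <cassert>
#include <cstddef>

// End-to-end pass: compare first with last, second with second-to-last, ...
// and swap whenever the pair is out of order (step 1 of the scheme).
static void end_to_end(int* start, int* end) {
    int* i = start;
    int* j = end - 1;          // 'end' is one past the last element
    while (i < j) {
        if (*i > *j) { int t = *i; *i = *j; *j = t; }
        ++i;
        --j;
    }
}

// Recursive pre-sorting: do the end-to-end pass, then recurse on both halves
// until the pieces are down to 2 or 3 elements (steps 1-3 of the scheme).
static void presort(int* start, int* end) {
    std::ptrdiff_t len = end - start;
    if (len < 2) return;
    end_to_end(start, end);
    if (len <= 3) return;      // small pieces are left to the final bubble sort
    int* middle = start + len / 2;
    presort(start, middle);
    presort(middle, end);
}

// Plain bubble sort (the poster's final step), guaranteeing a sorted result.
static void bubble_sort(int* start, int* end) {
    for (int* i = start; i < end; ++i)
        for (int* j = end - 1; j > i; --j)
            if (*j < *(j - 1)) { int t = *j; *j = *(j - 1); *(j - 1) = t; }
}

// The whole "special sort": presort passes followed by a bubble sort.
void special_sort(int* start, int* end) {
    presort(start, end);
    end_to_end(start, end);    // step 4: one more full end-to-end pass
    bubble_sort(start, end);   // step 5
}
```

Note that correctness rests entirely on the final bubble sort; the presort passes only move elements roughly toward the correct side, as in the worked example 56712983 above.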
I have implemented that algorithm, but instead of 'Bubble Sort' to finish it off, it repeats the same thing again. I called it 'Optimistic Sort' and it is O(n*logn*logn). You can find it from the link in my sig.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

Wow, thanks for your help guys, that's awesome. Now I just have to figure out how to finish it off with the bubble sort without my code being too messy!

Ok, so here's the code for my sorting algorithm... I still have problems. I can divide it once and sort the halves, but I also want to divide the lower and upper bounds down to two or three (if odd) numbers, like I show above, and I can't figure it out. My bubble sort isn't working either; it's at the end of the code. I think the array is still split and the bubble sort isn't picking it up. I show what each sort is doing so you can see... Can anyone help me?
#include <iostream>
#include <cstdlib>
using namespace std;

const int SIZE = 10;

int split(int *array, int lower, int upper) {
    int middle;
    int temp;
    int bound = (upper - lower) + 1;
    cout << "lower bound --> " << lower << endl;
    cout << "upper bound --> " << upper << endl;
    if (bound >= 2 && bound < 4) {
        middle = (upper + lower) / 2;
        return split(array, lower, middle);
        return split(array, middle + 1, upper);
    }
    else {
        for (int i = lower, j = upper; i <= j; i++, j--) {
            if (array[i] > array[j]) {
                temp = array[i];
                array[i] = array[j];
                array[j] = temp;
            }
        }
    }
    return 0;
}

int main() {
    int sortarray[SIZE];   // array of elements to sort
    int temp;              // temp placeholder
    int key = 0;
    int middle;

    srand(1000);           // seed the random number generator

    /* populate the array */
    for (int i = 0; i < SIZE; i++) {
        sortarray[i] = (rand() % 10) + 1;
    }

    /* output the populated array */
    for (int i = 0; i < SIZE; i++) {
        cout << sortarray[i] << endl;
    }
    cout << endl;

    /* initial sort */
    for (int i = 0, j = SIZE - 1; i <= j; i++, j--) {
        cout << sortarray[i] << "\t" << sortarray[j] << endl;
        if (sortarray[i] > sortarray[j]) {
            temp = sortarray[i];
            sortarray[i] = sortarray[j];
            sortarray[j] = temp;
        }
    }

    /* output the array */
    cout << endl;
    for (int i = 0; i < SIZE; i++) {
        cout << sortarray[i] << endl;
    }

    /* split */
    middle = (SIZE - 1) / 2;
    cout << "The mid-point is " << middle << endl;
    split(sortarray, 0, middle);
    split(sortarray, middle + 1, SIZE - 1);

    /* output the array */
    cout << endl;
    for (int i = 0; i < SIZE; i++) {
        cout << sortarray[i] << endl;
    }

    for (int i = 0, j = SIZE - 1; i <= j; i++, j--) {
        cout << sortarray[i] << "\t" << sortarray[j] << endl;
        if (sortarray[i] > sortarray[j]) {
            temp = sortarray[i];
            sortarray[i] = sortarray[j];
            sortarray[j] = temp;
        }
    }

    /* output the array */
    cout << endl;
    for (int i = 0; i < SIZE; i++) {
        cout << sortarray[i] << endl;
    }

    /* bubble sort */
    for (int i = 0; i < SIZE; i++) {           // starts the bubble sort
        for (int j = SIZE - 1; j > i; j--) {   // starts the comparisons at the end of the array
            if (sortarray[j] < sortarray[j - 1]) { // checks the current element against the previous one
                key = sortarray[j];                // swaps the elements if the condition is met
                sortarray[j] = sortarray[j - 1];
                sortarray[j - 1] = key;
            }
        }
    }

    /* output the sorted array */
    cout << endl;
    for (int i = 0; i < SIZE; i++) {
        cout << sortarray[i] << endl;
    }
    return 0;
}

Bumping your thread is not considered "nice", but if you are fishing for comments, I'll give you some:

srand(1000); // seed the random number generator

What's the purpose of this? Since the seed is a constant, it's really no better than the default seed you get from the C library itself.

if(bound >= 2 && bound < 4) {
    middle = (upper + lower) / 2;
    return split(array, lower, middle);
    return split(array, middle + 1, upper);

Your compiler should say "unreachable code" on the second return there. Compilers can produce warnings - make the compiler programmers happy: use them!
Please don't PM me for help - and no, I don't do help over instant messengers.

Sorry, I wasn't trying to bump it... I forgot to put a question mark. Thanks for the help, man!

Don't put any of the code for the sorting into main itself. All main should do is call a function once, and then the array should be sorted. In split, you're trying to do the recursive calls before the end-to-end swapping is done. They should come after, unless you now want to compare the halves in a different manner.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Is String Theory A Waste Of Time?

I saw no "emergence of a 4D world". Instead they begin with 4D simplices and end up with a 4D world. This is no more surprising to me than starting with little cubes and ending up with big cubes. Please correct me here.

One way to understand it is to read the paper carefully and follow their references to the literature. It may be that you have not read the first page of the article, Carl. This is page 2 (the abstract occupies page 1). Here is a quote from page 2:

----quote from "Emergence of a 4D World"----
Note that the dynamical nature of "dimensionality" implies that the Hausdorff dimension of the quantum geometry is not a priori determined by the dimensionality at the cut-off scale a, which is simply the fixed dimensionality d of the building blocks of the regularized version of the theory. An example in point are the attempts to define theories of quantum geometry via "Euclidean Dynamical Triangulations", much-studied during the 1980s and '90s. In these models, if the dimension d is larger than 2, and if all geometries contribute to the path integral with equal weight, a geometry with no linear extension and d = ∞ is created with probability one. If instead - as is natural for a gravity-inspired theory - the Boltzmann weight of each geometry is taken to be the exponential of (minus) the Euclidean Einstein-Hilbert action, one finds for small values of the bare gravitational coupling constant a first-order phase transition to a phase of the opposite extreme, namely, one in which the quantum geometry satisfies d = 2. This is indicative of a different type of degeneracy, where typical (i.e. probability one) configurations are so-called branched polymers or trees (see [11, 12, 13, 14, 15, 16, 17] for details of the phase structure and geometric properties of the four-dimensional Euclidean theory).
----end quote----

The Dynamical Triangulations literature all through the 1990s is a history of frustration: they would put together, say, 4-simplices and the result would be something of small dimensionality like 2, or the dimensionality would go off to infinity. The 2004 result reported in "Emergence..." was highly nontrivial, as they say, and as they explain by reference to the earlier work. This behavior has been discussed in quite a few papers - not just in the 4D case but also in 3D.

For instance, look around page 7 of Loll's introductory paper "A discrete history...", which was written for grad students entering the field. She describes the 3D case, which is easier to picture. In the 3D case, one randomly assembles 3-simplices (tetrahedra), but for a decade or so the result was always something highly branched out or highly compacted - either two-dimensional or of very high, essentially infinite, dimension. Loll provides some pictures, which I can't.
Physics Forums - View Single Post - Feynman Diagram

Feynman's diagrams are the lingua franca of much of theoretical physics. As such, hundreds of books describe and explain the diagrams. Zee's new book on QFT; F. Gross' Relativistic QM and FT; Bjorken and Drell's texts; Collins, Martin and Squires, Particle Physics and Cosmology; Gauge Theories in Particle Physics by Aitchison & Hey; QED and the Men Who Made It -- Dyson, Feynman, Schwinger and Tomonaga (a superb book); and the master's original paper, Space-Time Approach to QED, by Feynman, in Quantum Electrodynamics (Dover, ed. Schwinger) -- a collection of the key QED papers up to the late '50s -- all of these works, a mere drop in the bucket, cover diagrams. I'm more familiar with the older literature, but I've got to bet that over the past five years, say, 50 to 100 books have come out dealing in one way or another with diagrams. There's plenty to keep you occupied. Basically each diagram represents a term in perturbation theory, according to well defined rules. Note also that Feynman diagrams are used in nonrelativistic theories -- solid state physics and nuclear physics, for example. Reilly Atkinson
Union Grove, WI Precalculus Tutor
Find a Union Grove, WI Precalculus Tutor
...I have kept up with the German language as best I could through my many travels to Germany and by taking an intensive language class one summer in Germany. I have a passion for the German language and keep up with it through self-study and by occasionally participating in a German conversation class. I look forward to being able to help you out with your Math or German.
12 Subjects: including precalculus, calculus, ESL/ESOL, statistics
...Since I majored in science, I have an extended background in biology and chemistry. Math subjects I can assist with include basic math, algebra, statistics, geometry, trigonometry, and pre-calculus. Finally, I also have an extensive background in Spanish, ranging from Spanish classes in high school to Spanish language, grammar, and linguistics classes in college.
15 Subjects: including precalculus, chemistry, Spanish, statistics
...I will be graduating this coming May (2014) with a degree in Elementary Education and a minor in Mathematics! I also have experience with music, since I played the violin for 9 years. My primary goal is to work with middle school math students, but I have experience with students of all ages and can help with a variety of subjects!
25 Subjects: including precalculus, reading, geometry, algebra 1
...I have a BS from UW - Milwaukee, majoring in Mathematics. Prealgebra forms the basis for all advanced math and must be mastered early. Students who have trouble with basic concepts need lots of enrichment activities, with hands-on manipulatives and examples from real life.
9 Subjects: including precalculus, calculus, geometry, algebra 1
...I found the opportunity to continue on this path through WorldTeach from 2011 to 2012. As a volunteer who served in the Marshall Islands, I had the privilege to teach 120 high school Marshallese-speaking students. In addition to the many responsibilities of teaching, I took part in coaching th
58 Subjects: including precalculus, English, Spanish, geometry
Physics Forums - View Single Post - A Factorization Algorithm

Playdo, 1st post
Clearly for certain special numbers N we have a one to one map of factor pairs of N and factor pairs of n+1 which is much smaller. Why is that? If N is of the form 1 + a + ... + a^k, then how do we know all factor pairs of N contain one factor of the form 1 + a + ... + a^j? In other words, why is it clear that if a number has a base-a expansion 111....111 that any factor pair of that number has at least one of the factors also being in the form 111....111 (but with fewer 1's normally, of course)?

Also, for your interest, [itex]\beta[/itex] is not a capital letter. Capital Beta looks just like the capital roman B. When using LaTeX, if you write a greek letter in lower case, like so \pi, then you'll get the lower case [itex]\pi[/itex], and if you write it like \Pi then you'll get [itex]\Pi[/itex]. Also, when using LaTeX in the same line as your text, use "itex" tags instead of "tex". So you'd write [ itex ]\phi ^2 - \phi - 1 = 0[ /itex ] to have the markup right in line [itex]\phi ^2 - \phi - 1 = 0[/itex]. To have it bigger and on a separate line: [ tex ]\phi ^2 - \phi - 1 = 0[ /tex ] [tex]\phi ^2 - \phi - 1 = 0[/tex]

Playdo, 3rd post
For every composite natural number [tex]N[/tex] there must be a reducible polynomial [tex]p(x)[/tex] over the natural numbers and a natural number [tex]a[/tex] for which [tex]N=p(a)[/tex], and at least one factor pair [tex]s[/tex] and [tex]t[/tex] of [tex]p[/tex] evaluated at [tex]a[/tex] satisfies [tex]N=p(a)=s(a)t(a)[/tex].

The last part, that at least one factor pair s and t satisfies N = p(a) = s(a)t(a), seems redundant given that p is reducible and N is composite. I don't quite understand the point of your proof; the result seems trivial. Let N be composite, say N = nm for n, m > 1. If you define 0 to be a natural, let a be any natural less than or equal to min{n,m}. Otherwise, let a be any natural strictly less than min{n,m}.
Let p(x) = (x + (n-a))(x + (m-a)). I'm in a rush right now, but I'll look at posts 2, 4, and 5 later.
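The construction in this last post is easy to check numerically. A small sketch (the function name `p_of` is ours, purely for illustration): for composite N = nm and a natural a ≤ min{n, m}, the polynomial p(x) = (x + (n - a))(x + (m - a)) is a product of two non-constant factors over the naturals and satisfies p(a) = nm = N.

```cpp
// Evaluate p(x) = (x + (n - a)) * (x + (m - a)) from the construction above.
// For x = a the two factors reduce to n and m, so p(a) = n * m = N.
long long p_of(long long x, long long n, long long m, long long a) {
    return (x + (n - a)) * (x + (m - a));
}
```

For N = 91 = 7 · 13 and a = 3 this gives p(x) = (x + 4)(x + 10), with p(3) = 7 · 13 = 91, exactly the factor pair of N recovered by evaluating the factor pair of p at a.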
Bloomingdale, IL Calculus Tutor
Find a Bloomingdale, IL Calculus Tutor
...Work here can begin with simple multiplication and division, working up to beginning algebraic equations. The main focus, however, is generally on word problems. A good foundation in understanding and solving word problems not only creates a basis in mathematics, but also prepares the student for real-life situations.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis. I have a PhD in experimental nuclear physics. I hav...
10 Subjects: including calculus, physics, geometry, algebra 1
...The product development aspect involves design for six sigma, which is very heavy in basic statistics and probability. I am qualified to teach low- and high-level statistics. I graduated high school with an A in Calculus.
20 Subjects: including calculus, physics, statistics, geometry
...From 2007 to 2009 I taught an age group of 14-16 year olds. I had four years of experience with this program in high school. I also worked with the program a bit in college at Northern Illinois
26 Subjects: including calculus, reading, algebra 1, chemistry
...I used Matlab extensively for analyzing the vibration profiles of the engine with the help of digital signal processing tools. I can relate a lot of engineering concepts to applications used in the industry, which helps students to understand them very easily. I have a great desire to share knowled...
16 Subjects: including calculus, chemistry, physics, geometry
Brazilian Journal of Physics
Print version ISSN 0103-9733
Braz. J. Phys. vol.31 no.1 São Paulo Mar. 2001

The influence of an external magnetic field on the fermionic Casimir effect

M.V. Cougo-Pinto, C. Farina, A.C. Tort
Instituto de Física, Universidade Federal do Rio de Janeiro, CP 68528, Rio de Janeiro, RJ 21945-970, Brazil

Received on 3 July, 2000. Revised version received on 8 September, 2000

The influence of an external constant uniform magnetic field on the Casimir energy associated with a Dirac field under antiperiodic boundary condition is computed using Schwinger's method. The obtained result shows that the magnetic field enhances the fermionic Casimir energy, in opposition to the bosonic Casimir energy, which is inhibited by the magnetic field.

H. B. G. Casimir showed in 1948 [1] that the presence of two closely spaced parallel metallic plates with no charge on them would shift the vacuum energy of the electromagnetic field by an amount $E_\gamma(a)$ given by:

$$ E_\gamma(a) = -\frac{\pi^2}{720}\,\frac{L^2}{a^3}, \qquad (1) $$

where $L^2$ is the area of each plate and $a$ is the separation between them (we use natural units, $\hbar = c = 1$). This energy, with its characteristic dependence on $a^{-3}$, is attributed to the shift of the spatial topology from $\mathbb{R}^3$ to $\mathbb{R}^2 \times [0, a]$, due to the boundary conditions imposed on the electromagnetic field by the metallic plates with separation $a$. The fermionic Casimir effect is of particular importance due to the fundamental role played by the electron in QED and the quarks in QCD. In the case of quarks we have a boundary condition of confinement given by nature, which makes the Casimir energy a natural ingredient in the hadron structure. The fermionic Casimir energy was first computed by Johnson [9] in the context of the MIT-bag model [10] for a massless Dirac quantum field confined between parallel planes with separation a.
The Casimir effect in spherical geometry for massive fields is a much more complicated problem and has only recently been completely solved for massive fermionic [11] and scalar [12] fields. In the case of confining planes, the fermionic Casimir energy $E(a)$ obtained by Johnson [9] is given by:

$$ E(a) = -x\,\frac{\pi^2}{720}\,\frac{L^2}{a^3}, \qquad (2) $$

where $x = 7/4$. As in the original Casimir effect this energy comes from a shift from the usual space $\mathbb{R}^3$ to the space $\mathbb{R}^2 \times [0, a]$. If instead of compactifying one dimension into $[0, a]$ we compactify it into a circle $S^1$ [13, 14] of radius $a/2\pi$, we obtain for the Casimir energy associated with the massless Dirac field the expression (2), where now $x$ is equal to $7 \times 4$ or $-8 \times 4$, according to a choice of twisted or untwisted spin connection, which corresponds to antiperiodic or periodic boundary conditions with period $a$, respectively. We should notice the similarity of these three results for the Casimir energy of the massless Dirac field under MIT, periodic and antiperiodic boundary conditions. They show that all these boundary conditions give rise to the same dependence on $a$ and differ only in the multiplicative numerical factor $x$. We may take advantage of this fact by choosing the simplest boundary condition in a first investigation of a Casimir effect. In the case of a fermionic field, the compactification into $[0, a]$ provided by the MIT boundary condition [10] gives rise to the most complicated calculations, especially in the massive case [15]. The periodic and antiperiodic conditions are much simpler, in both the massless and the massive case. Let us also notice that, as shown by Ford [14], in the case of the Dirac field the antiperiodic boundary condition avoids the causality problems which occur for the periodic boundary condition. The results that we shall present here stem from the idea that vacuum fluctuations of a charged quantum field are affected not only by boundary conditions but also by external fields.
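The relative size of the antiperiodic and periodic factors quoted above can be checked numerically. This sketch is not part of the paper (the function name `eta4` is ours): the alternating mode sum $\sum_{n\ge 1}(-1)^{n-1}/n^4 = \eta(4) = 7\pi^4/720$ is the origin of the fermionic factor $7$ per spinor component, while the non-alternating $\zeta(4) = \pi^4/90 = 8\pi^4/720$ gives the bosonic factor $8$, i.e. the familiar $7/8$ ratio between antiperiodic and periodic mode sums.

```cpp
#include <cmath>

// Dirichlet eta function at 4: sum over n >= 1 of (-1)^(n-1) / n^4.
// This alternating sum produces the fermionic factor 7/8 relative to
// the bosonic zeta(4) = pi^4/90, i.e. eta(4) = 7 * pi^4 / 720.
double eta4(int nmax) {
    double s = 0.0;
    for (int n = 1; n <= nmax; ++n) {
        double term = 1.0 / (static_cast<double>(n) * n * n * n);
        s += (n % 2 == 1) ? term : -term;   // alternating (antiperiodic) sum
    }
    return s;
}
```

Since the series alternates, the truncation error is bounded by the first omitted term, so a few thousand terms already agree with $7\pi^4/720 \approx 0.947033$ to very high accuracy.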
Therefore, in the case of charged quantum fields it is natural and important to ask what kind of interplay occurs between the Casimir effect and the vacuum polarization effects when boundary conditions and external field are both present. This question can be examined from two physically very distinct points of view. From one point of view we ask what is the influence of boundary conditions on the polarization effects of an external field, and from the other we ask what is the influence of an external field on the Casimir energy of a charged field. We should expect on physical grounds the existence of such influences, and it is necessary to calculate their features and magnitudes to clarify their role and to obtain a deeper understanding of the Casimir and vacuum polarization effects. For a Dirac field the first point of view is conveniently treated by calculating an Euler-Heisenberg effective Lagrangian [16] with boundary conditions [17]. We present here the second point of view, in which we look for the precise influence of an external field on the Casimir energy of a Dirac field. The obtained results complement the ones obtained for the bosonic Casimir effect in an external magnetic field [18]. We compute the influence of an external magnetic field on the Casimir energy of a charged Dirac field under antiperiodic boundary conditions and find that the energy is enhanced by the magnetic field. This result appears in opposition to the behaviour of a charged scalar field under Dirichlet boundary conditions, which has its Casimir energy inhibited by the external magnetic field. It is tempting to advance an explanation of this opposite behavior in terms of the spinorial character of the fields. After all, the permanent magnetic dipoles of spin-half quantum field fluctuations should tend to paramagnetic alignment with the applied external field, while the induced diamagnetic dipoles of the scalar quantum field tend to antialignment. However, it has been verified [19] that the magnetic properties of the quantum vacuum depend not only on the spinorial character of the quantum field but also on the kind of boundary conditions to which it is submitted. Therefore, further investigations are necessary in order to formulate a sound physical explanation of the character of the change in the Casimir energy due to applied external magnetic fields.

We will take as external field a constant uniform magnetic field, and as boundary condition on the Dirac field the antiperiodicity along the direction of the external magnetic field. The choice of a pure magnetic field excludes the possibility of pair creation at any field strength. The simplicity of the antiperiodic boundary condition was remarked above, and the other choices are obvious simplifying assumptions. These assumptions lead us to a convenient formalism to study the physical influence of an external field on the Casimir effect. The influence of external fields on vacuum fluctuations of quantum fields has been considered by Ambjørn and Wolfram [20] and by Elizalde and Romeo [21] for the case of a quantum scalar field in (1 + 1)-dimensional space-time. Ambjørn and Wolfram have considered the case of a charged scalar field in the presence of an external electric field, while Elizalde and Romeo consider the case of a neutral scalar field in a static external field with the aim of addressing the problem of the gravitational influence on the Casimir effect. Let us also note that in the Scharnhorst effect [22] we have the interaction of an electromagnetic external field with the electromagnetic vacuum fluctuations affected by boundary conditions. However, in this case the boundary conditions are imposed on the quantum electromagnetic field and not on the Dirac field. The effect is then a two-loop effect, since the coupling between the external field and the quantum electromagnetic vacuum field requires the intermediation of a fermion loop.
Here the boundary condition is on the Dirac field, the quantum electromagnetic field need not be considered, and the external electromagnetic field is not subjected to boundary conditions. In this way the effects that we describe appear at the one-loop level, although higher-order corrections can be obtained with more loops. Let us proceed to the calculation of the influence of the external magnetic field on the Casimir energy of the Dirac field. We consider a Dirac field of mass $m$ and charge $e$ under an antiperiodic boundary condition, with period $a$, along one spatial direction, singled out by two parallel planes with separation $a$. We consider those planes as large squares of side $L$; the limit $L \to \infty$ can be taken at the end of the calculations. The constant uniform magnetic field $\mathbf{B}$ is taken along the same direction, and we write $B$ for its component along that direction. The vacuum energy is obtained from the one-loop effective action $W^{(1)}$, which in Schwinger's proper-time representation reads [24]:

$$ W^{(1)} = \frac{i}{2} \int_{s_0}^{\infty} \frac{ds}{s}\,\mathrm{Tr}\, e^{-isH}, \qquad (3) $$

where $s_0$ is a cutoff in the proper time $s$, $\mathrm{Tr}$ is the total trace including summation in coordinates and spinor indices, and $H$ is the proper-time Hamiltonian given by $H = (p - eA)^2 - (e/2)\sigma_{\mu\nu}F^{\mu\nu} + m^2$, where $p$ has components $p_\mu = -i\partial_\mu$, $A$ is the electromagnetic potential and $F$ is the electromagnetic field, which is contracted with the combination of gamma matrices $\sigma_{\mu\nu} = i[\gamma_\mu, \gamma_\nu]/2$. The antiperiodic boundary condition gives for the component of $p$ along the compactified direction the eigenvalues $\pm\pi n/a$ (with $n$ an odd natural number), the components of $p$ perpendicular to $\mathbf{B}$ are constrained into the Landau levels generated by the magnetic field, and the time component $p^0$ has as eigenvalues any real number $\omega$. Therefore, we obtain for the trace in (3) an expression (4) in which the first sum takes care of the four components of the Dirac spinor, the second sum is over the eigenvalues obtained from the antiperiodic boundary condition, the third sum is over the Landau levels with their degeneracy factor $eBL^2/2\pi$, and the integration ranges of $t$ and $\omega$ are the measurement time $T$ and the continuum of real numbers, respectively. Proceeding with Schwinger's method, we use Poisson's summation formula [25] to invert the exponent in the second sum which appears in (4).
We also write the sum over the Landau levels $n'$ which appears in (4) in terms of the Langevin function $L(x) = \coth x - x^{-1}$, and substitute the trace obtained by these modifications into (3) to obtain an expression (5) with two parts: a quantity $\mathcal{L}^{(1)}(B)$ which does not depend on $a$, and the cutoff-dependent expression (6) which will give us the Casimir energy that we are looking for. The quantity $\mathcal{L}^{(1)}(B)$ given in (5) is actually the (unrenormalized) Euler-Heisenberg Lagrangian [16]. In (5), it represents a density of energy uniform throughout space that gives no contribution to the Casimir energy, which by definition is set to zero at infinite separation of the plates. A term proportional to the area $L^2$, which is usual in vacuum energy calculations, does not appear here, due to the alternating character of the series in (6). After the elimination of the cutoff in (6) we continue with Schwinger's method [23] by using Cauchy's theorem in the complex $s$ plane to make a $\pi/2$ clockwise rotation of the integration path in (6). Let us notice that in (3) and (6) it is implicit that the integration path is slightly below the real axis, because $s$ must have a negative imaginary part in order to render the trace contributions in (3), (4) and (6) well defined. Consequently, the poles of the Langevin function in (6), which are on the real axis, are not swept by the $\pi/2$ clockwise rotation of the integration path. We are led by the rotation to an expression in which the part of the Casimir energy which exists in the absence of the external magnetic field can be expressed in terms of the modified Bessel function $K_2$ (formula 3.471,9 in [26]). In this way we obtain from (6) the expression (7), which gives the exact Casimir energy of the Dirac field in the presence of the external magnetic field $B$. When there is no external magnetic field the Casimir energy is given by the first term on the r.h.s. of equation (7).
This term reduces to (2), with ξ = 7/4, in the limit of zero mass, as should be expected. The second term on the r.h.s. of equation (7) measures the influence of the external magnetic field on the Casimir energy. The contribution of the magnetic field is governed by a quadrature which is strictly positive, decreases monotonically as n increases, and goes to zero in the limit n → ∞. Consequently, by the Leibniz criterion we have a convergent alternating series in (7), and we may conclude that the external magnetic field increases the fermionic Casimir energy. This is the main result of this work, which elucidates part of the interplay between two of the most fundamental phenomena in relativistic quantum field theory, namely the Casimir effect and the vacuum polarization due to an external field. The obtained enhancement of the fermionic Casimir energy by an external magnetic field may be compared with the opposite behaviour of the bosonic Casimir energy of a scalar field, which is inhibited by the external magnetic field. To see this we turn from spinorial QED to scalar QED, keeping the same boundary conditions and external fields that we have been using. We obtain, by calculations similar to the ones performed in [18], the bosonic Casimir energy in the external magnetic field, equation (8), where the function cosec(x) - 1/x was introduced in [18] and plays in scalar QED the same role played by the Langevin function in spinorial QED. The inhibition of the bosonic Casimir energy by the external field can then be seen in (8) by noting the behaviour of this function as B → ∞. In the strong magnetic field regime, changes in the charged vacuum may occur more easily [27]. In this case the integral in equation (7) is dominated by the exponential function, whose maximum is exp(-amn) and occurs at s = 2am/n.
Therefore, we are justified in substituting the Langevin function by 1 - 1/x in the strong magnetic field regime, which in the cases am ≪ 1 and am ≫ 1 means |B| ≫ |f₀|/a² and |B| ≫ (|f₀|/a²)(a/λ_C) respectively, where f₀ is the fundamental flux 1/e and λ_C is the Compton wavelength 1/m. In the strong field regime the second term in (7) can also be expressed in terms of a modified Bessel function (formula 3.471,9 in [26]), and the Casimir energy can be written as in (9). By using in this expression the leading term in the ascending expansion and then in the asymptotic expansion of the Bessel function (see formulas 8.446 and 8.451,6 in [26]) we obtain the corresponding expressions for the small and large mass limits, respectively. We have obtained in (7) the general expression of the fermionic Casimir energy under the effect of an external magnetic field. The result shows that the external field increases the Casimir energy and reveals the interplay between two fundamental agents which are known to affect the Dirac vacuum fluctuations, namely external fields and boundary conditions. We have derived expressions for the energy in the regime of strong magnetic field, and in this regime we have also obtained the small and large mass limits. The approach we have followed here has a natural extension to more complicated gauge groups and consequently may also be useful in the investigation of the QCD vacuum. The authors are indebted to Jan Rafelski and Ioav Waga for many enlightening conversations on the subject of this work. M. V. C.-P. and C. F. would like to acknowledge CNPq (The National Research Council of Brazil) for partial financial support. [1] H. B. G. Casimir, Proc. Kon. Nederl. Akad. Wetensch. 51, 793 (1948). [2] M. J. Sparnaay, Physica 24, 751 (1958). [3] S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997). [4] U. Mohideen and A. Roy, Phys. Rev. Lett. 81, 4549 (1998). [5] A. Roy and U. Mohideen, Phys. Rev. Lett. 82, 4380 (1999). [6] V. M. Mostepanenko and N.
N. Trunov, The Casimir Effect and its Applications (Clarendon, Oxford, 1997); V. M. Mostepanenko and N. N. Trunov, Sov. Phys. Usp. 31, 965 (1988). [7] E. Elizalde, S. D. Odintsov, A. Romeo, A. A. Bytsenko and S. Zerbini, Zeta Regularization Techniques with Applications (World Scientific, Singapore, 1994). [8] G. Plunien, B. Muller and W. Greiner, Phys. Rep. 134, 89 (1986). [9] K. Johnson, Acta Phys. Polonica B6, 865 (1975). [10] A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D9, 3471 (1974); A. Chodos, R. L. Jaffe, K. Johnson and C. B. Thorn, Phys. Rev. D10, 2599 (1974); T. DeGrand, R. L. Jaffe, K. Johnson and J. Kiskis, Phys. Rev. D12, 2060 (1975); J. F. Donoghue, E. Golowich and B. R. Holstein, Phys. Rev. D12, 2875 (1975). [11] E. Elizalde, M. Bordag and K. Kirsten, J. Phys. A 31, 1743 (1998). [12] M. Bordag, E. Elizalde, K. Kirsten and S. Leseduarte, Phys. Rev. D 56, 4896 (1997). [13] B. S. DeWitt, C. F. Hart and C. J. Isham, Physica 96A, 197 (1979). [14] L. H. Ford, Phys. Rev. D21, 933 (1980). [15] S. G. Mamaev and N. N. Trunov, Sov. Phys. J. 23, 551 (1980). [16] W. Heisenberg, Z. Phys. 90, 209 (1935); H. Euler and B. Kockel, Naturwissensch. 23, 246 (1935); W. Heisenberg and H. Euler, Z. Phys. 98, 714 (1936); V. S. Weisskopf, K. Dan. Vidensk. Selsk. Mat. Fys. Medd. 14, 3 (1936), reprinted in J. Schwinger, Quantum Electrodynamics (Dover, New York, 1958), and translated into English in A. I. Miller, Early Quantum Electrodynamics: a source book (University Press, Cambridge, 1994). [17] M. V. Cougo-Pinto, C. Farina, A. C. Tort and J. Rafelski, Phys. Lett. B 434, 388 (1998). [18] M. V. Cougo-Pinto, C. Farina, M. R. Negrão and A. C. Tort, J. Phys. A 32, 4457 (1999). [19] M. V. Cougo-Pinto, C. Farina, M. R. Negrão and A. C.
Tort, Phys. Lett. B 483, 144 (2000). [20] J. Ambjørn and S. Wolfram, Ann. Phys. NY 147, 33 (1983). [21] E. Elizalde and A. Romeo, J. Phys. A 30, 5393 (1997). [22] K. Scharnhorst, Phys. Lett. B236, 354 (1990); G. Barton, Phys. Lett. B 237, 559 (1990); G. Barton and K. Scharnhorst, J. Phys. A26, 2037 (1993); M. V. Cougo-Pinto, C. Farina, F. C. Santos and A. C. Tort, Phys. Lett. B446, 170 (1999); J. Phys. A 32, 4463 (1999). [23] J. Schwinger, Lett. Math. Phys. 24, 59 (1992). [24] J. Schwinger, Phys. Rev. 82, 664 (1951). [25] E. T. Whittaker and G. N. Watson, A Course of Modern Analysis (University Press, Cambridge, 1945). [26] I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series, and Products (Academic, New York, 1965). [27] W. Greiner, B. Müller and J. Rafelski, Quantum Electrodynamics of Strong Fields (Springer-Verlag, Berlin, 1985). e-mail: marcus@if.ufrj.br, farina@if.ufrj.br, tort@if.ufrj.br
Principal Component Analysis

Problem: Reduce the dimension of a data set, translating each data point into a representation that captures the "most important" features.

Solution: in Python

import numpy

def principalComponents(matrix):
    # Columns of matrix correspond to data points, rows to dimensions.
    deviationMatrix = (matrix.T - numpy.mean(matrix, axis=1)).T
    covarianceMatrix = numpy.cov(deviationMatrix)
    eigenvalues, principalComponents = numpy.linalg.eig(covarianceMatrix)

    # Sort the principal components in decreasing order of corresponding eigenvalue.
    indexList = numpy.argsort(-eigenvalues)
    eigenvalues = eigenvalues[indexList]
    principalComponents = principalComponents[:, indexList]
    return eigenvalues, principalComponents

Discussion: The problem of reducing the dimension of a dataset in a meaningful way shows up all over modern data analysis. Sophisticated techniques are used to select the most important dimensions, and even more sophisticated techniques are used to reason about what it means for a dimension to be "important." One way to solve this problem is to compute the principal components of a dataset. For the method of principal components, "important" is interpreted as the direction of largest variability. Note that these "directions" are vectors which may incorporate a portion of many or all of the "standard" dimensions in question. For instance, the picture below obviously has two different intrinsic dimensions from the standard axes. The regular reader of this blog may recognize this idea from our post on eigenfaces. Indeed, eigenfaces are simply the principal components of a dataset of face images. We will briefly discuss how the algorithm works here, but leave the why to the post on eigenfaces. The crucial interpretation to make is that finding principal components amounts to a linear transformation of the data (that is, only such operations as rotation, translation, scaling, shear, etc.
are allowed) which overlays the black arrows above on the standard axes. In the parlance of linear algebra, we're re-plotting the data with respect to a convenient orthonormal basis of eigenvectors. Here we first represent the dataset as a matrix whose columns are the data points, and whose rows represent the different dimensions. For example, if it were financial data then the columns might be the instances of time at which the data was collected, and the rows might represent the prices of the commodities recorded at those times. From here we compute two statistical properties of the dataset: the average data point, and the deviation of each data point from that average. This is done in the deviationMatrix line above, where the arithmetic operations are entrywise (a convenient feature of Python's numpy arrays). Next, we compute the covariance matrix for the data points. That is, interpreting each dimension as a random variable and the data points as observations of that random variable, we want to compute how the different dimensions are correlated. One way to estimate this from a sample is to compute the dot products of the deviation vectors and divide by the number of data points. For more details, see this Wikipedia entry. Now (again, for reasons which we detail in our post on eigenfaces), the eigenvectors of this covariance matrix point in the directions of maximal variance, and the magnitudes of the eigenvalues correspond to the magnitudes of the variance. Even more, regarding the dimensions as random variables, the correlations between the axes of this new representation are zero! This is part of why this method is so powerful; it represents the data in terms of unrelated features. One downside to this is that the principal component features may have no tractable interpretation in terms of real-life quantities. Finally, one common thing to do is only use the first few principal components, where by 'first' we mean those whose corresponding eigenvalues are the largest.
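As a quick illustration, here is a sketch of how the solution above might be used on a made-up dataset (the data and all numbers are invented for demonstration). It repeats the idea of the function above, but uses numpy.linalg.eigh, which is specialized to symmetric matrices such as covariance matrices and guarantees real eigenvalues:

```python
import numpy

def principal_components(matrix):
    # Columns of matrix are data points, rows are dimensions.
    covariance = numpy.cov(matrix)
    eigenvalues, components = numpy.linalg.eigh(covariance)
    order = numpy.argsort(-eigenvalues)  # decreasing variance
    return eigenvalues[order], components[:, order]

# A made-up dataset: 200 two-dimensional points hugging the line y = x,
# so nearly all the variance lies along the direction (1, 1)/sqrt(2).
numpy.random.seed(0)
t = numpy.random.randn(200)
data = numpy.vstack([t + 0.05 * numpy.random.randn(200),
                     t + 0.05 * numpy.random.randn(200)])

eigenvalues, components = principal_components(data)
top_share = eigenvalues[0] / eigenvalues.sum()  # variance explained by top component
```

For this dataset top_share comes out very close to 1, confirming that keeping only the first principal component loses almost nothing.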
Then one projects the original data points onto the chosen principal components, thus controlling precisely the dimension the data is reduced to. One important question is: how does one decide how many principal components to use? Because the principal components with larger eigenvalues correspond to features with more variability, one can compute the total variation accounted for with a given set of principal components. Here, the 'total variation' is the sum of the variances of each of the random variables (that is, the trace of the covariance matrix, i.e. the sum of its eigenvalues). Since the eigenvalues correspond to the variation in the chosen principal components, we can naturally compute the accounted variation as a proportion. Specifically, if $\lambda_1, \dots, \lambda_k$ are the eigenvalues of the chosen principal components, and $\textup{tr}(A)$ is the trace of the covariance matrix, then the total variation covered by the chosen principal components is simply $(\lambda_1 + \dots + \lambda_k)/\textup{tr}(A)$. In many cases of high-dimensional data, one can encapsulate more than 90% of the total variation using a small fraction of the principal components. In our post on eigenfaces we used a relatively homogeneous dataset of images; our recognition algorithm performed quite well using only about 20 out of 36,000 principal components. Note that there were also some linear algebra tricks to compute only those principal components which had nonzero eigenvalues. In any case, it is clear that if the data is nice enough, principal component analysis is a very powerful tool.

One thought on "Principal Component Analysis"

1. Reblogged this on Num3ri v 2.0 and commented: I wanted to reblog and translate this "post" into Italian because, IMHO, it is a well-written article and deserves to be read by all those who follow my blog.
Ho voluto fare il reblog e tradurre questo “post” in italiano, perchè -IMHO- è un articolo ben scritto e merita che sia letto da tutti coloro che seguono il mio blog.
Math problem- PLEASE HELP

There are 2 parts: 1) What is the answer? 2) What chance do I have of randomly selecting it?

First the question: 25% would be the correct answer, except it is represented twice, increasing your odds to 50%. So I'd say the answer to the question is B (50% chance).

Now to the random part: as 50% (B) is the correct answer and is only represented once out of 4 options, randomly choosing B is a straight one-in-four chance. So you have a 25% chance of getting it right, which means A or D (the two 25% options) are the correct answers.
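The answer choices themselves are not reproduced in the thread, so the sketch below assumes a common version of this riddle: A = 25%, B = 50%, C = 0%, D = 25% (the value of C in particular is a guess). It just counts how often each value appears among the four equally likely choices:

```python
# Assumed answer choices -- the thread does not list them, so these are a
# plausible reconstruction (C's value in particular is hypothetical).
options = {"A": 25, "B": 50, "C": 0, "D": 25}

def chance_of_randomly_picking(value):
    # Chance (in %) that a uniformly random choice among the four has this value.
    matches = sum(1 for v in options.values() if v == value)
    return 100 * matches / len(options)

chance_b = chance_of_randomly_picking(50)    # picking the lone 50% option
chance_25 = chance_of_randomly_picking(25)   # picking either 25% option
```

Under these assumptions chance_b is 25.0 and chance_25 is 50.0, matching the reasoning in the reply above.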
Felix Salmon Highlights Financial Inconsistency

Felix Salmon was on the HuffPost on May 10, telling people to sell, and generated 2904 comments and counting. This was just after the big system debacle, where volatility spiked and markets swooned. With hindsight, this was a bad idea, but that's not important. In his blog post, Felix explains his logic. The basic idea comes straight out of our standard utility model of risk and return. The idea is basically that if you have a utility function that is standard*, your desire for stocks should obey the equation:

% wealth in stock = Expected Return / (volatility^2 × RiskCoefficient)

So, we have three variables determining one's equity allocation. The risk coefficient is usually considered to be between 1 and 3, let's say 2 (as Salmon assumes). A higher risk coefficient means you are more risk averse, and note that as you increase it, your percent allocation of wealth to stocks goes to zero. The numerator is the expected return premium one gets for investing in the stock market. Most people assume this is 5% annually (I think it's practically zero). Finally, we have the variance of returns, or volatility squared. As 'the market' is diversified (unlike a single stock), the market's variance should be linearly related to priced risk. Luckily, we have a good proxy for that, the VIX index, which is a forward-looking estimate of future volatility based on option values. This is really what changed, and drove Felix's recommendation. On May 6 implied volatility on options went from 25% to 40%, implying to Salmon that one should drastically reduce one's market exposure. Plugging in the numbers, we see this allocation goes from 40% to 15.63%, a massive re-allocation. That is, theoretically, one should sell over half one's equities, according to the ineluctable logic of economics! One way to see it's wrong is to note that prices and quantities exist in an equilibrium.
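The "plugging in the numbers" step is easy to check. This sketch uses the 5% equity premium and risk coefficient of 2 assumed above, with the pre- and post-crash volatilities of 25% and 40%:

```python
def equity_share(expected_return, volatility, risk_aversion=2.0):
    # % of wealth in stock = expected return / (volatility^2 * risk coefficient)
    return expected_return / (volatility ** 2 * risk_aversion)

before = equity_share(0.05, 0.25)  # implied volatility near 25%
after = equity_share(0.05, 0.40)   # implied volatility near 40%
```

before works out to 0.40 and after to 0.15625, i.e. the 40% and 15.63% allocations quoted above.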
In the short run, quantities don't change, so if preferences or expectations change, then prices must adjust, not quantities. If this theory explains what everyone is doing (on average), it must affect prices and not the allocation to equities, because not everyone can sell over half their equities overnight: someone would have to buy them! So in equilibrium, the only way this allocation percentage applied to stocks stayed constant is if 1) risk aversion fell, or 2) expected returns rose. As the variance went up by a factor of 2.56 in my example (volatility going from 0.25 to 0.40), is it reasonable to think people became, suddenly, 2.56 times less risk averse? One can't observe 'risk aversion' directly, but this is highly implausible, because intuitively during a panic people become more risk averse. That leaves the expected return. Presumably, the risk premium, the return you get for the displeasure of bearing undiversifiable risk, had to have risen 2.56 times. But this turns out to be an empirical matter: do expected returns rise when volatility rises? Is it plausible that during this cataclysm people were simultaneously increasing their prospective anticipation of future returns? I think anyone with common sense realizes these are bat-shiat crazy interpretations of what the representative investor was thinking. The data on actual returns and market volatility is, if anything, negatively correlated. That is, think of volatile years, and you'll think of the worst years: 1990, 2008, 2001, 2002. But the standard response to such empirical rejections is to note that actual returns aren't the same as expected returns, they vary by some statistical noise, and so perhaps we just don't have enough data. Logically possible, though after over 50 years of mining all the data ever recorded for the elusive 'risk factor' that explains asset return variability, I'm thinking it's improbable.
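The equilibrium argument can be made concrete by inverting the allocation rule: if the allocation is to stay fixed while variance jumps, the implied expected return must jump by the same factor. A small sketch:

```python
def implied_expected_return(allocation, volatility, risk_aversion=2.0):
    # Invert the allocation rule: E[r] = allocation * volatility^2 * risk coefficient.
    return allocation * volatility ** 2 * risk_aversion

# Hold the pre-crash 40% allocation fixed while volatility rises from 25% to 40%.
er_before = implied_expected_return(0.40, 0.25)
er_after = implied_expected_return(0.40, 0.40)
ratio = er_after / er_before
```

The implied premium goes from 5% to 12.8%, a factor of 2.56, which is exactly the implausible overnight revision in expectations described above.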
Steve Sharpe and Gene Amromin actually got around this objection by looking at survey data, and found that in questionnaires investors tended to have higher return expectations when they forecast volatility as being relatively low, and lower return expectations when they forecast higher volatility. Exactly the opposite of what they should be thinking. This isn't missing a constant in the second decimal; rather, it's screwing up the sign. As this is consistent with the theme of my book Finding Alpha, I thought this paper was awesome, and asked Steve Sharpe why it wasn't in a journal. He noted that referees just kept sending it back for various reasons. This is unsurprising, because all the referees presume there must be some sort of mistake, that this can't be true; it's counter to all their theoretical training. It reminds me of my attempt to get a paper of mine published, which was rejected at several publications for being wrong, obvious, and irrelevant, which merely highlighted to me that they didn't like it. I don't think it's a conspiracy, just another example that theory can make you see facts as noisy exceptions or as highly important, and respond accordingly. People see what they believe, not vice versa, so Sharpe's fact has to be wrong according to these referees.
One of the many bad implications of having the delusion that risk begets a higher expected return is that people invest in the stock market thinking they then deserve a higher return, a strategy that worked pretty well in the US in the 20th century, as long as you implemented a low-cost strategy that minimized trading and taxes. In reality, you either have to hope for lady luck, or actually do a lot of work finding your investing alpha looking for subtle patterns, or like Warren Buffet actually manage the companies you own to perform better than average. The idea that a passive approach to equities implies higher-than-average returns puts you at the mercy of brokers who may be selling diamonds in the rough, but usually are selling hope ( it's a Black Swan, trust me, I'm smart! If you don't expect a return merely for shelling out your dough, you see investing in a risky asset class as merely a necessary condition for higher-than-average returns, not a sufficient condition. This insight would encourage a great deal more prudence where it is needed. * a standard utility function is of the form U(x)=x^(1-a)/(1-a), where x is your wealth or consumption, and 'a' is your coefficient of relative risk aversion. It has the desirable condition that your relative pain to a 40% loss is the same whether you are rich or poor (desirable because otherwise risk aversion should have gone down a lot over the past 100 years as we have gotten wealthier). 5 comments: "Steve Sharpe and Gene Amromin actually got around this objection by looking at survey data, and found that in questionnaires investors tended to have higher return expectations when they forecast volatility as being relatively low, and lower return expectations when they forecast lower volatility. Exactly the opposite of what they should be thinking.." Did you mean "higher volatility" in the next to last sentence? oops, sign error. tx. "High risk investments, on an average, should yield higher returns." 
The fallacy lies in the abstraction (on an average) being interpreted as true in every high-risk investment. If you invest in 1000 perceived high-risk instruments as well as 1000 perceived low-risk instruments, the payoff on the 1000 high-risk instruments will be better than the 1000 low-risk Many high-risk instruments will fail you miserably. Many low-risk instruments will surprise you (with good returns). An average investor's thinking that EVERY high-risk instrument will yield a higher return is foolish. VIX isn't a good measure of volatility for an ordinary investor who plans to hold investments for years, not a month or two. A multi-year estimate of volatility would fluctuate a lot less, and the realized returns might bear some resemblance to what a smart person would expect. Peter: well, the average holding period for individuals and funds is about a year, though there are lots at much higher and lower horizons. Still, prospective volatility does vary by a large factor (say, 2), and long term vol forecasts are highly correlated with the VIX, and they correlation with expected returns is negative.
How to Create a Sudoku

Edited by Blizzerand, Peter, Flickety, Thomscher and 6 others

Like Sudoku, but got bored of sitting there doing puzzles in books? Why not make your own?

1. Start with the solution. You could use a computer-generated grid or copy the solution of another published puzzle, but it only takes a few minutes to make one by hand. (There are 6,670,903,752,021,072,936,960^[1] legal grids. You only need to find one.)
   1. Draw a 9x9 grid made of nine 3x3 cells.
   2. Get a pencil. This is better than a pen because inevitably you will make a mistake, and it will be easier to correct.
   3. Fit the number 1 into any square, as long as it conforms to standard Sudoku rules.
   4. Repeat that step until you have a 1 in each row, column, and 3x3 mini-grid.
   5. Repeat these steps with the numbers 2-9. You may get stuck here. If so, backtrack by swapping some of the digits that are causing the conflict. Try moving a pair of like digits at two corners of a rectangle to the other two corners of that rectangle if both are still open. Try permuting the three digits in a row or column of any 3x3 cell. If that solves the problem and doesn't introduce any other conflicts, then continue.
2. Randomize the solution grid. You may have started step one by writing 123456789 across the top row. If you don't want that in your puzzle, apply any combination of the following operations to your grid. In most cases one grid can generate trillions of others, but in some highly symmetrical cases, you might only get billions of others.^[1]
   □ Permute the rows 1-3, 4-6, or 7-9.
   □ Permute the columns 1-3, 4-6, or 7-9.
   □ Permute the 3x9 blocks of rows.
   □ Permute the 9x3 blocks of columns.
   □ Rotate the grid 90, 180, or 270 degrees.
   □ Reflect the grid about the horizontal, vertical, or either diagonal axis.
   □ Permute the digits.
3. Remove digits that can be derived from the remaining digits. Remove at least one from each row, column and block.
4.
As you continue to make the puzzle harder by removing more digits, check that your puzzle is still solvable. Online solvers such as [1] and [2] can help here. If removing a digit leaves a puzzle with multiple solutions, or with a single solution that requires excessively difficult methods, then back up and try removing another digit.
5. When you think you've removed enough of the clues, test-solve it by hand to confirm its solvability and difficulty. If you're satisfied with your puzzle, save it and share it with the world.

Sample Sudoku Puzzles
Blank Sudoku Page

• Sudoku puzzles can usually be reduced to 20-30 clues.
• Some Sudoku setters prefer puzzles where the cells containing givens have 180-degree rotational symmetry. In this case, when removing digits in step 3, remove pairs of diagonally opposite digits.
• There are a number of computer programs that can create Sudoku puzzles, but the best collections are hand-crafted.
• There are many Sudoku variations, and most of the instructions here generalize easily. The Scanraid solver supports some of the most common variations, but if you want to experiment with a more obscure variant, you might need to test by hand or write your own solver.

Things You'll Need
• A pen
• A pencil
• A rubber
• Imagination
• A lot of patience
• Paper
• Hope
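The solvability check in step 4 can be automated with a small backtracking solver that counts solutions and stops as soon as it finds two. This is a sketch (not the method of any particular online solver), assuming the grid is a 9x9 list of lists with 0 marking an empty cell; a puzzle is acceptable only if the count is exactly 1:

```python
def count_solutions(grid, limit=2):
    # Count solutions of a 9x9 Sudoku grid (0 = empty), up to `limit`.
    def valid(r, c, d):
        if any(grid[r][j] == d for j in range(9)):
            return False
        if any(grid[i][c] == d for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                total = 0
                for d in range(1, 10):
                    if valid(r, c, d):
                        grid[r][c] = d
                        total += count_solutions(grid, limit - total)
                        grid[r][c] = 0
                        if total >= limit:
                            return total
                return total
    return 1  # no empty cells: the grid itself is one complete solution
```

For example, a fully solved grid gives a count of 1, an empty grid is reported as having multiple solutions, and a puzzle made by deleting a few clues can be tested after each deletion, exactly as step 4 advises.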
Copyright © University of Cambridge. All rights reserved. 'Road Maker 2' printed from http://nrich.maths.org/ This problem follows on from Road Maker, where the rules of making roads are detailed in full. The Munchkin road-making authority have commissioned you to work out the possible destinations for their roads. Use Cartesian coordinates where the first tile is placed with opposite corners on $(0,0)$ and $(1,1)$. Investigate ways in which you can reach your destination. You may like to consider these questions: 1. Can you make roads with rational values for the $x$ coordinate of the destination? 2. Can you make roads with rational values for the $y$ coordinate of the destination? 3. Can you create a road with the $x$ coordinate equal to any integer multiple of one half? 4. Can you make roads for which the coordinates of the destination are both rational? Both irrational? 5. Can multiple roads lead to the same destination? For which destinations is the road unique? You might like to experiment with the interactivity on the problem's web page.
9.4 Equations of motion and energy for circular orbits

Most inspiralling compact binaries will have been circularized by the time they become visible to the detectors LIGO and VIRGO. In the case of orbits that are circular - apart from the gradual 2.5PN radiation-reaction inspiral - the complicated equations of motion simplify drastically, since we have ṙ = O(1/c⁵) [16]. To display conveniently the successive post-Newtonian corrections, we employ the post-Newtonian parameter γ = Gm/(rc²). Notice that there are no corrections of order 1PN in Equations (187) for circular orbits; the dominant term is of order 2PN, i.e. proportional to γ². The relative acceleration is given by Equation (189); its radiation-reaction part is directed opposite to the velocity (see Equations (182, 183)). The main content of the 3PN equations (189) is the relation between the orbital frequency ω and the separation r [37, 38], given in Equation (190). The length scale appearing there is defined in Equation (184). As for the energy, it is immediately obtained from the circular-orbit reduction of the general result (170), Equation (191). The orbital frequency ω is a physical observable; from its numerical value we readily obtain from Equation (190) the expression of γ, which we substitute back into Equation (191), making all appropriate post-Newtonian re-expansions. As a result, we gladly discover that the logarithms, together with their associated gauge constant, cancel out. For circular orbits one can check that there are no terms of order x^(7/2) in (194), so our result for the energy is in fact valid through that order.
Playing with Probability Lesson Plan

Students learn how to calculate both theoretical and experimental probability by rotating through a series of work stations.

Standards (NCTM 3-5)

Number and Operation
Understand numbers, ways of representing numbers, relationships among numbers, and number systems
• recognize and generate equivalent forms of commonly used fractions, decimals, and percents;

Data Analysis and Probability
Formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them
• collect data using observations, surveys, and experiments;
• represent data using tables and graphs such as line plots, bar graphs, and line graphs;

Understand and apply basic concepts of probability
• describe events as likely or unlikely and discuss the degree of likelihood using such words as certain, equally likely, and impossible;
• predict the probability of outcomes of simple experiments and test the predictions;
• understand that the measure of the likelihood of an event can be represented by a number from 0 to 1.

Student Prerequisites
Technological: Students must be able to:
• perform basic mouse manipulations such as point, click and drag.
• use a browser such as Netscape for experimenting with the activities.

Teacher Preparation
Teacher will need:
• Have enough stations so that each pair of students can be working at an individual station. (You may want to have multiples of each station because some stations take longer to complete than others.)

This is a list of materials assuming that you will only need 6 stations. You can set up more than one of each station if needed.
• 2 race boards and 4 race cars
• 8 dice
• 2 pieces of paper numbered 1-12
• 10 square pieces of paper or 10 poker chips
• an opaque bag
• 15 white marbles
• 5 red marbles
• a spinner
• 3 index cards (a mole drawn on the reverse of one card)
• 2 pennies
• a deck of playing cards

Students will need:
• access to a browser.
• paper.
• pencil. Lesson Outline 1. Focus and Review Introduce the idea of probability through a discussion about something similar to the lottery. 2. Objectives Students will be able to calculate both experimental and theoretical probabilities as well as display probabilities in both graphical and fraction form. 3. Guided Practice □ Work through an example work station with the students. □ Fill out the appropriate section on the data collection sheet with the class. 4. Teacher Input □ Explain the procedure to be followed at each station. □ Explain that experimental probability is the actual results gathered by doing the experiment several times. □ Describe to the students how to calculate theoretical probability. □ Put the students into pairs. □ Have the students work through the stations allowing 5 minutes for each station. 5. Independent Practice □ Have students rotate between the stations and complete their data collection sheet. □ You may also want to have a computer station set up for the students to work with several probability applets that model some of the activities at the various stations. Some appropriate applets are: 6. Closure □ Have each group share the experimental data they collected from one experiment. Ask them if the experimental probability they calculated is the same as the theoretical probability. □ Reinforce the concepts of theoretical verses experimental probability. □ Compile the class' data for all the experiments and compare the individual group experimental results to the collective class results. The compiled class results should be closer to the theoretical probability than most individual group's results. □ Discuss why this is so. □ Discuss why computers might be helpful when working with probability experiments. Please use this form for questions and comments about this project. © Copyright 1997-2002 The s Education Foundation, Inc.
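The contrast between theoretical and experimental probability at the heart of this lesson can be sketched in a few lines of Python. The two-dice "sum equals 7" event and the trial count here are illustrative assumptions, not part of the lesson materials:

```python
import random

def theoretical_probability(favorable, possible):
    """P(event) = favorable outcomes / equally likely possible outcomes."""
    return favorable / possible

def experimental_probability(trials, seed=0):
    """Roll two dice `trials` times; estimate P(sum == 7) from the results."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return hits / trials

# Theoretical: 6 of the 36 equally likely two-dice rolls sum to 7.
p_theory = theoretical_probability(6, 36)
p_exp = experimental_probability(10_000)
```

With 10,000 rolls the experimental estimate typically lands within a percentage point or two of the theoretical 1/6, which mirrors the closing discussion: pooled class data should sit closer to theory than any single group's short run of trials.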
Damping in a rolling bearing arrangement

The operation of a rotating machine at critical speeds frequently causes a high level of mechanical vibration, noise and excessive wear. An important objective regarding rotating machinery is either to eliminate major resonances or to avoid operating the equipment at speeds which could induce them.

All shaft/bearing systems have a large number of frequencies at which they tend to vibrate. These are the so-called eigenfrequencies or natural frequencies. They are determined by the mass and stiffness distribution of the shaft as well as the location and stiffness of the supporting bearings. Sometimes the compliance of the housing also has a significant influence. Each eigenfrequency is associated with a different vibration mode, or eigenmode. In practice, engineers are mainly concerned with the modes at lower frequencies, because their vibration amplitudes are more pronounced than those at higher frequencies.

Critical speeds

The rotor speed at which the shaft/bearing system is in resonance with a periodic excitation force is defined as a critical speed. The most common excitation force is due to mass unbalance which occurs at the rotor speed. At most speeds the vibration amplitudes of a shaft/bearing system will be small, except when the rotational speed, and thus the frequency of unbalance excitation, is near an eigenfrequency of the system, which will then respond with excessive vibrations. Today's sophisticated rotor dynamics programs can take into account the stiffness of shafts, bearings and housings. This enables engineers to design rotor bearing systems with critical speeds located away from the operational speed by a safe margin. Calculation of the undamped critical speeds and mode shapes is an excellent tool for preliminary evaluation of rotor bearing systems.
These values can only give relative displacements, however, and a complete unbalance response analysis is needed to determine absolute displacements. This is particularly important for start-up and shut-down phases when critical speeds are often passed. The risk of machine failure at those passages should be estimated. In linear systems, the amplitude of the vibrations in resonance conditions depends mainly on the location and damping characteristics of the supporting bearings. While the stiffness of rolling element bearings is well understood in terms of external load and preload, little information is available on the damping characteristics of a rolling bearing arrangement. In particular, there is no standardised procedure available to estimate damping in a rolling bearing arrangement using theoretical models.

Vibration models for rolling bearings

The performance of rolling element bearings is guaranteed only with sufficiently stiff housings and shafts. Tight support of thin and flexible bearing rings is important for correct bearing operation and to take advantage of high load carrying capacity. The weight of the rolling bearings is rather small in comparison to the dimensions of shafts and housings commonly used in mechanical engineering. In rotor-dynamics calculations, rolling bearings are therefore considered mainly as massless components. Bearing properties are often described by a linearised radial bearing stiffness (fig. 2) and a roughly estimated equivalent viscous damping. For the most accurate investigations a (5×5) bearing stiffness matrix can be calculated, including all translational and tilt stiffness terms as well as the cross-coupling terms.

Sources of bearing damping

Vibrations are excited by dynamic forces and mechanisms which occur in any rotating machine. Mechanisms which decrease the vibration amplitudes by converting vibration energy into energy which is not relevant for the vibrating system (e.g. heat) are called damping mechanisms.
Based on SKF knowledge and on the experiments described below, the following major sources for damping within a rolling bearing arrangement may be addressed (fig. 1):

Source #1: Damping of the elasto-hydrodynamic (EHD) lubrication film within the Hertzian contact zone between the rolling elements and the raceways.
Source #2: Bearing interface damping between the bearing rings and housings or shaft respectively.
Source #3: Damping due to squeezing lubricant within the so-called entry region where the oil is entrained into the Hertzian zone.
Source #4: Material damping due to Hertzian deformation of the rolling elements and raceways.

Test rig and parameter identification

The test rig is designed so that its dynamic behaviour is determined mainly by those parameters which are under consideration. In this case, stiffness and damping coefficients of the rolling bearing arrangements are the major parameters. A pair of deep groove ball bearings 6309 are interference fitted to a heavy and very stiff symmetrical shaft. The assembly is then mounted into a very solid housing which is carried by a soft suspension (fig. 3 and 4). An experimental modal analysis verifies that the bearing arrangements represent the highest compliance in the system. All other parts (housing, shaft, preloading device etc.) behave as almost rigid bodies within the frequency range of interest, generating practically no additional structural damping. Measured excitation forces and vibration response signals are used to calculate experimental frequency response functions, which represent the relationship between the input excitation force and the resulting vibration response (fig. 4). In order to identify bearing stiffness and damping coefficients, an analytical multi-degree of freedom model is used that simulates the measured dynamic rig behaviour. This analytical model incorporates the unknown bearing stiffness and damping values, which are identified by curve-fitting the calculated transfer functions to the measured ones with common least square techniques (fig.

Experimental investigations

To separate experimentally the different damping mechanisms already mentioned, various test arrangements and operating conditions are applied. Experiments are carried out for various:
• rotor speeds
• bearing preloads
• vibration excitation forces
• vibration excitation signals
• conditions of bearing lubrication
• conditions of housing to ring interface.

Experimental results

It is important to emphasise that the effectiveness of the mechanisms in dissipating energy depends on local compliance at their locations. For example, the experiments showed a high damping capacity of the EHD lubrication film (source #1) and an even higher damping capacity of the bearing interface (source #2). Owing to enormous stiffness values within the EHD layer and the bearing, however, these mechanisms become fixed, and their high damping capacity cannot contribute to the overall damping of the rolling bearing arrangement. In case of vibrations, the major compliance occurs within the material of the rolling elements and the raceways, resulting in squeeze effects within the oil entry region. The damping mechanism within the material (hysteresis damping, source #4) and the damping mechanism within the oil entry region (source #3) predominantly determine the overall bearing damping. For deep groove ball bearings, as well as for angular contact ball bearings, it was found that dry bearings without lubricant possess the lowest damping coefficients of all investigated conditions. The equivalent viscous damping coefficients are in the range of 330 Ns/m to 550 Ns/m. Even the slightest amount of oil within the rolling contacts increases the bearing damping dramatically.
The viscous damping coefficients for the non-rotating bearing, for example, are identified to be in the range of 1,800 Ns/m to 2,100 Ns/m. With increasing rotor speed, the stiff EHD layer develops and fixes the damping mechanism within that zone. The values converge to the lower damping values of the lubricant-free bearing.

Theoretical approach

The identified minimum damping ability of a dry (lubricant-free) ball bearing can be estimated fairly accurately with a loss factor η_v, which is commonly applied in material damping theory. The empirical approach

η_v = ΔE_D / ΔE_V = 2πf c / k

gives a simple relationship between the unknown bearing damping coefficient c and the bearing stiffness coefficient k, which can be calculated. The term f describes the vibration frequency, ΔE_D the dissipated energy per load cycle and ΔE_V the maximum energy due to elastic deformation. For a deep groove ball bearing 6309, the loss factor can be estimated with η_v ≈ 1%. For angular contact ball bearings, a follow-up research programme was recently started at SKF Österreich AG and SKF ERC in co-operation with the Technical University of Vienna, Institute for Machine Dynamics and Measurements. Further damping data will soon be available for these bearing types.

Comparison of results

In this project the bearing stiffness and damping coefficients are obtained by experiments. This involves the risk of measuring unknown effects which may falsify the identification of the interesting bearing stiffness and damping values. For verification purposes, the bearing stiffness coefficients are calculated for all investigated operating conditions with proven SKF programs. An example of the close agreement between measured and calculated stiffness values is shown in figure 6. The data is shown in terms of axial bearing preload. The lower half of figure 6 compares experimentally identified bearing damping values with those obtained by the theoretical approach described.
Except for lightly preloaded bearings, the graph shows good correlation between measured and calculated values. Rotor dynamics in practice SKF has access to a series of sophisticated computer programs covering all needs in calculating the performance of rolling bearings in arbitrary applications. Static properties such as bearing deflections, contact stresses, load distributions or bearing stiffness values for any operating condition are calculated with a program developed by SKF. On special request from customers, finite element calculations and rotor dynamics calculations can also be carried out. For rotor dynamic calculations an additional program is available. This program allows a comprehensive study of the dynamic behaviour of shaft/ bearing systems and is connected to a data base including all SKF standard bearings. Recently, a customer experienced failures of SKF angular contact ball bearings mounted in air blowers after very short operating periods. Bearing defects could be excluded as the source of failure after investigations of the material, the heat treatment and the bearing dimensions. Static overloading of the bearings was also ruled out by means of simple calculations. When the operating speed of the air blower was increased, dynamic problems were expected. Subsequent calculations with the rotor dynamics program showed that the lowest eigenfrequency (bending mode of shaft) was close to that of the operating speed. This was eventually verified by the customer through experiment. Applying the rotor dynamic program, the effect of an increased shaft diameter of one or more shaft sections, and the influence of the blower location on the dynamic behaviour of the entire system, could be estimated. This formed the basis for an improved air blower design. 
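The loss-factor relation quoted earlier, η_v = 2πf c/k, can be rearranged to estimate the equivalent viscous damping coefficient c. A minimal sketch follows; the stiffness and frequency values below are illustrative assumptions, not SKF data:

```python
import math

def damping_from_loss_factor(eta, k, f):
    """Rearranged loss-factor relation: c = eta * k / (2 * pi * f).

    eta: loss factor (dimensionless), k: stiffness in N/m, f: frequency in Hz.
    Returns the equivalent viscous damping coefficient c in Ns/m.
    """
    return eta * k / (2.0 * math.pi * f)

# Assumed example values: eta ~ 1 % (the order estimated for a 6309 bearing),
# k = 3e8 N/m contact stiffness, f = 1 kHz vibration frequency.
c = damping_from_loss_factor(eta=0.01, k=3.0e8, f=1000.0)
```

With these assumed inputs c works out to roughly 480 Ns/m, the same order of magnitude as the 330 Ns/m to 550 Ns/m identified experimentally for dry bearings.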
Robert Zeillinger and Hubert Köttritsch, SKF Österreich AG, Steyr, Austria

If you are interested in using the text and images contained within this online magazine for publication, please contact the Editor-in-Chief by e-mail at EVOLUTION@SKF.COM for approval. You are welcome to quote from our articles free of charge, but please credit the source as 'Evolution - the business and technology magazine from SKF' (WWW.SKF.COM). If in doubt, please contact the Editor-in-Chief.
Wolfram Demonstrations Project

Using Zeta Zeros to Tally Sigma Times Tau

In number theory, the number of divisors of an integer n is usually denoted by τ(n). (τ is the lowercase Greek letter tau.) For example, 4 has three divisors (namely, 1, 2, and 4), so τ(4) = 3. The sum of the divisors of n is denoted by σ(n). (σ is the lowercase Greek letter sigma.) So, σ(4) = 1 + 2 + 4 = 7.

Suppose x ≥ 1. The sum of σ(n)τ(n) for n ≤ x is an irregular step function that jumps up at every integer. This Demonstration shows how we can approximate this step function with a sum that involves zeros of the Riemann zeta (ζ) function.

Snapshot 1: the graphs of the step function and the formula using no zeta zeros
Snapshot 2: the graphs of the step function and the formula using 100 pairs of zeta zeros

After you use the slider to choose the number of pairs of zeta zeros to use, this Demonstration uses a formula, equation (1), to calculate the sum of σ(n)τ(n) for n ≤ x; equation (1) expresses this sum as a main term plus sums over the zeros of the zeta function, and γ denotes Euler's constant. In equation (1), ρ_k is the k-th complex zero of the Riemann zeta function with positive imaginary part. The first three complex zeros of the zeta function are approximately 1/2 + 14.135i, 1/2 + 21.022i, and 1/2 + 25.011i. These zeros occur in conjugate pairs, so if ρ is a zero, then so is its conjugate.

If you use the slider to choose, say, one pair of zeta zeros, then the first sum in equation (1) adds the two terms that correspond to the first pair of conjugate zeros, 1/2 + 14.135i and 1/2 − 14.135i. These terms are conjugates of each other. When these terms are added, their imaginary parts cancel while their real parts add. So, taking the real part of the first sum is merely an efficient way to combine the two terms for each pair of zeta zeros. Notice that the second sum has the same form as the first, except that the second sum extends over the real zeros of the zeta function, namely −2, −4, −6, …. However, the second sum is too small to visibly affect the graphs, so this sum is not computed here. If you plot a graph using no zeta zeros, then the graph is computed with only the leading (non-oscillating) terms.

Where Does Equation (1) Come From?
To prove equation (1), we start with the following identity, which holds for s in a suitable right half-plane (see [1], equation D-58, and [2], Theorem 305):

(2) Σ_{n≥1} σ(n)τ(n)/n^s = ζ(s)² ζ(s−1)² / ζ(2s−1).

Perron's formula (see reference [3]) takes an identity like equation (2) and gives a formula for the sum of the numerators as a function of x, in this case, Σ_{n≤x} σ(n)τ(n). When we apply Perron's formula to equation (2), we get equation (1). To apply Perron's formula, we integrate the integrand ζ(s)² ζ(s−1)² / ζ(2s−1) · x^s/s around a contour in the complex plane. Each part of equation (1) is the residue at one of the poles of this integrand. The residue at the pole at s = 0 is a constant term. At s = 2, ζ(s−1) has a pole (of order 1), so the integrand has a pole of order 2 at s = 2 due to the factor ζ(s−1)². The residue at s = 2 gives the leading term of equation (1). Similarly, the integrand also has a pole of order 2 at s = 1, due to the factor ζ(s)², and the corresponding term of equation (1) is the residue at s = 1. Mathematica can compute these residues. For example, this calculation

Residue[Zeta[s]^2 Zeta[s-1]^2 / Zeta[2 s - 1] x^s/s, {s, 2}]

gives the residue at s = 2, in which γ and the other constants have the values given above.

Finally, the integrand has a pole at each complex zero of ζ(2s−1). The first sum in equation (1) is just the sum of the residues at these complex zeros of zeta. Each complex zero gives rise to one term in the sum. (If ρ is a complex zero of ζ(s), then (ρ+1)/2 is the corresponding zero of ζ(2s−1).) In the same way, the second sum arises from the real zeros (−2, −4, −6, …) of ζ(s). (If −2k is a real zero of ζ(s), then (1−2k)/2 is the corresponding zero of ζ(2s−1).)

[2] G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, 4th ed., Oxford: Oxford University Press, 1965, p. 256.

[3] H. L. Montgomery and R. C. Vaughan, Multiplicative Number Theory: I. Classical Theory, Cambridge: Cambridge University Press, 2007, p. 397.
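The step function being approximated, the sum of σ(n)τ(n) for n ≤ x, is easy to tally directly for small x. A brute-force Python sketch (the Demonstration itself is written in Mathematica, so this translation is only illustrative):

```python
def divisors(n):
    """All positive divisors of n (trial division; fine for small n)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma_tau_sum(x):
    """Tally S(x) = sum over n <= x of sigma(n) * tau(n)."""
    total = 0
    for n in range(1, x + 1):
        ds = divisors(n)
        total += sum(ds) * len(ds)   # sigma(n) * tau(n)
    return total
```

For example, the sum through x = 4 is 1·1 + 3·2 + 4·2 + 7·3 = 36, and the function jumps at each integer, as the Demonstration's plots show.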
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: grading a regents - could we try this?
Replies: 20
Last Post: Jun 4, 2013 9:33 PM

Kathy
Re: grading a regents - could we try this?
Posted: May 16, 2013 4:46 PM
Posts: 78
Registered: 8/17/06

I will check with her. She is usually very careful not to share something she is not 100% sure of. She's new but we've been very impressed with her so far and I would bet that her information is correct.

----- Original Message -----
From: MHulton@holland.wnyric.org
To: nyshsmath@mathforum.org
Sent: Thursday, May 16, 2013 8:36 AM
Subject: Re: grading a regents - could we try this?

We have the exact same situation. One Alg II & Trig teacher. My thought: is it allowed that the Algebra II and Trig teacher makes up the key and guides the other teacher correcting the exam through the correct solutions, using the rubric as their guide, before they even take the exams from the office to start to correct?

One more thing, if I interpreted this correctly, are you saying that if there is a mathematical question, the principal is allowed to come and ask me about the validity of the solution (even though I am the instructor of the course)? May I ask for verification on this? Who said it was ok?

Thank you,
Melanie Hulton

Mrs. Melanie Hulton
Mathematics Department Chair
Algebra, Algebra II/Trigonometry, and Trigonometry Instructor
Holland Central Jr./Sr. High School
716 537 8200 ext 7112
email: mhulton@holland.wnyric.org

-----owner-nyshsmath@mathforum.org wrote: -----
To: <nyshsmath@mathforum.org>
From: "Kathy Noftsier"
Sent by: owner-nyshsmath@mathforum.org
Date: 05/16/2013 07:21AM
Subject: Re: grading a regents - could we try this?

My principal has checked this out and said that I cannot be in the same room.
We will have one high school and two middle school teachers grading all three - possibly with help from a third middle school teacher and an elementary teacher (who is also certified high school math but hasn't taught it in many years). Yes, this is going to be tough on the middle school teachers. In the past they have brought in substitutes for them for some of the days. We have always needed to call on their help with only two teachers in the high school. The process, if they have any questions, is that they must ask her first (the principal), then she must bring the question to the high school teacher - me for both geometry and alg 2/trig. Integrated algebra shouldn't be quite as bad since I don't teach it but I wrote the curriculum for it so if any questions arise I can probably handle them. Kathy Noftsier ----- Original Message ----- From: "Steven Fenton" <sfenton@uvstorm.org> To: <nyshsmath@mathforum.org> Sent: Wednesday, May 15, 2013 6:04 PM Subject: grading a regents - could we try this? > Need help with this hypothetical situation - our district is small with > only 3 high school math teachers and 2 middle school math teachers - under > 1000 k-12 students. Given the size of the school we have one teacher for > Algebra 2 & Trigonometry and two Geometry teachers. State regulations say > we are not allowed to grade our own regents exams. The middle school > teachers will have classes to the end of the first week of regents, and > then field trips and middle school graduation the second week of regents > so their availability is questionable. The two middle school teachers have > expressed concern about the lack of a knowledge base especially for Geo > and Trig. Two of the high school teachers (not teaching Trig) have > expressed the same concern about the Trig course. We understand that state > regulations for grading math regents also say the regents exams are to be > graded by a committee of three certified teachers. 
We have discussed
> trying regional scoring.
> For grading High School Math Regents Exams could we do the following...?
> Could the two teachers not teaching the course and somehow find a
> competent third adult to be the third grader be guided by the teacher of
> record for the regents exam? The teacher of record would be in the same
> room taking questions on how a question could be graded but not know whose
> exam the question is coming from. We are a little uneasy given the
> complexity of the problems that could appear in any of the math regents
> for individuals who do not have the experience with the subject matter.
> Thoughts??? Is this right or wrong??
*******************************************************************
* To unsubscribe from this mailing list, email the message
* "unsubscribe nyshsmath" to majordomo@mathforum.org
*
* Read prior posts and download attachments from the web archives at
* http://mathforum.org/kb/forum.jspa?forumID=671
*******************************************************************
Date Subject Author
5/15/13
grading a regents - could we try this? Steven Fenton 5/16/13 Re: grading a regents - could we try this? JFish@csufsd.org 5/16/13 Re: grading a regents - could we try this? Kathy 5/16/13 Re: grading a regents - could we try this? MHulton@holland.wnyric.org 5/16/13 Euclid6675@aol.com 5/16/13 Lou Cino 5/16/13 ElizWaite@aol.com 5/16/13 Euclid6675@aol.com 5/17/13 Elaine Zseller 5/19/13 Euclid6675@aol.com 5/16/13 Re: grading a regents - could we try this? Kathy 5/16/13 Re: grading a regents - could we try this? gWilkie@highlands.com 5/16/13 RE: grading a regents - could we try this? bill wickes 5/17/13 RE: grading a regents - could we try this? Bruce L Hodgson 5/17/13 Re: grading a regents - could we try this? gWilkie@highlands.com 5/17/13 Re: grading a regents - could we try this? MathCaryl@aol.com 5/18/13 Re: grading a regents - could we try this? Kathy 5/17/13 Re: grading a regents - could we try this? ElizWaite@aol.com 5/17/13 RE: grading a regents - could we try this? edward mertson 5/16/13 Re: grading a regents - could we try this? Bolster, Kimberly 6/4/13 Re: grading a regents - could we try this? Steven Fenton
Ecology and Society: Power asymmetries in small-scale fisheries – a barrier to governance transformability?

Note: Respondents have been sorted according to their centrality in the LEK network, and the x-axis is logarithmically scaled. Blue diamonds represent centrality scores in the knowledge network. Red squares represent centrality scores in the gear exchange network. Black lines indicate the logarithmic curves fitted to each individual data set (polynomial fit). Deviations from this pattern have been qualitatively identified and are indicated with circles in the graph. Numbers in brackets correspond to the numbers in the column "Deviations" in Table 3.
Downers Grove Science Tutor

Find a Downers Grove Science Tutor

...As a current dental student, my passion for learning is proven every day, and I hope to inspire your child to achieve their academic goals. I played soccer from the time I was four until I was eighteen. I was the captain of my high school varsity team while I was an assistant coach for a team of 8th graders.
17 Subjects: including psychology, physical science, biology, chemistry

...I have completed college classes with high marks in anatomy and physiology, biology, and chemistry and am available to all ages of students seeking help in those subjects. I also offer assistance in various middle/high school math subjects, as well as reading and writing. My availability is flexible and I am willing to meet students at their choice of location.
4 Subjects: including biology, physiology, anatomy, prealgebra

...I have had hundreds receive I.M.E.A. district honors and several reach state. One year (1996) six Lake Park clarinetists all made I.M.E.A. Watching students grow and achieve is a joy of my
15 Subjects: including ACT Science, reading, English, writing

...I also have relevant business experience and feel comfortable advising students on any matters related to career choices. I invite you to message me if you have any additional questions. I look forward to helping you help yourself.
35 Subjects: including biology, elementary (k-6th), SPSS, phonics

I am an instructor at the community college level where I teach Intro to Biology, Human Structure and Function, Anatomy and Physiology, and Fundamentals of Chemistry. I have also student-taught and have been a home-bound teacher at Hinsdale Central. I am a certified teacher in math, physics, biology and chemistry.
14 Subjects: including physics, SAT math, algebra 1, algebra 2
Figure 2: Comparison of the theoretical Snell's law given by (15a) and (15b) (lines) against full numerical computations (points) for a unit-amplitude spatial soliton at a linear interface, for the two parameter regimes shown in panels (a) and (b). Curves below (above) the line correspond to internal (external) refraction.
200 km/h conversion to mph

You asked: 200 km/h conversion to mph
Answer: 200 kilometres per hour is 124.274238447467 miles per hour.
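The conversion itself is a single division by the kilometres-per-mile factor; a small sketch, assuming the international mile of exactly 1.609344 km (which reproduces the figure above):

```python
KM_PER_MILE = 1.609344  # international mile in kilometres, exact by definition

def kmh_to_mph(kmh):
    """Convert a speed in km/h to mph."""
    return kmh / KM_PER_MILE

speed_mph = kmh_to_mph(200)  # ~124.274 mph
```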
Incremental multi-step Q-learning Results 1 - 10 of 71 - Journal of Artificial Intelligence Research , 1996 "... This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem ..." Cited by 1298 (23 self) Add to MetaCart This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. , 1994 "... Reinforcement learning algorithms are a powerful machine learning technique. However, much of the work on these algorithms has been developed with regard to discrete finite-state Markovian problems, which is too restrictive for many real-world environments. Therefore, it is desirable to extend these ..." Cited by 285 (1 self) Add to MetaCart Reinforcement learning algorithms are a powerful machine learning technique. 
However, much of the work on these algorithms has been developed with regard to discrete finite-state Markovian problems, which is too restrictive for many real-world environments. Therefore, it is desirable to extend these methods to high dimensional continuous state-spaces, which requires the use of function approximation to generalise the information learnt by the system. In this report, the use of back-propagation neural networks (Rumelhart, Hinton and Williams 1986) is considered in this context. We consider a number of different algorithms based around Q-Learning (Watkins 1989) combined with the Temporal Difference algorithm (Sutton 1988), including a new algorithm (Modified Connectionist Q-Learning), and Q(λ) (Peng and Williams 1994). In addition, we present algorithms for applying these updates on-line during trials, unlike backward replay used by Lin (1993) that requires waiting until the end of each t... - MACHINE LEARNING , 1996 "... The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional ..." Cited by 186 (11 self) Add to MetaCart The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the offline TD(1) algorithm applied to undiscounted absorbing Markov chains.
First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that t... - In Proceedings of the Fifteenth International Conference on Machine Learning , 1998 "... This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchi ..." Cited by 122 (3 self) Add to MetaCart This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary "flat" Q learning. Finally, the paper discusses some interesting issues that arise in hierarchical reinforcement learning including the hierarchical credit assignment problem and non-hierarchical execution of the MAXQ hierarchy. 1 Introduction Hierarchical approaches to reinforcement learning (RL) problems promise ma... - JOURNAL OF MACHINE LEARNING RESEARCH , 2003 "... We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games.
A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably conv ..." Cited by 108 (0 self) Add to MetaCart We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance. , 1996 "... A key element in the solution of reinforcement learning problems is the value function. The purpose of this function is to measure the long-term utility or value of any given state and it is important because an agent can use it to decide what to do next. A common problem in reinforcement learning w ..." Cited by 92 (6 self) Add to MetaCart A key element in the solution of reinforcement learning problems is the value function.
The purpose of this function is to measure the long-term utility or value of any given state and it is important because an agent can use it to decide what to do next. A common problem in reinforcement learning when applied to systems having continuous states and action spaces is that the value function must operate with a domain consisting of real-valued variables, which means that it should be able to represent the value of infinitely many state and action pairs. For this reason, function approximators are used to represent the value function when a closed-form solution of the optimal policy is not available. In this paper, we extend a previously proposed reinforcement learning algorithm so that it can be used with function approximators that generalize the value of individual experiences across both state and action spaces. In particular, we discuss the benefits of using sparse coarse-coded funct... - Proceedings of the Seventeenth International Conference on Machine Learning , 2000 "... Eligibility traces have been shown to speed reinforcement learning, to make it more robust to hidden states, and to provide a link between Monte Carlo and temporal-difference methods. Here we generalize eligibility traces to off-policy learning, in which one learns about a policy different from the ..." Cited by 50 (4 self) Add to MetaCart Eligibility traces have been shown to speed reinforcement learning, to make it more robust to hidden states, and to provide a link between Monte Carlo and temporal-difference methods. Here we generalize eligibility traces to off-policy learning, in which one learns about a policy different from the policy that generates the data. Off-policy methods can greatly multiply learning, as many policies can be learned about from the same data stream, and have been identified as particularly useful for learning about subgoals and temporally extended macro-actions.
In this paper we consider the off-policy version of the policy evaluation problem, for which only one eligibility trace algorithm is known, a Monte Carlo method. We analyze and compare this and four new eligibility trace algorithms, emphasizing their relationships to the classical statistical technique known as importance sampling. Our main results are 1) to establish the consistency and bias properties of the new methods and 2) to empirically rank the new methods, showing improvement over one-step and Monte Carlo methods. Our results are restricted to model-free, table-lookup methods and to offline updating (at the end of each episode) although several of the algorithms could be applied more generally. 1. , 1995 "... This dissertation is submitted for consideration for the degree of Doctor of Philosophy at the University of Cambridge Summary This thesis is concerned with practical issues surrounding the application of reinforcement learning techniques to tasks that take place in high dimensional continuous ..." Cited by 45 (0 self) Add to MetaCart This dissertation is submitted for consideration for the degree of Doctor of Philosophy at the University of Cambridge Summary This thesis is concerned with practical issues surrounding the application of reinforcement learning techniques to tasks that take place in high dimensional continuous state-space environments. In particular, the extension of on-line updating methods is considered, where the term implies systems that learn as each experience arrives, rather than storing the experiences for use in a separate off-line learning phase. Firstly, the use of alternative update rules in place of standard Q-learning (Watkins 1989) is examined to provide faster convergence rates. Secondly, the use of multi-layer perceptron (MLP) neural networks (Rumelhart, Hinton and Williams 1986) is investigated to provide suitable generalising function approximators.
Finally, consideration is given to the combination of Adaptive Heuristic Critic (AHC) methods and Q-learning to produce systems combining the benefits of real-valued actions and discrete switching - In Proceedings of the 13th International Conference on Machine Learning , 1996 "... Reinforcement learning is the process by which an autonomous agent uses its experience interacting with an environment to improve its behavior. The Markov decision process (mdp) model is a popular way of formalizing the reinforcement-learning problem, but it is by no means the only way. In this pap ..." Cited by 44 (5 self) Add to MetaCart Reinforcement learning is the process by which an autonomous agent uses its experience interacting with an environment to improve its behavior. The Markov decision process (mdp) model is a popular way of formalizing the reinforcement-learning problem, but it is by no means the only way. In this paper, we show how many of the important theoretical results concerning reinforcement learning in mdps extend to a generalized mdp model that includes mdps, two-player games and mdps under a worst-case optimality criterion as special cases. The basis of this extension is a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence. 1 INTRODUCTION Reinforcement learning is the process by which an agent improves its behavior in an environment via experience. A reinforcement-learning scenario is defined by the experience presented to the agent at each step, and the criterion for evaluating the agent's behavior. One particularly well-studied reinforcement-le... "... When several agents learn concurrently, the payoff received by an agent is dependent on the behavior of the other agents. As the other agents learn, the reward of one agent becomes non-stationary. This makes learning in multiagent systems more difficult than single-agent learning. A few methods ..." 
Cited by 38 (6 self) Add to MetaCart When several agents learn concurrently, the payoff received by an agent is dependent on the behavior of the other agents. As the other agents learn, the reward of one agent becomes non-stationary. This makes learning in multiagent systems more difficult than single-agent learning. A few methods, however, are known to guarantee convergence to equilibrium in the limit in such systems. In this paper we experimentally study one such technique, the minimax-Q, in a competitive domain and prove its equivalence with another well-known method for competitive domains. We study the rate of convergence of minimax-Q and investigate possible ways for increasing the same. We also present a variant of the algorithm, minimax-SARSA, and prove its convergence to minimax-Q values under appropriate conditions. Finally we show that this new algorithm performs better than simple minimax-Q in a general-sum domain as well.
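The papers listed above all revolve around variants of the same tabular update. As a concrete illustration (not code from any of the cited papers; the toy MDP, constants, and names are my own choices), here is a minimal sketch of one-step Q-learning extended with a replacing eligibility trace, the mechanism analyzed in the replacing-trace paper above:

```python
import random

random.seed(0)

# Toy chain MDP: states 0..4, actions +1/-1, reward 1 on reaching state 4.
N, GOAL = 5, 4
ACTIONS = (1, -1)
alpha, gamma, lam, eps = 0.5, 0.9, 0.8, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(200):
    s = 0
    e = {k: 0.0 for k in Q}          # eligibility traces, reset per episode
    done = False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        delta = r + (0.0 if done else gamma * Q[(s2, greedy(s2))]) - Q[(s, a)]
        e[(s, a)] = 1.0              # replacing trace; accumulating would be e += 1
        for k in Q:                  # one TD error updates all recently visited pairs
            Q[k] += alpha * delta * e[k]
            e[k] *= gamma * lam
        s = s2

# After training, the action leading toward the goal should dominate.
assert Q[(3, 1)] > Q[(3, -1)]
```

The single line setting `e[(s, a)] = 1.0` is the only thing that distinguishes replacing from accumulating traces; everything else is ordinary TD-style bootstrapping.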
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=161595","timestamp":"2014-04-21T01:03:28Z","content_type":null,"content_length":"41447","record_id":"<urn:uuid:f8496c15-1ec1-4743-84a1-3c2a4fee0aef>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Characterization of the Lie derivative up vote 9 down vote favorite The exterior differential of differential forms on a manifold can be characterized as the unique super-derivation of degree 1 on the exterior algebra of forms such that $\langle df,X\rangle=X(f)$ for $f$ a $C^{\infty}$ function, $X$ a vector field. So we really only need to know how to compute $df$, and everything else follows formally. Is there a similar characterization for the Lie derivative acting on differential forms? My guess: Extend $L_{X}$ from $L_{X}f=X(f)$ to an action on all forms so that it commutes with $d$ (Cartan's magic formula?) and perhaps $L_{[X,Y]}=[L_{X},L_{Y}]$. In other words, the Lie derivative should be a homomorphism of Lie algebras from vector fields to degree zero derivations of the de Rham algebra. If this is correct, can anyone give references for this point of view? dg.differential-geometry smooth-manifolds 4 Answers Two ways of thinking about $L_X$ on differential forms: (1) Define it by using the infinitesimal flow determined by $X$. This implies that (a) $L_X$ is a degree $0$ derivation of the algebra of differential forms (because pulling back by a diffeomorphism is an automorphism of the algebra), and (b) it commutes with $d$ (because pulling back by a diffeomorphism commutes with $d$), and (c) $L_Xf=Xf$. And there cannot be more than one operator with properties (a), (b), (c), because the algebra is generated by functions and exact $1$-forms. (2) Define it by $L_X=d\circ i_X+i_X\circ d$. This also implies (a), (b), and (c) (using $d^2=0$), so it has to be the same as (1). Conceptually it's better to think of an operator on sheaves of differential forms (forms defined on open subsets), because the fact that the algebra of global forms is generated by what we said uses some ad hoc constructions in the $C^{\infty}$ case and is false in the holomorphic case, or in algebraic geometry.
And of course in algebraic geometry you don't have the flow even locally, so (2) is especially good. I did not mention tangent vector fields and $L_XY=[X,Y]$. But let me point out the following. For any sort of tensor bundle you can name (made by starting with the tangent bundle, dualizing, tensoring, symmetrizing, ...), there is an $L_X$ acting. These are all instances of the following: you have a derivation $D$ of functions, and a related linear operator $L$ on sections of the bundle, and it satisfies $L(fs)=(Df)s+fLs$. The operator on the tensor product of two bundles satisfies $L(s\otimes t)=Ls\otimes t+s\otimes Lt$. And the operator on a bundle and its dual? If we denote the pairing of sections of $E$ and sections of $E^\star$ into functions by $(s,\alpha)$, then we have $D(s,\alpha)=(Ls,\alpha)+(s,L\alpha)$. This can serve as definition of either $Ls$ or $L\alpha$ in terms of the other. For $E=TM$ and $D=X$ and $L=L_X$ this says $X(Y,df)=(L_XY,df)+(Y,L_Xdf)$, i.e. $XYf=(L_XY)f+(Y,L_Xdf)$, which makes the statement $L_XY=XY-YX$ equivalent to the statement $L_Xdf=d(Xf)$. As a possible reference, I would bring your attention to a paper T.J. Willmore, The definition of Lie derivative, Proc. Edinb. Math. Soc. (Ser.2) 1960, 12, 27-29. It is freely available here. The link appears broken. – Vladimir Dotsenko Mar 15 '12 at 18:20 Dear Vladimir Dotsenko, thank you very much, it should be fixed now. – Giuseppe Tortorella Mar 16 '12 at 10:39 The link still does not work for me. – Deane Yang Mar 16 '12 at 11:12 Dear Deane Yang it should be correct now, thank you. – Giuseppe Tortorella Mar 16 '12 at 16:34 Let me add my two cents to the great answers of Dick Palais and Tom Goodwillie. Let $A^\bullet(M)=\bigoplus_{k=0}^\infty A^k(M)$ be the graded algebra of differential forms on your manifold $M$.
Then, for a vector field $X$, $\iota_X$ is the unique derivation of degree $(-1)$, satisfying $\iota_X(\alpha)=\alpha(X)$ for all $\alpha\in A^1(M)$, and $d$ is the unique derivation of degree $(+1)$, satisfying $df(X)=X(f)$ for all $f\in A^0(M)$. If $D_1$ and $D_2$ are graded derivations of degrees $p$ and $q$ respectively, their graded commutator $$ D_1D_2 - (-1)^{pq}D_2D_1 $$ is a derivation of degree $p+q$. In our case, this is $$ L_X = d\iota_X + \iota_X d, $$ a degree zero derivation. The Lie derivative $L_X$ with respect to a smooth vector field $X$ is of course well-defined on the whole tensor algebra and it is a derivation of this algebra. If $f$ is a smooth function it satisfies $L_X (f) = X(f)$ and $L_X(df) = d(L_X(f))$. And if $Y$ is a smooth vector field it satisfies $L_X(Y) = [X,Y]$. Since the tensor algebra is generated by functions, differentials of functions, and vector fields, these properties characterize $L_X$. Thank you. Could you clarify what you mean by `whole tensor algebra'? Do you mean $\oplus_{i,j} TM^{\otimes i} \otimes T^{*}M^{\otimes j}$, the tensor algebra of $TM \oplus T^{*}M$? And when you say the tensor algebra is generated by..., I suppose you mean locally, over contractible open subsets, right? – A. Pascal Mar 15 '12 at 15:57 @A.Pascal Sorry if I was being too elliptical. By the "tensor algebra" I meant the bi-graded algebra of all smooth tensor fields of any co- or contravariance. Actually, by using partitions of unity you do not need to restrict to contractible open subsets. – Dick Palais Mar 15 '12 at 17:31
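As a concrete complement to the answers above, Cartan's formula $L_X\alpha = d\iota_X\alpha + \iota_X d\alpha$ can be checked symbolically for a 1-form on $\mathbb{R}^2$ against the coordinate formula for the Lie derivative. This is only an illustrative sketch (sympy, with components written out by hand), not part of the thread:

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q, u, v = [sp.Function(n)(x, y) for n in ('P', 'Q', 'u', 'v')]

# 1-form alpha = P dx + Q dy, vector field X = u d/dx + v d/dy.
# Cartan's magic formula: L_X alpha = d(i_X alpha) + i_X(d alpha).
iX_alpha = u*P + v*Q                                  # i_X alpha, a function
d_iX = (sp.diff(iX_alpha, x), sp.diff(iX_alpha, y))   # its differential
curl = sp.diff(Q, x) - sp.diff(P, y)                  # d alpha = curl dx^dy
iX_d = (-v*curl, u*curl)                              # i_X(dx^dy) = u dy - v dx
cartan = (d_iX[0] + iX_d[0], d_iX[1] + iX_d[1])

# Coordinate (flow) formula: (L_X alpha)_i = X^j d_j alpha_i + alpha_j d_i X^j.
lie = (u*sp.diff(P, x) + v*sp.diff(P, y) + P*sp.diff(u, x) + Q*sp.diff(v, x),
       u*sp.diff(Q, x) + v*sp.diff(Q, y) + P*sp.diff(u, y) + Q*sp.diff(v, y))

assert all(sp.simplify(a - b) == 0 for a, b in zip(cartan, lie))
```

The two sides agree identically in $P, Q, u, v$, which is exactly the uniqueness argument: both are degree-zero derivations commuting with $d$ and agreeing on functions.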
{"url":"http://mathoverflow.net/questions/91294/characterization-of-the-lie-derivative?answertab=active","timestamp":"2014-04-16T04:40:32Z","content_type":null,"content_length":"70114","record_id":"<urn:uuid:5a267cd1-6b20-4192-804b-79d96e7526db>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of the Optical Society of America The introduction of the natural and the diapoint variables has simplified considerably the investigation of problems in the large, problems where we can give the eiconal in closed form, and therefore for each object ray the image ray. The author and H. Boegehold have, first together and then alone, investigated problems of this kind. The author sketches first once more his new method, and gives then a series of examples, such as the eiconal for the case where to each object point belongs a rotation symmetric caustic, the eiconal for sharp image formation of two surfaces, the eiconal in the case where there is perfect centric symmetry, and other examples which are discussed as far as the time limit permits. M. HERZBERGER, "Optics in the Large," J. Opt. Soc. Am. 27, 202-206 (1937) 1. W. R. Hamilton, third supplement to an essay on the theory of systems of rays. Trans. Irish Academy 17, 1–144 (1837); also in Mathematical Papers, Vol. I (1931). 2. H. Bruns, "Das Eikonal," Leipzig Sitz. ber. 21, 321–436 (1895). 3. M. Herzberger, Strahlenoptik (J. Springer, 1931), pp. 167–175. 4. See for instance M. Herzberger, "On the Fundamental Optical Invariant," J. O. S. A. 25, 295–304 (1935). 5. M. Herzberger, "New Theory of Optical Image Formation," J. O. S. A. 26, 197–204 (1936). 6. H. Boegehold and M. Herzberger, "Kugelsymmetrische Systeme," Zeits. für Ang. Math. u. Mech. 15, 157–178 (1935). 7. C. Maxwell, On the general laws of optical instruments, Scientific Papers I (1858), pp. 271–285. 8. H. Boegehold and M. Herzberger, "Kann man zweiverschiedene optische Flächen scharf abbilden?" Comp. Math. 1, 1–29 (1935). 9. H. Boegehold, "Raumsymmetrische Abbildung," Zeits. f. Instrumentenk. 56, 98–109 (1936).
{"url":"http://www.opticsinfobase.org/josa/abstract.cfm?uri=josa-27-6-202","timestamp":"2014-04-21T08:20:07Z","content_type":null,"content_length":"64016","record_id":"<urn:uuid:74d2ea21-7d23-4e09-8696-5df719549156>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
113 projects tagged "Mathematics" The Java Algebra System (JAS) is an object oriented, type safe, multi-threaded approach to computer algebra. JAS provides a well designed software library using generic types for algebraic computations implemented in the Java programming language. The library can be used as any other Java software package, or it can be used interactively or interpreted through a Jython or JRuby front end. The focus at the moment is on commutative and solvable polynomials, power-series, multivariate polynomial factorization, Gröbner bases, and applications.
{"url":"http://freecode.com/tags/mathematics?page=1&sort=popularity&with=18&without=","timestamp":"2014-04-21T07:51:09Z","content_type":null,"content_length":"98160","record_id":"<urn:uuid:f5fa03e8-9ef8-4264-a27c-057ade2f8bec>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] A surprising result from benchmarking
Dan Becker dbecker@alum.dartmouth....
Sun Mar 11 00:02:13 CST 2007

Hi Everyone,

I'm new to numpy, and I'm finding it hard to predict what is fast in python/numpy and what is slow. The following seems puzzling: I am doing the same thing an ugly way and a cleaner way. But the ugly map/lambda/filter expression is 15x faster than using numpy's internals. Can anyone explain why? For now, this makes me nervous about incorporating basic numpy functionality into real programs.

---Code starts here---
import scipy
import time
import psyco
from numpy import matrix

print("New run")
greaterPerLine = [sum(x) for x in highEnough]
print("method 1 took %f seconds" % elapsed1)
print("method 2 took %f seconds" % elapsed2)

---Output starts here---
New run
method 1 took 3.566760 seconds
method 2 took 0.232356 seconds

Thanks so much!

More information about the Numpy-discussion mailing list
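The benchmark's full code did not survive in this archive (the array `highEnough` and the timings `elapsed1`/`elapsed2` are defined elsewhere in the lost portion), so here is a hedged reconstruction of the comparison being described, with my own array size and variable names: summing each row with a Python-level loop versus one vectorized numpy reduction.

```python
import time
import numpy as np

# Hypothetical stand-in for the poster's boolean array.
highEnough = np.random.rand(500, 500) > 0.5

t0 = time.time()
perLinePython = [sum(x) for x in highEnough]   # Python-level loop over rows
t1 = time.time()
perLineNumpy = highEnough.sum(axis=1)          # single C-level reduction
t2 = time.time()

assert list(perLineNumpy) == perLinePython
print("loop: %f s, vectorized: %f s" % (t1 - t0, t2 - t1))
```

The usual explanation for results like the post's is per-call overhead: builtin `sum()` walks element by element through the interpreter, and wrapping small arrays in numpy types (or `matrix`) costs more per operation than it saves unless each call does a substantial amount of work.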
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026449.html","timestamp":"2014-04-19T12:19:43Z","content_type":null,"content_length":"3621","record_id":"<urn:uuid:d63af5d3-381f-4c25-abd1-e45103c7c1ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
Tensor Product of Spaces Well, you know that the dimension of V is [tex]d_1d_2...d_n[/tex]. So I guess that you probably need this many vectors to express an arbitrary vector in V... Well, yes, you need that many vectors to form a basis for V. But I am asking something slightly different. I'll give a concrete example: Suppose n=2, V_1 = R^2, and V_2 = (R^3)* (the space of linear functionals on R^3). Then V = V_1 [tex]\otimes[/tex] V_2. Any tensor in V is a 2x3 matrix: M = [tex]\left[ \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right][/tex] Now, the problem is that not every such matrix is the tensor product of two elements v_1 and v_2 of V_1 and V_2 respectively. Any maximal-rank matrix is able to be represented as the sum of exactly two (and no fewer) such "elementary" tensors (where an "elementary" tensor is the tensor product of two elements v_1 and v_2 (need not be basis vectors) of V_1 and V_2 respectively). A non-maximal rank matrix (i.e., one with rank strictly less than 2 = min{2,3}) can be represented as a sum of fewer "elementary" tensors (e.g., one). V contains elements M in the correct form such that I can put a vector x (a column vector in R^3) to the right of M, and I will get another vector back as a result of that "multiplication". If I put a linear functional (row vector) to the left of it, I will get back another linear functional (row vector). If I put both a vector to the right, and a linear functional to the left, I will get back a real number. The matrix M itself can be thought of as two linear functionals on R^3 (row vectors) attached to two particular vectors (column vectors) in R^2. The topmost row of M is a linear functional attached to the vector (1,0) (in column form), and the bottommost row of M is a linear functional attached to the vector (0,1) (in column form).
When you put a vector (in R^3) to the right of M, you are using those two linear functionals to determine coefficients a_1, a_2 (for the top and bottom rows, respectively) which will factor into the sum a_1 (1,0) + a_2 (0,1). You can also paint a similar picture for thinking about multiplying a linear functional (row vector) on the left of M. In this case, M can be thought of as consisting of three vectors (in R^2) each attached to one of the row vectors (1,0,0), (0,1,0), and (0,0,1). However, since there are three such vectors in R^2, they cannot be linearly independent. So you really only need two vectors attached (respectively) to two row vectors, although this time the row vectors may not be basis vectors. So either way you think about it, you see that you really only need two bits of "elementary" information to represent any particular matrix M in V, even though there are 6 basis elements to V. The "elementary" bits of information can be represented as a pair (x, x') in V_1 × V_2 (modulo an equivalence relation). For matrices, the number of "elementary" bits of information you need is just the maximal number of linearly independent rows (or columns) of M. For higher order tensors, I postulate that it is something a bit more complicated, but that it has an upper bound (which is achieved) of R (where R is defined above).
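The claim that a full-rank 2x3 matrix is a sum of exactly two elementary tensors (and no fewer) can be made concrete with the singular value decomposition: each term s_i u_i v_i^T below is an elementary tensor u_i ⊗ v_i. A quick numpy sketch, not from the thread:

```python
import numpy as np

# A rank-2 ("maximal rank") 2x3 matrix.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# SVD writes M as a sum of min(2,3) = 2 rank-1 terms s_i * u_i v_i^T.
U, s, Vt = np.linalg.svd(M)
terms = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]

assert np.linalg.matrix_rank(M) == 2
assert np.allclose(M, terms[0] + terms[1])
```

A matrix of rank 1 would need only one such term, matching the "non-maximal rank" case in the post.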
{"url":"http://www.physicsforums.com/showthread.php?t=480130","timestamp":"2014-04-20T05:49:11Z","content_type":null,"content_length":"33346","record_id":"<urn:uuid:a4c09548-3a33-4870-b9df-2330f3627516>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: What are the solutions of the equation x² – 9 = 0? Use a graph of the related function.
{"url":"http://openstudy.com/updates/4f6b4678e4b014cf77c8173e","timestamp":"2014-04-18T23:23:49Z","content_type":null,"content_length":"65663","record_id":"<urn:uuid:84e1aa17-3e82-4217-838a-e41ea35d7456>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
We work out the impact that the recently determined time-dependent component of the Pioneer Anomaly (PA), interpreted as an additional exotic acceleration of gravitational origin with respect to the well known PA-like constant one, may have on the orbital motions of some planets of the solar system. By assuming that it points towards the Sun, it turns out that both the semi-major axis a and the eccentricity e of the orbit of a test particle experience secular variations. For Saturn and Uranus, for which modern data records cover at least one full orbital revolution, such predicted anomalies are up to 2-3 orders of magnitude larger than the present-day accuracies in empirical determinations of their orbital parameters from the usual orbit determination procedures in which the PA was not modeled. Given the predicted huge sizes of such hypothetical signatures, it is unlikely that their absence from the presently available processed data can be attributable to an “absorption” of them in the estimated parameters caused by the fact that they were not explicitly modeled. The magnitude of a constant PA-type acceleration at 9.5 au cannot be larger than 9 × 10^-15 m s^-2 according to the latest observational results for the perihelion precession of Saturn.
{"url":"http://harvard.voxcharta.org/2012/04/04/orbital-effects-of-the-time-dependent-component-of-the-pioneer-anomaly-replacement-4/","timestamp":"2014-04-18T00:15:40Z","content_type":null,"content_length":"29563","record_id":"<urn:uuid:42657f3e-7deb-46a2-86af-e89ead27ef61>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
symmetric visitor pattern up vote 1 down vote favorite I'm using the visitor pattern to define a set of operations on some classes. Some operations are commutative, so I end up with duplication in the visitor pattern code. Let's say I have classes A, B, C, and the operations: A*A, A*B, A*C, B*A, B*B, B*C, C*A, C*B, C*C. A*A, B*B, C*C are unique. A*B, B*A and friends will have code duplication. I could implement A*B and make B*A call A*B, but I will end up asking myself: in which file did I implement the operation between A and B again, in A or in B? (there will be about 6 classes, so I will ask this question a lot; 15 pairs of possible operations) There is a risk of someone in the future making an infinite loop of A*B calling B*A calling A*B when implementing a new operation. It's unnatural to have a convention that decides which of A*B or B*A should be implemented. I could make a 3rd file with all the implemented functions which are called by both A*B and B*A, but that doesn't seem very object oriented. How would you solve this issue? (I could list some code, but it's long and doesn't illustrate the point easily) c++ oop design-patterns visitor 2 Answers You are right, you should definitely refrain from implementing A*B as a call of B*A. In addition to the potential of creating an infinite chain of calls, that approach does not reflect the symmetry of the operation in your code, because the code is not symmetric. A better approach is to implement a "symmetric" operation in a helper class or as a top-level function, depending on what is supported in your language, and then have both A*B and B*A call that helper implementation.
I guess there isn't much of a choice – titus Aug 18 '12 at 15:35 Scott Meyers has an excellent discussion on the topic in his More Effective C++ book (Item 31: Making functions virtual with respect to more than one object). His example is making a collision function for a game where various objects may collide in space. His example is symmetric too (it does not matter if an asteroid hits a spaceship or a spaceship hits an asteroid, the explosion is the same). He starts with a visitor pattern, and gradually works out a C++-specific solution that relies on RTTI. – dasblinkenlight Aug 18 '12 at 15:40 is this in the 1996 edition? – titus Aug 18 '12 at 16:11 @titus I am pretty sure that it is (I read that book in 1997, and the item was there). – dasblinkenlight Aug 18 '12 at 16:31 I tried to implement with RTTI, I used typeid().name(), it was very slow; then I tried making a static function for each subclass that returns an int, which is one of the indexes in a function pointer matrix. It's still 3x slower than the visitor pattern. Here is the repository github.com/titusnicolae/comparison/blob/master/rtti.cpp – titus Aug 23 '12 at 15:45
– AmitD Aug 18 '12 at 16:02
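Summing up the accepted suggestion, here is a minimal sketch of the shared-helper idea, transcribed into Python for brevity (the question is about C++, and all names here are invented for illustration): both dispatch orders funnel into one top-level helper, so the symmetric code lives in exactly one place and no A*B-calls-B*A loop is possible.

```python
class Node:
    """Base class for operands of the commutative operation."""
    def mul(self, other):
        raise NotImplementedError


# One top-level helper per unordered pair; both A*B and B*A call it,
# always with the arguments in the same order.
def _mul_a_b(a, b):
    return f"A*B({a.name},{b.name})"


class A(Node):
    def __init__(self):
        self.name = "a"

    def mul(self, other):
        if isinstance(other, A):
            return "A*A"
        if isinstance(other, B):
            return _mul_a_b(self, other)


class B(Node):
    def __init__(self):
        self.name = "b"

    def mul(self, other):
        if isinstance(other, B):
            return "B*B"
        if isinstance(other, A):
            return _mul_a_b(other, self)  # same helper, same argument order


# Both orders give the same result, computed by the single helper.
assert A().mul(B()) == B().mul(A())
```

The same structure carries over to a C++ visitor: the helper becomes a free function or a static member of a helper class, and each `visit` overload of the pair forwards to it.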
logarithmic proof of quotient rule

Let $f$ and $g$ be differentiable functions and $y=\frac{f(x)}{g(x)}$. Then $\ln y=\ln f(x)-\ln g(x)$. Thus,

$$\frac{1}{y}\cdot\frac{dy}{dx}=\frac{f'(x)}{f(x)}-\frac{g'(x)}{g(x)}.$$

Therefore,

$$\begin{aligned}
\frac{dy}{dx}&=y\left(\frac{f'(x)}{f(x)}-\frac{g'(x)}{g(x)}\right)\\
&=\frac{f(x)}{g(x)}\left(\frac{f'(x)}{f(x)}-\frac{g'(x)}{g(x)}\right)\\
&=\frac{f'(x)}{g(x)}-\frac{f(x)g'(x)}{(g(x))^{2}}\\
&=\frac{g(x)f'(x)-f(x)g'(x)}{(g(x))^{2}}.
\end{aligned}$$

Once students are familiar with the natural logarithm, the chain rule, and implicit differentiation, they typically have no problem following this proof of the quotient rule. Actually, with some prompting, they can produce a proof of the quotient rule similar to this one. This exercise is a great way for students to review many concepts from calculus.

Added: 2006-10-10 05:11
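The derivation can also be checked numerically. The snippet below (my own addition, with sample functions f and g chosen arbitrarily) compares a direct difference quotient of f/g against both the quotient-rule formula and the logarithmic form used in the proof.

```python
import math

def f(x):
    return x**3 + 1.0

def g(x):
    return x**2 + 2.0

def deriv(fn, x, h=1e-6):
    """Central-difference approximation to fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2.0 * h)

x = 1.3

# dy/dx computed directly versus the quotient-rule formula.
lhs = deriv(lambda t: f(t) / g(t), x)
rhs = (g(x) * deriv(f, x) - f(x) * deriv(g, x)) / g(x)**2
assert abs(lhs - rhs) < 1e-6

# The logarithmic step: (ln y)' = f'/f - g'/g.
log_lhs = deriv(lambda t: math.log(f(t) / g(t)), x)
log_rhs = deriv(f, x) / f(x) - deriv(g, x) / g(x)
assert abs(log_lhs - log_rhs) < 1e-6
```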
Continuous function on a compact subset of R

March 5th 2009, 10:34 AM #1

Here's the question: "Let $K$ be a compact subset of $\mathbb{R}$ and let $f : K \rightarrow \mathbb{R}$ be a continuous function. Prove that for every $\epsilon > 0$ there exists $L_{\epsilon}$ such that $|f(x) - f(y)| \leq L_{\epsilon} |x-y| + \epsilon$ for every $x,y \in K$."

I don't really see the "point" of the question - can't we just take $L_{\epsilon} = |f(x) - f(y)| / |x-y|$? What are we "supposed" to do?

March 5th 2009, 01:12 PM #2

You can't just take $L_{\epsilon} = |f(x) - f(y)| / |x-y|$, because $L_{\epsilon}$ has to be a constant. Suppose for example that $f$ is the function $f(x) = \sqrt x$ defined on the interval $K = [0,1]$. Then $|f(x) - f(y)| / |x-y|$ becomes unbounded when $x=0$ and $y\to 0$.

I think the trick is to consider two separate cases: when $x$ and $y$ are close together, and when they are not. Given $\epsilon>0$, the uniform continuity of $f$ tells you that there exists a $\delta>0$ such that $|f(x) - f(y)| < \epsilon$ whenever $|x-y|<\delta$. That deals with the case when $x$ and $y$ are close together (in other words, within $\delta$ of each other). Now you just have to show that $|f(x) - f(y)| / |x-y|$ is bounded when $|x-y|\geqslant\delta$. (I'll leave that part to you.)
Thompson groups seminar Thompson groups seminar, Fall 2009 Reading list To answer the question "What is a specific group X?" one responds with an isomorphism class. This is somewhat unsatisfactory and one usually wants sample representatives of that class. Thompson groups have many well known representatives. No one representative is "the best" and it is good to know more than one. A presentation might look like a different way to describe a group, but technically it just specifies another representative of the isomorphism class. However, presentations have their different advantages and it is good to know some of them as well. Cannon, Floyd and Parry To start with representatives of the isomorphism classes and some presentations, it is best to start with the paper by Cannon, Floyd and Parry. Here is a preprint of their paper. I think it is mostly identical to the published version, but I am not sure. I will refer to the paper as [CFP]. The notation of [CFP] has started to stick. They mention that several older sources use different letters for the groups, and some newer sources still use other letters, but ninety percent of the papers written since [CFP] agree with the names of the groups used in [CFP]. The representatives that are used might be the same, slightly different or very different. Remember that only an isomorphism class is being specified. The important parts for F are: 1. The PL function representative for F in Section 1. 2. The tree diagrams of Section 2 and the relation to the PL functions of Section 1. This includes the normal form of 2.7, and the set of positive elements of 2.8. 3. The presentations for F in Section 3. 4. Items 4.1, 4.2, 4.3, 4.4, 4.5 in Section 4 are quite important. Item 4.3 is useful for proving that a candidate representative of the Thompson group F is actually isomorphic to F. One just finds a non-abelian group that satisfies the relations of F. I usually refer to 4.3 as NPNAQ (No Proper Non-Abelian Quotients). 
The rest of Section 4 is interesting but less essential at the beginning. The group T is introduced in Section 5. The material up to 5.3 is a good introduction. From that point the effort is to prove that the presentation above 5.3 is a correct presentation for T. This is a major effort and can be deferred until later. The group V is handled similarly in Section 6. The effort up to the presentation above 6.2 is to describe the group and its elements and find relations that are satisfied. From 6.2 on is the proof that the presentation is correct. Again, this can be deferred. The derivations of the presentations for T and V include as a major step that the groups are simple. The proof is very involved since it is done for the abstract presentations rather than for the group of homeomorphisms. The reason for this is that once the simplicity of the presentation is shown, it is a triviality that the presentation is the correct presentation for the group of homeomorphisms. Easier proofs of simplicity exist for the groups of homeomorphisms. This will be discussed later. Section 7 gives one attempt to generalize F to higher dimensions. No one that I know of has worked with this since [CFP] appeared. Generalizing V has proven to be much easier. Brown's Finiteness Properties paper The paper Finiteness properties of groups by Ken Brown discusses several classes of groups including Thompson's groups. We include only the pages on Thompson's groups. The groups are represented as automorphisms of a certain algebra in Section 4A. They are related to the discussion in [CFP] by the use of tree pairs. They generalize the usual groups by considering trees with more than two descendants of each node. In Section 4B, the homeomorphisms are brought in, which ties up the circle of points of view. The fact that trees with more than 2 children per node are considered in Section 4A results in PL homeomorphisms with slopes other than powers of 2 being considered here.
Generators and relations are discussed in Section 4C. The discussion in Section 4D starts an analysis that ends with the proof that certain subgroups are simple. This is a long involved section since many different groups are being considered. It can be deferred until later. Section 4E gets to the point of the paper: the groups have strong finiteness properties. This refers to parts of the paper that have not been copied. What might be interesting is the action that is defined on related complexes. This section can also be deferred. Section 2 of the Brin-Guzman paper discusses some of the groups in Browns' finiteness properties paper and some generalizations. The representation here is of PL homeomorphisms on the half line. The fact that the line and half line can be used as successfully as the unit interval is useful. There is minor reference to Section 1 in that the groups are put into two large classes (A and B). Because of this some of the material in Section 1 (through Section 1.4) would be good to read. In Section 2, there is much overlap with Brown's paper and should be easy going. The material in 2.5 does not appear in Brown (it appears in other places) and should be read. Nothing after Section 2.5 is worth reading at this point. The proofs of simplicity are less involved than the proofs in [CFP] or in Brown's finiteness properties paper. They are less involved than the proofs in [CFP] since they are about the homeomorphism groups and not the presentations and are less involved than in the Brown paper since fewer groups are considered. The argument of Higman in 2.4.2(g) is very important. It is repeated with less machinery surrounding it in the paper below on higher dimensional groups. Higher dimensional groups The paper on higher dimensional groups has lots of nice pictures relating to a generalization of V (the evidence that V is easier to move into higher dimensions than F) and two proofs of simplicity. 
One (for [2V, 2V]) is geometric (3.3 which is based on the more painful 3.2) and one (for V) is very combinatorial and is almost trivial (Section 12) and is included to show how easy it is to prove simplicity for V and how much harder it apparently is to prove it for 2V. It turns out to be almost as easy to prove for 2V and the needed ingredients are given here. The paper by Brown and Geoghegan has a short introduction to the group F. It includes discussions of certain points (normal form, free abelian subgroups, universal properties) but assumes some results such as NPNAQ. We only include the relevant pages. Thompson's notes You will see references in various papers to "widely circulated, handwritten notes of Richard J. Thompson." They are here scanned in two groups. They are scanned at high resolution since they are hard to read and so they are in two files: Pages 1-7 and Pages 8-11. They are more readable than they look at first, but are rough going. They are not that continuous and take huge jumps in spots. The ending pages are rather scattered.
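The representatives discussed in this reading list can be spot-checked numerically. Below is a small Python sketch (my own illustration, not from the seminar page) of the standard PL representatives of the generators x0 and x1 of F on [0,1], with the usual dyadic breakpoints. The presentation's relation [x0·x1⁻¹, x0⁻¹·x1·x0] = 1 corresponds to the two composed maps below commuting pointwise; in fact their supports are disjoint, which is how the check succeeds.

```python
def pl(breaks):
    """Increasing piecewise-linear map from a list of (x, y) breakpoints."""
    def f(t):
        for (a, fa), (b, fb) in zip(breaks, breaks[1:]):
            if t <= b:
                return fa + (t - a) * (fb - fa) / (b - a)
        return breaks[-1][1]
    return f

def inv(breaks):
    """Inverse of an increasing PL map: just swap the coordinates."""
    return [(y, x) for (x, y) in breaks]

# Standard breakpoint data for x0 and x1 (x1 is the identity on [0, 1/2]
# followed by a copy of x0 squeezed into [1/2, 1]).
X0 = [(0, 0), (0.5, 0.25), (0.75, 0.5), (1, 1)]
X1 = [(0, 0), (0.5, 0.5), (0.75, 0.625), (0.875, 0.75), (1, 1)]
x0, x0i = pl(X0), pl(inv(X0))
x1, x1i = pl(X1), pl(inv(X1))

a = lambda t: x0(x1i(t))       # the element x0 * x1^{-1}
b = lambda t: x0i(x1(x0(t)))   # the element x0^{-1} * x1 * x0

# a is supported on [0, 3/4] and b on [3/4, 1], so they commute.
for i in range(101):
    t = i / 100.0
    assert abs(a(b(t)) - b(a(t))) < 1e-9
```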
Woodhaven Algebra 2 Tutor Find a Woodhaven Algebra 2 Tutor ...I am familiar with AP Calculus AB and BC. I received a score of "5" on both exams. My work in 16 AP classes has expanded my vocabulary tremendously; I received a 740 on the SAT Reading section, a section which tests your vocabulary skills. 43 Subjects: including algebra 2, English, calculus, reading ...I have a solid foundation in math, biology, chemistry, physics, and physiology with an emphasis in neurobiology. I love the process of teaching and learning, and would look forward to tutoring you if you are the student, or your child if you are the parent. I wanted to start out tutoring things more outside of my direct area of expertise, because I wanted the challenge of learning to 11 Subjects: including algebra 2, chemistry, physics, biology ...It is interesting how none of the education methods is perfect and each has its loop holes. I often deal with students having problems understanding math. I give my best to the students and it is working. 9 Subjects: including algebra 2, chemistry, calculus, algebra 1 ...In my math and physics tutoring experience, I've helped students from high school through college levels and beyond! That includes the following subjects: algebra I & II, geometry, trigonometry, pre-calculus, and calculus (including AP, AB and BC). I've helped students at many different public a... 12 Subjects: including algebra 2, physics, MCAT, trigonometry ...My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one. My approach to mathematics tutoring is creative and 9 Subjects: including algebra 2, calculus, geometry, algebra 1
stones horizontal curve crown thrust line pressure stone strain weight ARCH, in building, a portion of mason-work disposed in the form of an arc or bow, and designed to carry the building over an open space. The simplest and oldest expedient for supporting a structure over a door-way is to use a single stone or lintel of sufficient length. On account of the difficulty of procuring stones of great size, this expedient can only be used for moderate apertures ; nor can it be applied when there is to be a heavy superstructure, because the weight resting on the lintel would cause a compression of the upper, and a distension of the under side. Now, no kind of stone can bear any considerable distending strain, and thus stone-lintels are liable to fracture. The ancient Greek temples afford instances of the use of horizontal lintels of considerable size, but these architraves carry only the cornice of the building. The employment of a colonnade with flat architraves to support an upper story is contrary to sound principles, and, even in the case of ordinary houses, we see that the builder has been fain to relieve the pressure on the lintel by means of a concealed arch. In stone-work we must depend on compression alone. When a lintel had been accidentally broken in two, we may suppose that the masons had set the ends of the halves upon the door-posts, and brought the broken ends together. In this way there would be formed a support for the upper building much stronger than was the stone when entire ; only there is a tendency to thrust the doorposts asunder, and means must be taken to resist this out-thrust. The transition from this arrangement to that of three or more wedge-shaped stones fitted together was easy, and thus the gradual development of the arch resulted. 
So long as such structures are of small dimensions no great nicety is required in the adaptation of the parts, because the friction of the surfaces and the cohesion of the mortar are sufficient to compensate for any impropriety of arrangement.. But when we proceed to construct arches of large span we are forced to consider carefully the nature and intensity of the various strains in order that provision may be made for resisting them. Until the laws of the equilibrium of pressures were discovered, it was not possible to investigate these strains, and thus our knowledge of the principles of bridge-building is of very recent date ; nor even yet can it be said to be perfected. The investigation is one of great difficulty, and mathematicians have sought to render it easier by introducing certain pre-supposed conditions ; thus, in treatises on the theory of the arch, the structure is regarded as consisting of a course of arch-stones resting on abutments, and carrying e, load which is supposed to press only downwards upon the arch-stones. Also cohesion and friction are put out of view, in other words, the investigation is conducted as if the stones could slide freely upon each other. Now, if the line of pressure of one stone against another cross their mutual surface perpendicularly, there is no tendency to slide; and if this condition be adhered to throughout the whole structure, there must result complete stability, since the whole of the friction and the whole consistency of the cement contribute thereto. But if, in any case, the line of pressure should cross the mutual surface obliquely, the tendency to slide thereby occasioned must be resisted by the cohesion, and so the firmness of the structure would be impaired. Hence an investigation, conducted on the supposition of the non-existence of cohesion, must necessarily lead us to the best possible construction. 
Ent we can hardly say as much in favour of the hypothesis that the load presses only downwards upon the arch-stones. In order to place such a supposition in accordance with the facts of the case, we should have to dress the inner ends of the arch-stones with horizontal facets for the purpose of receiving and transmitting the downward pressure. But if, as is usually the case, the inner surfaces be oblique, they cannot transmit a vertical pressure unless in virtue of cohesion, and then this hypothesis of only downward pressure on the arch-stones is not in accordance with the fundamental principle of stability. In a thorough. investigation this hypothesis must be set aside, and the oblique pressure on the inner ends of the arch-stones must be taken into account. Since the depth of the arch-stones is small in comparison with the whole dimensions of the structure, and since the line of the pressure transmitted from one to another of them must always be within that depth, it is admissible to suppose, for the purpose of analysing the strains, that the arch-stones form an exceedingly thin course, and that their joints are everywhere normal to the curve of the arch. Eventually, however, the depth of the arch-stones must be carefully considered. We may best obtain a clear view of the whole subject by first assuming that the load presses only downwards on the arch-stones, or that the inner ends of those are cut with horizontal facets. Let Q'P'APQ (fig. 4) represent a portion of such an arch placed equally on the two sides of the crown A, then the whole weight of the structure included between the two vertical lines PIT and PH must be supported at P' and P, so that the downward pressure at the point P must be the weight of the building imposed over AP. This pressure downwards is accompanied by a tendency to separate the supporting points P' and P. 
Now, as this tendency is horizontal its intensity cannot be changed by the load acting only downwards, and must remain the same throughout the structure, wherefore the actual pressure at P must be found by combining this fixed horizontal thrust with the downward pressure equal to the weight of the bridge from A to P. If, then, we draw alt horizontally to represent this constant thrust, and ap upwards to represent the weight of this portion of the arch, the line pis must, according to the law of the composition of pressures, indicate both in direction and in intensity the actual strain at the point P. This pressure must be perpendicular to the joint of the stones, and must therefore be parallel to the straight line drawn to touch the curve at P. Hence, if the form of the inside of the arch, or the intrados as it is called, be prescribed, we can easily discover the law of the pressures at its various parts ; thus, to find the strain at the point Q, we have only there to apply a tangent to the curve and to draw kg parallel thereto ; kg represents the oblique strain at Q ; eq represents the whole weight from the crown A to Q, and therefore pq is proportional to the weight imposed upon the position PQ of the arc. Using the language of trigonometry, the horizontal thrust is to the oblique strain at any part of the curve as radius is to the secant of the angle of inclination to the horizon; also the same horizontal thrust is to the weight of the superstructure as radius is to the tangent of the same inclination. And thus, if the intrados be a known curve, such as a circle, an ellipse, or a parabola, we are able without much trouble to compute, on this hypothesis, the load to be placed over each part. 
If we use the method of rectangular co-ordinates, placing x along OH and z vertically downwards, so that Pr may be the increment of x, rQ that of z, the tangent of the inclination at P is δz/δx, and therefore, if h stand for the horizontal strain, and w for the weight of the arch, we have $w=h\frac{\delta z}{\delta x}$, while the oblique strain is $h\sqrt{1+\left(\frac{\delta z}{\delta x}\right)^{2}}$. Also the change of weight from P to a proximate point Q is $\delta w=h\,\delta\!\left(\frac{\delta z}{\delta x}\right)$. Let RST be the outline which the mason-work would have if placed compactly over the arch-stones, in which case RST is called the extrados; then the weight supported at P is proportional to the surface ARSP, and the increment of the weight is proportional to PSTQ; hence, if the weights and strains be measured in square units of the vertical section of the structure, and if y be put for PS, the thickness of the mason-work, we have $\delta w=y\,\delta x$, whence $y=h\frac{\delta}{\delta x}\!\left(\frac{\delta z}{\delta x}\right)$. When the curve APQ is given, the relations of z and of its differentials to x are known, and thus the configuration of the extrados can be traced, and we are able to arrange the load so as to keep all the strains in equilibrium. But when the form of the extrados is prescribed and that of the intrados is to be discovered, we encounter very great difficulties. Seeing that our hypothesis is not admissible in practice, it is hardly worth while to engage in this inquiry; it may suffice to take a single, and that the most interesting, case.
In this case, since 8w= Ay, - so that the form of the areh and also its weight may readily be computed by help of a table of catenarian functions. Let us now consider the case when the ends of the arch-stones are dressed continuously, while the imposed load is formed of stones having vertical faces. The weight of the column PSTQ resting on the oblique face PQ is prevented from sliding by a resistance on the vertical surface QT, which resistance goes to partly oppose the horizontal strain transmitted by the preceding arch-stone ; and thus the out-thrust of the arch, instead of being entirely resisted by the ultimate abutment, is spread over the whole depth of the structure. In this case the horizontal thrust against QT is to the weight of the column as Qr., the increment of 7, 13 to , the increment of x; wherefore, putting H for the horizontal thrust at the crown of the arch, and h for that part of it which comes down to P, the decrement of h. from P to Q is proportional to the rectaugle under PS and Qr, that is to say, Sk=y&. Now, the whole decrement from the crown downwards is the sum or integral of all such partial decrements, and therefore the horizontal thrust transmitted to P is expressed by the symbol - w=fy8x. But the resultant of these two pressures must be perpendicular to the joint of the arch-stones, or parallel to the line of the curve • wherefore ultimately we obtain, as the con- dition of equilibrium in such a structure, the equationSz(II - fy8z) = SxfySx Since the vertical pressure at P is so, while the horizontal strain is h, the intensity of the oblique strain at P must be ,/(w2 +M). Now, in passing to the proximate point Q, so becomes so + 8w, while it is reduced to it - Stoaz - so that the oblique strain at Q must be - J( (20 + 802 ÷ (it - Stutz)2 ), zu =1 - wherefore the strain at Q is „,/(w2 + h2), or exactly the same as that at P. 
This result might have been obtained from the consideration that the thrust upon the surface PQ is perpendicular to the oblique strain, and can tend neither to augment nor to diminish it. Hence, as a characteristic of this arrangement, we have the law that the tension across the joints of the arch-stones is the same all along, and therefore is equal to H, the horizontal tension at the crown of the arch. From this it at once follows that if r be the radius of curvature at the point P, y being the vertical thickness of the mason-work there, H = ry, so that if R be the radius of curvature at the crown of the arch, and A the thickness there, the horizontal thrust there, or the strain transmitted along the arch-stones, is H = RAH, being measured in square units of surface ; hence also A : y :r :11, or the thickness at any place, is inversely proportional to the radius of curvature there. When the form of the intrados is given, its curvature at any point is known, and from that the thickness of the stone-work and the shape of the extrados can be found. The most useful case of the converse problem is, again, that in which the extrados is a horizontal straight line. Let OH, figure 6, be the horizontal extrados, and A the crown of the arch ; make also AB such that its square may represent the horizontal thrust there; then, having joined OB and drawn BC perpendicular to it, and meeting the continuation of OA in C, C is the centre of curvature for the crown of the arch. Or, if the radius of curvature and the thickness of the arch at the crown be pre • scribed, we may obtain the horizontal thrust by describing on CO a semicircle, cutting a horizontal line through A in the point B, then the horizontal thrust is equal to the weight of the quantity of the stone-work which would fill up the square on AB. The conditions of the problem require that the curve APQ be so shaped as that the radius of curvature at any point P shall be inversely proportional to the ordinate HP. 
Resuming the general equation of conditionSz(H - fy8z)= 8.rfy8x , and observing that in this case y=z, we haveSz(H - .118z)=8*Sx Now the integral .11-8: is 1z2, but as it must be reckoned only from A where 2= A, the equation becomes Sz(11 + 1-A2 - iz2)= 8.rfiSx . The coefficient of Sz becomes less when z increases, and when 1;z2= 112 + 2A2, this coefficient becomes zero, at which time az also becomes zero in proportion to Sz ; that is to say, the direction of the curve becomes vertical. Wherefore, if we make OD = D such that D2 = A2+ 2112, we shall obtain that depth at which the curve is upright, or at which the horizontal ordinate DQ is.the greatest, and then the equation takes the form-- Sz(D2 - z2) = 28:03.r , difficulty, and therefore it may be convenient to attempt a graphical solution. Since, for any vertical ordinate HP( = z), the horizontal thrust is 1(D2 - 7.2), while the oblique strain is 1(D2 - A2), the obliquity of the curve at P has for - its cosine the value D2 z2 wherefore the angle at which the curve crosses the horizontal line pP is knoWn. Let then a multitude of such lines be drawn in the space between BA and DQ, and let the narrow spaces thus marked be crossed in succession from A downwards by lines at the proper inclination, and we shall obtain a representation of the curve, which will be nearer to the truth as the intervals are more numerous. The beginning of the curve at A may be made a short are of a circle described from the centre O. Since the minute differentials thus obtained are proportional to the sides of a triangle whose hypotenuse is D2 - A2, and one of whose sides is D2 - z2, we must have - 8x - 4./ ( (D2 _ A2)2_ (D2_ 2,2)2) 8> , and the integration of this would give the value of x. If we put 0 for the inclination of the curve at any point P, D2 - z2 = (D9 - A2) cos 0, ••• z= (D2_ (D2 _ A 2) cos OP, and taking the differential, Sz = RD?. = .A.2) sin st,(D2 - (D2 - A2) cos CI 80, II. cos p. .•. 
8x - %/(D2-21-1 cos p) where 211 is put for its equivalent D2 - A2. The integral of this expression may be obtained by developing the radical in terms arranged according to the powers of cos 0, and then integrating each term separately. The result is a series of terms proceeding by the powers of cos 0, the coefficient of each power being itself an interminate series ; and the rate of convergence is so slow as to make the labour of the calculations very great. Such expressions belong to the class of elliptic functions, for which peculiar methods have been devised. Fortunately the actual calculation is not required in the practice of bridge - building, and therefore we shall only refer the reader to the above-named subject. If the horizontal thrust and the thickness at the crown of the arch be prescribed, the radius of curvature there must be the same whichever of the two hypotheses be adopted ; now, if we sweep an arch from the centre C with the radius CA, the catenarian curve lies outside of it, while the curve which we have just been considering lies inside. Each of these is compatible with sound principles: the one if the inner ends of the arch-stones be dressed with horizontal facets, the other if the ends be dressed to a continuous curve ; wherefore, between these two limits we may have a vast variety of forms, each of which may be made consistent with the laws of equilibrium by merely dressing the inner ends of the arch-stones at the appropriate angles. 
Hence an entirely new field of inquiry, in which we may find the complete solution of the general problem: - " The intrados and extrados of an arch being both prescribed, to arrange the parts consistently with the laws of equilibrium:' Let PQ represent the inner end of one of the arch-stones, the part Qq being vertical, and Pq being sloped at some angle which is to be found ; put t for the tangent of the inclination of the joint P to the vertical, for that of Pq to the horizontal line, then the horizontal strain at P is - I°' while the corresponding strain at Q is w -F&o - ' and if Pq were horizontal these would be alike ; t but the obliquity of Pq causes the load Sw which is placed on it to generate a horizontal pressure 08w, wherefore - t t+St =08w, whence w St 1 Or Ot w St -1. = - - - - t2 t Sw t Now, when the forms of the intrados and extrados are both given, the values of w, t, 8w, 8t, are thence deducible, so that the value of B may always be computed by help of differentiations only ; excepting, indeed, that integrations may be needed for determining the value of w, which is the area included between the two curves. In this very simple investigation we have the complete solution of the principal problem in bridge-building. The data needed for determining the shape of the inner end of the arch-stone are already in the hands of the architect, who must know, from his plans, the weight of each part and the inclination of each joint ; so that, with a very small addition to the labour of his calculations, he is enabled to put the structure completely in equilibrium, even on the supposition of there being no cohesion and no friction ; that is to say, he is enabled to obtain the greatest stability of which a structure having the prescribed outlines is susceptible. 
Even although he may not care to have the stones actually cut to the computed shape, and may regard their usual roughness and the cement as enough, he may judge, by help of the above formula, of the practicability of his design ; for if at any place the value of 08t come out with the wrong sign, that is, if w.8t be less than t.8w, the building is unstable, whereas if w.St be greater than t.8w everywhere, the design, as far as these details go, is a safe one. In every possible arrangement of the details, the horizontal thrust at the crown of the arch is transmitted to and resisted by the ultimate abutments. The only effect, in this respect, of varieties in the form of construction is to vary the manner of the distribution of that strain among the horizontal courses. Hence one great and essential element of security, - the first thing, indeed, to be seen to, is that the ground at the ends of the proposed bridge be able to resist this out-thrust. Another, and not less important one is, that the arch-stones be able to withstand the strains upon them. In this respect much depends on the workmanship ; it is all important that the stones touch throughout their whole surfaces ; if these surfaces be uneven the stones must necessarily be subjected to transverse strains, and so be liable to fracture. The practice, too common among house-masons, of cheaply obtaining an external appearance of exactitude, by confining their attention to a chisel-breadth around the outside, is not permissible here, nor should any reliance be placed on the layer of mortar for making up the inequalities. The limit to the span of an arch depends primarily on the quality of the material of the arch-stones. 
At the crown of the arch the horizontal thrust is the weight of as much of the masonry as fills a rectangle whose length is equal to R, the radius of curvature, and whose breadth is A, the effective thickness there; now this strain has to be borne by the arch-stones, whose depth we shall denote by d, and therefore these stones must be subjected, as it were, to the direct pressure of a vertical column whose height is RA/d. This column must be much shorter than that which the stone is actually able to bear. The ability of a substance to resist a crushing pressure is generally measured by the length of the column which it is able to support, without reference to the horizontal section; but it may be questioned whether this mode of estimation be a sound one, for it does seem natural to suppose that a block three inches square should bear a greater load than nine separate blocks each one inch square, seeing that the centre block in the entire stone is protected on all sides; and thus it is possible that we under-estimate the greatest practicable span of a stone arch. This difficult subject belongs to the doctrine of "Strength of Materials." (Anon.)

SKEWED. - In the earlier days of bridge-building the road was led so as to cross the river or ravine perpendicularly, but in modern engineering we cannot always afford to make the detour necessary for this purpose, and must have recourse to the skewed or oblique arch, having its plan rhomboidal, not rectangular. If AB, CD, figure 8, represent the roadway, and EF, GH, the boundaries of the abutment walls placed obliquely, we easily perceive that the thrust cannot be perpendicular to the abutments, for then it would go out on the side walls which have no means of resistance; the thrust can only be resisted in the direction of the road.
Hence if the structure be divided into a multitude of slices by vertical planes parallel to the parapet, the strains belonging to each slice must be resisted within that slice, and each should form an arch capable of standing by itself. The abutment, therefore, cannot have a continuous surface as in the common or right arch, but must be cut in steps to resist the oblique pressure; wherefore also the ultimate foundation stones must present surfaces perpendicular to the road. Attending for the moment to one only of these divisions, say to a thin slice contiguous to the side wall EG, let us study the manner in which the arch-stones in it must be shaped. At the crown I the pressure is horizontal in the plane EIG, and therefore the joint of the stones there must be perpendicular to AB, and so also must be its projection on the horizontal plane. Proceeding along the line of the curve to the point R, we observe that the pressure there must be in the direction of a tangent to the curve, wherefore the surface of a joint at R must be perpendicular to that tangent, and the exposed face of the stone must be right-angled. Now, the projection upon a horizontal surface of a right angle placed obliquely is not necessarily right; in this case it cannot be right, and therefore the course of a line of joints represented in plan must bend away from being perpendicular to the side wall towards being parallel to the line of the abutment. Thus a continuous course of joints beginning at I must be shown in plan by some curved line such as IPp. In many of the skewed bridges actually built, the outline of the arch is divided into equal parts, as seen on the ends of the vault; the curved joint-lines IPp thus become portions of screws drawn on an oblique cylinder, and, although the arch-stone at the crown be rectangular, those on the slope cease to be so.
The bearing surface is thus inclined to the direction of the pressure, and the tendency is to thrust out the arch-stones at the acute corners F and G. The fault is exactly the same as if, in ordinary building, the mason were to bed the stones off the level. The consequence is that skewed stone-bridges have not given satisfaction, the fault being attributed to the principle of the skew, whereas it should have been assigned to the unskilfulness of the design. Let figure 9 be an elevation projected on a vertical plane parallel to AB, EIG, FSH, being the outlines of the ends of the arch, and the sections taken at equal intervals along the crown line being also shown; then, since the projection of a right angle upon a plane parallel to one of its sides is always right, the joint at R, as seen on this elevation, must be perpendicular to the curve at R, and thus the curve IPp, representing one of the joint-courses, must cross each of the vertical sections perpendicularly. In this way each of the four-sided curvilinear spaces into which this elevation is divided must be right-angled at its four corners. This law is general, and enables us to determine the details of any proposed oblique arch. If we draw, as in figure 9, the end elevation of the vault as intersected by numerous parallel planes, and lead a curved line crossing all these intersections perpendicularly, we obtain the end elevation of one of the joint-lines, and are able from it to prepare any other of its projections. The form and character of this end elevation IPp depends entirely on the nature of the curve EIG, but is the same whatever may be the angle of the skew. In order to examine its general character, let us take in the crown line two closely contiguous points I, K, and from these lead the joint-lines IP, KQ, of equal length, then the straight line PQ is equal and parallel to IK, on any of the projections.
If in the end elevation, figure 9, we continue the joint IP to meet the vertical section OQ in p, we may regard PQp as a small rectilineal triangle, right-angled at p, while the angle PQp is the inclination to the horizon. Now, PQ : Qp :: radius : cos PQp, while PQ is equal to KI, the breadth of the arch-stone at the crown, wherefore the breadth of the course at the crown is to the breadth of the same course at any other place as radius is to the cosine of the inclination there. Hence it follows, as is shown in the end elevation, figure 10, that the arch-stones gradually diminish in breadth from the crown, becoming still narrower, and an infinity of them would be needed to reach the abutment of a semicircular or semi-elliptic arch, because the cosine of the inclination there is zero. In no properly-built skewed bridge can the arch-stones show equal divisions; and it is impossible to continue the arch to the complete half circle or half ellipse. Passing from the end elevation, figure 9, to the plan, figure 8, we observe that Qp on the plan is less than the actual Qp of the elevation in the ratio of the cosine of the inclination to radius, and, therefore, on the plan, the breadth at the crown is to the apparent breadth of the course at any other place as the square of the radius is to the square of the cosine of the inclination there; so that, at the inclination of 60° the apparent breadth will be quarter of that at the crown. Again, in figure 11, which is the side elevation of the vault, or its projection on a vertical plane perpendicular to the road, the apparent distance Qp is to the actual distance Qp of figure 9 as the sine of the inclination is to radius, wherefore the apparent breadth Qp on this projection is proportional to the product of the sine by the cosine of the inclination, that is, to half the sine of twice the inclination. The width on this projection is therefore greatest at an inclination of 45°, being there just one-half of the actual breadth at the crown of the arch.
This reasoning is founded on the supposition that the distance IK is excessively small, and the resulting conclusions are strictly true only of an infinitely narrow course of arch-stones; they are, indeed, differential equations which must be integrated in order to be applied to actual practice. Thus we have seen that the curved line IP, figure 9, crosses the section NP perpendicularly at P, but then it does not continue in this direction for any perceptible distance. The draughtsman may attempt to trace it by making the sections very numerous, and by drawing perpendiculars across the successive intervals; but however numerous he may make these sections, he can thus only effect an approximation to the true curve. We must integrate, that is, we must obtain the aggregate of an infinite number of infinitely small portions, in order to reach an absolutely true result. These conclusions hold good whatever may be the outline of the arch. The most common, and therefore the most interesting case, is when the longitudinal section is circular, the cross section taken perpendicularly to the abutment being then an ellipse with its shorter diameter placed horizontally, the vault being an oblique cylinder. Figure 9 is actually drawn for the circular arch. If then O be the centre of the circular arc NP, the curve IP must at P tend towards O, so that the draughtsman, while making the step across one of the intervals, has only to keep his straight edge up to the corresponding place of the centre. If we place the paper horizontally, fix a small heavy round body at P to the end of a thread OP, and then draw the end O of that string along the straight line HEF, P would always move towards the then position of the point O, and would trace out the curve of which we are in search. The projection, then, of the joint of an oblique circular arch upon a vertical plane parallel to the road, is always the curve known by the name of the Tractory.
All tractories have the same shape, the size merely is regulated by the length of the thread OP, that is, by the radius of curvature of the circular arch. Hence, if the delineation of it have been accurately made in one case, the curve for another case may be obtained by mere enlargement or reduction; or, still better, in all cases it may be traced by help of a table of co-ordinates, such as that subjoined, which shows the dimensions of the tractory as represented in figure 12, in decimal parts of the radius of curvature of the arch. The computations have been made for equal motions of the point O, corresponding, therefore, to equal distances measured along the crown-line of the arch. The headings of the columns sufficiently explain their contents. By help of these the form of the tractory may easily be obtained, and with a piece of veneer or of thin metal cut to this shape, the architect may obtain all the details of the intended structure, first working out the said elevation, figure 9, and transferring the several points therefrom to the other projections. If we put s for the angle of the skew, v for the distance IN measured along the crown of the vault, and i for the inclination at the point P, r being the radius of the arch, the distance IN or IO of figure 10 is clearly v sin s, and as the result of the integration we obtain

v sin s = r × Nap. log tan (45° + i/2),

by help of which equation we can readily determine i when v is known, or v when i is given. The table of Napierian logarithmic tangents being very scarce, it is convenient to convert these into denary or common logarithms. Putting, as is usual, M for the modulus of denary logarithms, that is, for 0.43429 44819, the above equation becomes

(M/r) × v sin s = log tan (45° + i/2),

from which it is quite easy to tabulate the values of i corresponding to equidifferent values of v, because the constant factor (M sin s)/r has to be only once computed; i, that is, the number of degrees in the arc NP, being thus computed for each of the successive sections of the vault, we have only to divide a tape-line so as to show degrees and minutes of the actual circle in order to be able at once to mark the course of the joints upon the centering of the arch; or, better still, instead of the degrees, we may write upon the tape the successive values of NP, and then the commonest workman will be able to lay off the lines. The only other kind of skewed arch likely to possess any interest is the elliptic. In right arches the semi-ellipse is sometimes used on account of the grace of its form, but this reason for its adoption disappears in the case of the skew, because then we can only use a portion of the semi-ellipse. The end elevation of a joint in an elliptic skewed arch is a modified form of the tractory, and the general features of the arrangement are analogous to those of the circular arch. The arch-stone of a common bridge is wedge-shaped, having two flat faces AacC, BbdD, inclined to suit the breadth of the course, but in the skewed bridge the corresponding faces are twisted, Cc not being parallel to Aa, and thus the dressing of them requires both skill and care. The dimensions of the stone and the inclinations of its four edges may easily be computed when its intended position is known, and thus the degree of twist on each of its faces may be ascertained, and the lines may then be marked off on the ends of the stone. The theory of the skewed arch was given for the first time in the Transactions of the Royal Scottish Society of Arts for 1833; from which it was copied into the Civil Engineer and Architect's Journal for July 1840, which see. (For the history and various forms of the arch see Aneni-
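The tabulation described here can be restated in modern terms. The sketch below is my own illustration, with r = 1 and a skew of s = 30° chosen arbitrarily (they are not the article's values): it inverts v sin s = r × Nap. log tan (45° + i/2) to give the joint inclination i for equidifferent values of v, which is exactly what the article's table of tractory co-ordinates supplies.

```python
import math

r = 1.0                      # radius of curvature of the circular arch (illustrative)
s = math.radians(30.0)       # angle of the skew (illustrative)

def inclination(v):
    """Joint inclination i, in degrees, at distance v along the crown line.
    Solves v*sin(s) = r*ln(tan(45 deg + i/2)) for i, i.e.
    i = 2*atan(exp(v*sin(s)/r)) - 90 deg."""
    x = v * math.sin(s) / r
    return math.degrees(2.0 * math.atan(math.exp(x)) - math.pi / 2.0)

# Equidifferent values of v, as in the article's table.
table = [(v / 10.0, inclination(v / 10.0)) for v in range(0, 11)]
for v, i in table:
    print(f"v = {v:4.1f}   i = {i:7.3f} deg")

assert abs(inclination(0.0)) < 1e-12               # at the crown the joint is vertical
assert inclination(1.0) > inclination(0.5) > 0.0   # inclination grows along the crown
```

The inverse relation is the Gudermannian function, which also explains why the inclination approaches 90° only in the limit: the tractory never quite reaches the abutment of a full half circle.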
graph: average of average degree of neighbours, is at least the average degree
January 21st 2013, 12:51 AM   #1
Thanks for letting me become a member of this forum. I would like to ask my very first question here.

Let $G$ be a simple graph (so no loops, no multiple edges, and no edge weights). For every vertex $v_i$, let $d_i$ denote its degree (number of neighbours), and let $w_i$ denote the average of the degrees of the neighbours of $v_i$. Now one should prove that the average $D$ of the $d_i$ is at most the average $W$ of the $w_i$, with equality if and only if the graph is regular (all $d_i$ are equal).

At first, I thought that this was simply related to the friendship paradox, where one considers $\frac{\sum d_i}{n}\leq X=\frac{ \sum d_i^2}{\sum d_i},$ which follows essentially from Cauchy-Schwarz. However, I noticed that the above is not quite the same. For instance, if you take a star graph on 4 vertices, then the average degree $D$ is 1.5, while the number $W$ is 2.5. The number $X$ from the above is 2, though. I found a discussion here, where a graph on four vertices is considered, with $D= 2, W=2.4167$ but $X=2.25$:
Why your friends have more friends than you: the friendship paradox - Mind Your Decisions

Whenever I tried to prove that $D \leq W$, I started struggling with a $\frac{1}{d_i}$ somewhere that I can't get rid of. I'm probably missing something rather trivial, but for now I can't see it. Any ideas? Many thanks!
Last edited by evilbu; January 21st 2013 at 01:05 AM.
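The star-graph numbers quoted in the question are easy to check by brute force. A short sketch of my own (plain Python, with vertex 0 as the centre of the star $K_{1,3}$) computes $D$, $W$ and $X$ straight from the definitions:

```python
# Star graph on 4 vertices: centre 0 joined to leaves 1, 2, 3.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
n = len(adj)

d = {v: len(nbrs) for v, nbrs in adj.items()}            # degrees d_i
w = {v: sum(d[u] for u in adj[v]) / d[v] for v in adj}   # average degree of neighbours w_i

D = sum(d.values()) / n                               # average degree
W = sum(w.values()) / n                               # average of the w_i
X = sum(x * x for x in d.values()) / sum(d.values())  # friendship-paradox quantity

assert D == 1.5 and W == 2.5 and X == 2.0   # the values quoted in the post
assert D <= X <= W                          # D <= W, with X in between in this example
```

This also illustrates why the question is not the friendship paradox itself: here $X$ sits strictly between $D$ and $W$, so the Cauchy-Schwarz bound $D \leq X$ alone does not settle $D \leq W$.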
MathGroup Archive: July 2004

Re: sorting polynomials by the degree of certain terms
• To: mathgroup at smc.vnet.net
• Subject: [mg49220] Re: [mg49202] sorting polynomials by the degree of certain terms
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Thu, 8 Jul 2004 02:50:54 -0400 (EDT)
• References: <200407070542.BAA25018@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

On 7 Jul 2004, at 14:42, David Hessing wrote:
> Hi,
> I'm dealing with huge polynomials (around 50,000 terms). Each term may
> contain some real coefficient, followed by any combination of 11
> variables, each of which may be raised to some power. However,
> I'm interested in sorting the polynomial according to the sum of the
> degrees of only 2 of the variables. In other words, in the sorted
> expression, the first terms would be those where the two variables do
> not appear (the sum of their powers is zero). The next terms would be
> the terms where one of the two variables appeared raised to the power
> 1. The next terms would be the terms where both variables appeared
> with power 1, or just one of the variables appeared with power 2. And
> so on.
> I've played with passing the expression to the Sort function, along
> with a defined sorting function, but I can't get it to work. Any help
> would be greatly appreciated.
> -David

Presumably what you have in mind is ordering monomials? You could define a monomial order function with respect to variables vars as follows:

MyOrderQ[f_, g_, vars_] := With[{a = Plus @@ Take[Exponent[f, vars], 2], b = Plus @@ Take[Exponent[g, vars], 2]}, Which[a < b, True, a == b, OrderedQ[{f, g}], True, False]]

What this does is compare the sums of the exponents of the first two variables in two monomials. The one with the smaller sum is taken to be smaller. If the sums are equal then the canonical ordering is used.
So, with three variables x, y, z:

MyOrderQ[z^5, x y z, {x, y, z}]
True

This is because the sum of the exponents of x and y in z^5 is 0. In this case the sums are the same so the canonical ordering is used (x comes before y).

And here is how you sort several monomials:

Sort[{x*y, x^2*y^2*z, x^4*z, z^3}, MyOrderQ[#1, #2, {x, y, z}] & ]
{z^3, x*y, x^4*z, x^2*y^2*z}

Here there was again a "tie" for the last place when sums of powers were compared, so the canonical ordering was used, with x^4 coming before x^2 y^2. Of course one could easily reverse this. (I hope I have not misunderstood you!)

Also, note that you can't just "sort" a polynomial, in the sense of arranging the order of its terms in your own order, because Mathematica will rearrange it into canonical order again. So what you need to do is to first convert it to a list and then sort. For example, to sort the polynomial

f = x*y + x^4*z + x^2*y^2*z + z^3;

you have to do something like:

Sort[List @@ f, MyOrderQ[#1, #2, {x, y, z}] & ]
{z^3, x*y, x^4*z, x^2*y^2*z}

But if you now apply Plus, Mathematica will at once re-arrange it in canonical order:

Plus @@ %
x*y + x^4*z + x^2*y^2*z + z^3

Andrzej Kozlowski
Chiba, Japan
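The same ordering idea carries over to any language once monomials are represented by exponent vectors. A rough Python analogue of my own (not from the thread) sorts exponent tuples for the variables (x, y, z) by the sum of the x- and y-exponents, breaking ties with plain tuple order; note the tuple tie-break happens to order the tied monomials differently from Mathematica's canonical order.

```python
# Monomials over (x, y, z) as exponent tuples, e.g. x^4*z -> (4, 0, 1).
monomials = [(1, 1, 0),   # x*y
             (2, 2, 1),   # x^2*y^2*z
             (4, 0, 1),   # x^4*z
             (0, 0, 3)]   # z^3

# Primary key: sum of the exponents of the first two variables (x and y);
# secondary key: the exponent tuple itself, as a deterministic tie-break.
ordered = sorted(monomials, key=lambda e: (e[0] + e[1], e))

assert ordered == [(0, 0, 3), (1, 1, 0), (2, 2, 1), (4, 0, 1)]
```

Because Python's `sorted` is O(n log n) and the key is computed once per monomial, this scales comfortably to the 50,000-term polynomials mentioned in the question.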
Runnemede Algebra 2 Tutor Find a Runnemede Algebra 2 Tutor ...To this end I have helped a summer camp in Philadelphia by providing curriculum material for its math and language arts program. I received my bachelors degree in Biochemistry and Molecular Biology, and genetics is a program requirement. I completed the course with an A- and have done laboratory work that applies molecular genetics. 12 Subjects: including algebra 2, chemistry, geometry, biology Hi, I have had experience as a tutor for the past two years in the basic sciences at Rutgers University, where I majored in Cellular Biology and Neuroscience. I have helped students in a variety of classes, of which range between: Chemistry, Organic Chemistry, Biology, Algebra and Geometry. I am currently attending UMDNJ-SOM's medical program. 19 Subjects: including algebra 2, chemistry, calculus, geometry ...I also have been a professional artist, and am always happy to assist in art training and practice. But life is not just the classroom; I take an active interest in soccer, basketball, baseball, Frisbee, surfing, and hiking. If you would like to work on those outdoors/athletic skills, I can happily assist you. 23 Subjects: including algebra 2, reading, English, writing ...This includes, but is not limited to, set theory, proofs (such as in geometry) and model theory. Much of the SAT test includes testing the students' reasoning and logic skills. Over the past 7 years, I have tutored many hundreds of students in developing these skills. 22 Subjects: including algebra 2, statistics, ASVAB, geometry ...I’ve had the pleasure of working at Germantown Friends, Vaux, Strawberry Mansion, and Roxborough, in addition to the many individuals that I have tutored as an independent contractor. I started my professional life as an engineer, but even as I was studying mechanical engineering at the Universi... 23 Subjects: including algebra 2, reading, Series 7, Praxis
A New Proof of Central Limit Theorem for i.i.d. Random Variables

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 294910, 5 pages
Research Article: A New Proof of Central Limit Theorem for i.i.d. Random Variables
School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong 273165, China
Received 13 November 2013; Accepted 16 December 2013
Academic Editor: Xinguang Zhang
Copyright © 2013 Zhaojun Zong and Feng Hu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Central limit theorem (CLT) has long and widely been known as a fundamental result in probability theory. In this note, we give a new proof of CLT for independent identically distributed (i.i.d.) random variables. Our main tool is the viscosity solution theory of partial differential equation (PDE).

1. Introduction

Central limit theorem (CLT) has long and widely been known as a fundamental result in probability theory. The most familiar method to prove CLT is to use characteristic functions. To a mathematician already familiar with Fourier analysis, the characteristic function is a natural tool, but to a student of probability or statistics, confronting a proof of CLT for the first time, it may appear as an ingenious but artificial device. Thus, although knowledge of characteristic functions remains indispensable for the study of general limit theorems, there may be some interest in an alternative way of attacking the basic normal approximation theorem. Indeed, due to the importance of CLT, there exist numerous proofs of CLT, such as Stein's method and Lindeberg's method. Let us mention the contribution of Lindeberg [1], which used Taylor expansions and careful estimates to prove CLT.
For more details of the history of CLT and its proofs, we can see Lindeberg [1], Feller [2, 3], Adams [4], Billingsley [5], Dalang [6], Dudley [7], Nourdin and Peccati [8], Ho and Chen [9], and so on. Recently, motivated by model uncertainties in statistics, finance, and economics, Peng [10, 11] initiated the notion of independent identically distributed random variables and the definition of -normal distribution. He further obtained a new CLT under sublinear expectations. In this note, inspired by the proof of Peng’s CLT, we give a new proof of the classical CLT for independent identically distributed (i.i.d.) random variables. Our proof is short and simple since we borrow the viscosity solution theory of partial differential equation (PDE). 2. Preliminaries In this section, we introduce some basic notations, notions, and propositions that are useful in this paper. Let denote the class of bounded functions satisfying for some depending on ; let denote the class of continuous functions ; let denote the class of bounded and-time continuously differentiable functions with bounded derivatives of all orders less than or equal to on and-time continuously differentiable functions with bounded derivatives of all orders less than or equal to on . Let be a random variable with distribution function , so that, for any , If is any function in , the mathematical expectation of exists and Our proof is based on the following classical results for i.i.d. random variables and normally distributed random variables with zero means. Proposition 1. Suppose is a sequence of i.i.d. random variables. Then(i)for each , if , then , (ii); for each , if , then where . Proposition 2. Suppose is a normally distributed random variable with and , denoted by . Then if and is independent of , we have, for each , We will show that a normally distributed random variable with and is characterized by the following PDE defined on : with Cauchy condition . Equation (7) is called the heat equation. 
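The heat-equation characterization invoked here can be illustrated concretely. The sketch below is my own illustration, not the paper's: the test function φ(y) = y², the value σ = 1.3 and the grid step are arbitrary choices. For Z ~ N(0, σ²), the function u(t, x) = E[φ(x + √t Z)] has the closed form x² + σ²t, and centred finite differences confirm that it satisfies the heat equation u_t = (σ²/2) u_xx.

```python
sigma = 1.3          # standard deviation of Z ~ N(0, sigma^2)  (illustrative)

def u(t, x):
    """u(t, x) = E[phi(x + sqrt(t) Z)] for phi(y) = y^2 and Z ~ N(0, sigma^2).
    Closed form: E[(x + sqrt(t) Z)^2] = x^2 + sigma^2 * t."""
    return x * x + sigma * sigma * t

t0, x0, h = 0.7, 0.4, 1e-4

# Centred finite differences for u_t and u_xx at (t0, x0).
u_t  = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
u_xx = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / (h * h)

# The heat equation: u_t = (sigma^2 / 2) * u_xx.
assert abs(u_t - 0.5 * sigma * sigma * u_xx) < 1e-6
```

For this quadratic test function the finite differences are exact up to rounding, which makes the check deterministic; a smooth general φ would satisfy the same identity with an O(h²) discretization error.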
Definition 3. A real-valued continuous function is called a viscosity subsolution (resp., supersolution) for (7), if for each function and for each minimum (resp., maximum) point of , we have is called a viscosity solution for (7) if it is both a viscosity subsolution and a viscosity supersolution. Remark 4. For more basic definitions, results, and related literature on viscosity solutions of PDEs, the readers can refer to Crandall et al. [12]. Lemma 5. Letbe an distributed random variable. For each , we define a function Then we have We also have the estimates: for each , there exists a constant such that, for all and , , Moreover, is the unique viscosity solution, continuous in the sense of (11) and (12), of (7) with Cauchy condition . Proof. Since we then have (11). Letbe independent of such that . By Propositions 1 and 2, we have It follows from this and (11) that which implies (12). Now, for a fixed point , let satisfy and . By (10), we have, for , where is a positive constant, and then, we have Hence, is a viscosity subsolution for (7). Similarly, we can prove that is a viscosity supersolution for (7). The proof of Lemma 5 is completed. 3. A New Proof of CLT for i.i.d. Random Variables Theorem 6. Let be a sequence of i.i.d. random variables. We further assume that Denote . Then In order to prove Theorem 6, we need the following lemma. Lemma 7. Under the assumptions of Theorem 6, we have for any , where is . Proof. The main approach of the following proof derives from Peng [10]. For a small but fixed , let be the unique viscosity solution of By Lemma 5, Since (21) is a uniformly parabolic PDE, thus by the interior regularity of (see Wang [13]), we have We set and . 
Then By Taylor's expansion, We now prove that Indeed, for the 3rd term of , by Proposition 1, For the second term of , by Proposition 1, we have Thus combining the above two equalities with we have Thus, (27) can be rewritten as But since both and are uniformly -hölder continuous in and -hölder continuous in on , we then have Thus where is a positive constant. As , we have On the other hand, for each , and , Thus and by (23) It follows from (23), (36), (38), and (39) that Since can be arbitrarily small, we have

Proof of Theorem 6. For notational simplification, write Let be any positive number, and take small enough such that . Construct two functions , such that Then and for each , Obviously, and . By Lemma 7, we have So Hence Since this is true for every , we have

Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.

The authors would like to thank the editor and the anonymous referees for their careful reading of this paper, correction of errors, and valuable suggestions. The authors acknowledge the partial support of the National Natural Science Foundation of China (Grant nos. 11301295 and 11171179), the Doctoral Program Foundation of Ministry of Education of China (Grant nos. 20123705120005 and 20133705110002), the Postdoctoral Science Foundation of China (Grant no. 2012M521301), the Natural Science Foundation of Shandong Province of China (Grant nos. ZR2012AQ009 and ZR2013AQ021), and the Program for Scientific Research Innovation Team in Colleges and Universities of Shandong Province of China.

References
1. J. Lindeberg, "Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung," Mathematische Zeitschrift, vol. 15, no. 1, pp. 211–225, 1922.
2. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, John Wiley & Sons, New York, NY, USA, 2nd edition, 1971.
3. W.
Feller, "The fundamental limit theorems in probability," Bulletin of the American Mathematical Society, vol. 51, no. 11, pp. 800–832, 1945.
4. W. J. Adams, The Life and Times of the Central Limit Theorem, vol. 35 of History of Mathematics, Kaedmon Publishing, New York, NY, USA, 2nd edition, 2009.
5. P. Billingsley, Probability and Measure, John Wiley & Sons, New York, NY, USA, 3rd edition, 1995.
6. R. C. Dalang, "Une démonstration élémentaire du théorème central limite," Elemente der Mathematik, vol. 60, no. 1, pp. 1–9, 2005.
7. R. M. Dudley, Real Analysis and Probability, Cambridge University Press, New York, NY, USA, 2nd edition, 2002.
8. I. Nourdin and G. Peccati, Normal Approximations with Malliavin Calculus: From Stein's Method to Universality, vol. 192, Cambridge University Press, New York, NY, USA, 2012.
9. S. T. Ho and L. H. Y. Chen, "An ${L}^{p}$ bound for the remainder in a combinatorial central limit theorem," Annals of Probability, vol. 6, no. 2, pp. 231–249, 1978.
10. S. G. Peng, "Law of large numbers and central limit theorem under nonlinear expectations," http://arxiv.org/abs/math/0702358.
11. S. G. Peng, "A new central limit theorem under sublinear expectations," http://arxiv.org/abs/0803.2656.
12. M. G. Crandall, H. Ishii, and P. L. Lions, "User's guide to viscosity solutions of second order partial differential equations," Bulletin of the American Mathematical Society, vol. 27, no. 1, pp. 1–67, 1992.
13. L. H. Wang, "On the regularity theory of fully nonlinear parabolic equations: II," Communications on Pure and Applied Mathematics, vol. 45, no. 2, pp. 141–178, 1992.
Sylow theorems for infinite groups

Are there classes of infinite groups that admit Sylow subgroups and where the Sylow theorems are valid? More precisely, I'm looking for classes of groups $\mathcal{C}$ with the following properties:
• $\mathcal{C}$ includes the finite groups
• in $\mathcal{C}$ there is a notion of Sylow subgroups that coincides with the usual one when restricted to finite groups
• Sylow's theorems (or part of them) are valid in $\mathcal{C}$
An example of such a class $\mathcal{C}$ is given by the class of profinite groups.

Silly example: $\mathcal C$=profinite groups. Sylow subgroups = maximal closed pro-$p$ subgroups. One reason this is silly is that topological groups are not really a type of group. – Will Sawin Oct 23 '12 at 5:12
@Will: the question mentions profinite groups at the end already. – KConrad Oct 23 '12 at 5:53

6 Answers

You may also read Chapter 13 of Kurosh's book. For instance, it contains a proof of Baer's theorem (cited by @Igor) which says that all p-Sylow subgroups of a locally normal group are isomorphic. Locally normal means periodic with finite conjugacy classes.
Concerning sources, the long 1966 paper appears in an English translation (by the group theorist Kurt Hirsch) in volume 69 of the AMS Translations (Series 2), 1969; but this doesn't seem to be accessible online. There is a Google Scholar entry containing a full text PDF version of the Russian original here. add comment Well, the wikipedia gives an example of a sylow theorem, and there is more on this in these notes by Igusa. There is also the following paper of Baer: Sylow theorems for infinite groups up vote 3 down vote Reinhold Baer Source: Duke Math. J. Volume 6, Number 3 (1940), 598-614. add comment Amalgams of finite groups provide another example. Let $A$ and $B$ be finite groups and let $C = A \cap B.$ Suppose that $P$ is a Sylow $p$-subgroup of $A$, and that $C$ contains a Sylow $p$-subgroup of $B.$ Then the amalgam $A*_{C}B$ has a unique conjugacy class of maximal finite $p$-subgroups, but is an infinite group as long as $C$ is proper in both $A$ and $B.$. In fact, up vote the process an then iterated to the case where $A$ and $B$ may themselves be amalgams of finite groups of this type, and so on. For general results on amalgams, see J-P. Serre's book 3 down "Trees". For applications of this type of construction to fusion systems on finite $p$-groups, see two recent papers of mine in Journal of Algebra and Transactions of the AMS. add comment The best reference for this subject is the book of martyn Dixon: Locally finite groups and Sylow theory. up vote 3 down vote add comment In groups of finite Morley rank there is a Sylow theory for the prime $p=2$. up vote 2 down vote add comment Not the answer you're looking for? Browse other questions tagged gr.group-theory or ask your own question.
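For orientation, any such class $\mathcal{C}$ must reproduce the finite Sylow theorems, in particular the counting theorem: the number $n_p$ of Sylow $p$-subgroups satisfies $n_p \equiv 1 \pmod p$. A minimal brute-force check of this on $S_3$ (where every Sylow subgroup has prime order, so the Sylow $p$-subgroups are exactly the cyclic subgroups of order $p$); the permutation encoding and counting trick are illustrative choices, not from the thread:

```python
from itertools import permutations

def compose(a, b):
    """Compose two permutations given as tuples: (a o b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(b)))

def elem_order(g, e):
    """Multiplicative order of the permutation g (e is the identity)."""
    k, x = 1, g
    while x != e:
        x, k = compose(g, x), k + 1
    return k

G = list(permutations(range(3)))   # S3, |G| = 6 = 2 * 3
e = tuple(range(3))

n = {}
for p in (2, 3):
    # Each cyclic subgroup of prime order p contains exactly p - 1
    # generators, so count elements of order p and divide.
    gens = [g for g in G if elem_order(g, e) == p]
    n[p] = len(gens) // (p - 1)
    print(f"n_{p} = {n[p]} == {n[p] % p} (mod {p})")
# n_2 = 3 and n_3 = 1, both congruent to 1 mod p, as Sylow predicts.
```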
geometry problem (angle of body diagonal of a cube)

Refer to the following image. Consider the angle of the yellow theta on the top left; this is 45°. If we fix one end of both red lines at the blue circles and slide the other end along the green side of the cube (i.e., think of the green lines as rails for the red lines to slide along), then this extends the lateral length of the lines while the length in the z-direction (up and down) remains constant. Shouldn't this then DECREASE the angle specified in the picture, not increase it?

I'm asking because I've been asked to solve for this angle, and θ < 45° is not what I got. I got the angle to be about 70°, which is not intuitive. I am asking for intuition on this problem, as opposed to an involved analytical method of solving for that angle; I've already done it analytically, I just have no idea why that angle increases rather than decreases. Thanks all.
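Without the image the exact configuration is a guess, but the reported 70° matches the angle between two space (body) diagonals of a cube, arccos(1/3) ≈ 70.53°. Intuitively, sliding the endpoints tilts the two lines apart in different horizontal directions, which can open the angle between them even while each line leans further from the vertical. A quick vector check on a unit cube (the coordinates below are my own choice, not from the post):

```python
import math

def angle_deg(u, v):
    """Angle between vectors u and v, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(c * c for c in w))
    return math.degrees(math.acos(dot / (norm(u) * norm(v))))

# Direction vectors of two body diagonals of the unit cube:
# (0,0,0) -> (1,1,1) and (1,0,0) -> (0,1,1).
d1 = (1, 1, 1)
d2 = (-1, 1, 1)
print(angle_deg(d1, d2))                 # 70.528... = arccos(1/3)

# For comparison, a face diagonal against a vertical edge gives
# the 45-degree starting angle described in the post:
print(angle_deg((1, 0, 1), (0, 0, 1)))   # 45.0
```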