content
stringlengths
86
994k
meta
stringlengths
288
619
A Spreadsheet Model for Viral Growth I worked with two companies in the past month that needed a way to model viral adoption as part of their effort to understand the impact of a strategic decision to deliver part or all of their technology under an opensource license. Having such model enables them to understand the critical variables and success-factors, set targets for them and then measure as they embark on their I set out to make a generic model that can at once be used at both companies. Hoping it may be of use to others as well, I will publish it here. I had much benefit from articles published by David Skok and Andrew Chen and try to use their terms and insights as much as possible. The basic assumption for viral growth is that every new user of a product becomes an ambassador for that product and will convert others to start using that product as well. There are two variables that allow you to create a model for viral growth: 1. Viral Coefficient: the total number of new users generated by an existing user 2. Viral Cycle Time: the amount of time it takes before all these new users have been generated Viral growth can happen in many ways, from word-of-mouth, to active solicitation and anything in between. This model makes no assumptions about that, it just measures the results. Here is an example in which the unit of time is set in weeks, the starting number of users is 1000, the viral coefficient is .3 and the cycle time is 3 weeks: The model reflects that the viral impact starts in the first period (week) after a user has started using a product and lasts throughout the duration of the cycle. If new users join in the first week, they start working their viral magic in the week after that. So, the 1,000 users in week zero will deliver 300 new users (coefficient is .3) over 3 weeks (cycle time is 3), or 100 new users in week 1, 2 and 3. The 100 new users in week 1 deliver 0.3*100=30 new users, 10 in the weeks 2, 3 and 4. The 110 new users in week 2 deliver 33 more users, and so forth. As the coefficient is less than 1 in this case, the number of new users per week trends to zero over time. If the coefficient is 1 or larger, the viral engine will keep on giving. The pattern for the number of new users per week (in C15:L15) is that each number is equal to the sum of the numbers of the previous three weeks, multiplied with the viral coefficient and then divided by the viral cycle time. In the next image you can see how that pattern is turned into a spreadsheet formula: The formula in cell H24 is: This formula defines the range of values that can be summarized as *all* previous weeks ($B24:G24), but only includes the numbers in the weeks for which the week number is 3 or larger (columns E, F and G). You can download the Excel spreadsheet with this model (see above). I have also published the model as a Google Spreadsheet. Take a look and play with the values for Viral Coefficient and Viral Cycle Time, to see how they impact the growth of your user base. To inspect the formulas, select a cell and then hover the mouse over the formula area on the right bottom of the browser window. In the next entries I will add further metrics to this model that have an impact on viral distribution. 3 thoughts on “A Spreadsheet Model for Viral Growth” 1. This is awesome. I deal a lot with social media and try to explain the benefits of viral communication to different Silicon Valley companies. I would love to throw a chart like this in there. There's so much potential with viral adoption. It's exciting. 2. Hi Craig, Happy that this is useful to you. Feel free to use these charts, that is why they are here. 3. Cool Mark, Since Jurvetson defined this formula way back it’s been my go-to formula for calculating viral growth. When I build a viral growth product, I code my product to measure the two key factors you mention (fan out, and propagation delay), and then plug those values into the spreadsheet to see when my tipping point is going to happen. Fascinating to see the effect that small changes in these two factors make. Often these two factors can be improved by making the right feature changes. With real user measurement in place, I can validate whether or not my feature changes improve these factors, and then recalculate when my tipping point will happen. Then you can explode in 6 months as opposed to waiting 18 months.
{"url":"http://markdevisser.com/?p=268","timestamp":"2014-04-19T01:48:08Z","content_type":null,"content_length":"23219","record_id":"<urn:uuid:ae9f2508-7888-476d-aa8c-8693f10d3aea>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Consequences of the Axiom of Choice Project Consequences of the Axiom of Choice Project Homepage It is with deep regret that we announce the death of Jean Rubin on October 25, 2002. The book Consequences of the Axiom of Choice by Paul Howard ( Send E-Mail to Paul Howard ) and Jean E. Rubin is volume 59 in the series Mathematical Surveys and Monographs published by the American Mathematical Society in 1998. This book is a survey of research done during the last 100 years on the axiom of choice and its consequences. (Connect to The AMS Bookstore for ordering information.) The Consequences of the Axiom of Choice Project is a continuation of the research that produced the book. We would appreciate learning of any corrections or additions that should be made to the project. (phoward@emunix.emich.edu) On this page you will find: • Changes and additions to the data base that have occurred since publication of the book • A TeX version of the implication table, Table 1 which may be downloaded and printed. (Hold down the shift key and click on the file name to download.) • A TeX version of the auxillary table, Table 2 which may be downloaded and printed (Hold down the shift key and click on the file name to download.) • A view of the implication table, Table I, restricted to any subset of 30 or fewer forms of the axiom of choice. (Fill out the form below.) • A listing of all models known to satisfy any specified set of conditions. (Complete the form below.) To see a list of all models with specified characteristics: 1. Enter the numbers of the forms that are to be true in the model separated by spaces or commas below. (Ten forms maximum) The list of forms to be TRUE in the model(s) 2. Enter the numbers of the forms that are to be false in the model separated by spaces or commas below. (Ten forms maximum) The list of forms to be FALSE in the model(s) • A .pdf file giving the text of any form together with all of its equivalents. • A list of forms containing a single word or phrase Number of Visitors
{"url":"http://consequences.emich.edu/CONSEQ.HTM","timestamp":"2014-04-20T23:27:08Z","content_type":null,"content_length":"4378","record_id":"<urn:uuid:bd8d7377-9c05-4d7b-b6f7-cf5ca4e90e76>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Positions and Midpoints in Two Dimensions 5.2: Positions and Midpoints in Two Dimensions Created by: CK-12 A digital graphic artist is designing a new company logo. Currently she is sketching a sample on graph paper to show her client, and needs to find the center of the text she has drawn so she can line up the artwork properly. If she knows that the first letter starts 10 boxes up and 5 boxes over from the bottom-left corner of the page, and the last letter ends 12 boxes up and 32 boxes over from the same corner, how could she find the center? Watch This Embedded Video: - James Sousa: Ex: <Midpoint of a Segment Using the Coordinate System Efficiently In the past, as you used the coordinate system for graphing in Algebra and Geometry, you probably became very familiar with the x / y axes running left/right and up/down on the page, with 0 in the center. In more advanced mathematics, and in physics or other motion studies, you will find that it is often much simpler to move the graph to line up with one vector than to align all of the vectors you are calculating with the standard orientation. By aligning one of multiple vectors with either the x or y axis, and/or setting the origin of your graph at the start of a vector, you minimize the complexity of your calculations. Vectors Between Two Points A displacement vector represents motion beginning at one point and ending at another. In the diagram below, vector C starts at point A and ends at point B. This means that $\overrightarrow{A} + \ overrightarrow{C} = \overrightarrow{B}$$\overrightarrow{C} = \overrightarrow{B} - \overrightarrow{A}$$\overrightarrow{A} = \left \langle 1, 3 \right \rangle$$\overrightarrow{B} = \left \langle 3, 2 \ right \rangle$$\overrightarrow{C} = \left \langle (3 - 1), (2 - 3) \right \rangle = \left \langle 2, - 1 \right \rangle$ Vector to a Point Between Two Points Computer graphics artists frequently need to know the location of a point which lies midway between two other points. Once we know the position vectors for two discrete points, we can determine the midpoint between them using their coordinates. Specifically, the midpoint between points A and B is the “average” of the two positions, therefore the coordinates of the midpoint are given by $x_{mp} = \frac{1}{2} (x_A + x_B), \ y_{mp} = \frac{1}{2} (y_A + y_B)$$z_{mp} = \frac{1}{2} (z_A + z_B)$$P_{mp} = \left \langle x_{mp}, y_{mp}, z_{mp} \right \rangle$ Example A The motion of an object along an inclined plane is a very common problem in introductory physics. The diagram below shows one such situation. Stickman Beauford has taken his niece Brynna to the park and waves to her as she plays on the slide. Choose two coordinate systems that could be used to describe Brynna’s motion and identify the position vectors for points A and B in both coordinate systems. If we want to describe Brynna’s motion as she moves from point A to point B along the slide, we could use a standard horizontal and vertical coordinate system with the origin at the base of the slide’s ladder, but then the vector describing her motion would have components in both the x and y directions. Our mathematical description of her motion can be greatly simplified if we choose point A to be the origin and if we rotate the coordinate system such that the x-axis is parallel to the slide and the y-axis is perpendicular to the slide. Now Brynna’s motion from point A to point B is only along the x-axis. Note, other choices of origin are possible. Once we have identified an origin and coordinate axes for of our reference frames, we can use vector notation to identify the location of points A and B. The position vector for point A is the vector starting at the origin and ending at point A, $\overrightarrow{OA}$$\overrightarrow{OA}$$\overrightarrow{OB}$$\overrightarrow{OA} = 0$$\overrightarrow{OB} = \overrightarrow{AB}$ Example B Determine the coordinates and magnitude of the vector, D, beginning at the point $\overrightarrow{P_1} = \left \langle 12, 7 \right \rangle$$\overrightarrow{P_2} = \left \langle 8, 10 \right \rangle$ The displacement vector D is the difference between the two position vectors:. $D = \left \langle P_{2x} - P_{1x}, P_{2y} - P_{1y} \right \rangle = \left \langle 8 - 12, 10 - 7 \right \rangle = \left \langle -4, 3 \right \rangle$ The magnitude of the vector, D, can be found using the Pythagorean Theorem: $|\overrightarrow{D}| = \sqrt{(-4)^2 + (3)^2} = \sqrt{25} = 5$ Example C Determine the position vector identifying the midpoint between points $\overrightarrow{P_1} = \left \langle 12, 7 \right \rangle$$\overrightarrow{P_2} = \left \langle 8, 10 \right \rangle$ Since these two points are located in the x-y plane. The x-coordinate of the midpoint is given by $x_{mp} = \frac{1}{2} (x_A + x_B) = \frac{1}{2}(12 + 8) = 10$ and the y-coordinate of the midpoint is given by $y_{mp} = \frac{1}{2} (y_A + y_B) = \frac{1}{2}(7 + 10) = 8.5$ Therefore the position vector for this midpoint can be written as $P_{mp} = \left \langle 10,8.5 \right \rangle$ │ Concept question wrap-up │ │ │ │ "If she knows that the first letter starts 10 boxes up and 5 boxes over from the bottom-left corner of the page, and the last letter ends 12 boxes up and 32 boxes over from the same corner, how │ │ could she find the center?" │ │ │ │ This is a midpoint question, so the x coordinate calculation is: $x_{mp} = \frac{1}{2} (x_A + x_B) = \frac{1}{2}(5 + 32) = 18.5$ │ │ │ │ and the y-coordinate of the midpoint is given by: $y_{mp} = \frac{1}{2} (y_A + y_B) = \frac{1}{2}(10 + 12) = 11$ │ │ │ │ The coordinate of the center is: $18.5, 11$ │ Displacement vectors model the movement between one point and another on a coordinate plane. The midpoint of two vectors is the location in the center of their endpoints. A position vector describes the straight-line travel between a starting point (usually the origin) and the location of a 2nd point on a coordinate plane. Guided Practice 1) Identify the position vectors for the three points shown on the grid below. 2) The diagram shows two positions of a bicycle as it moves along a long straight road. Two possible coordinate systems for the motion are shown below. Determine the position vectors in each of the two coordinate systems for the bicycle at points A and B. Then determine the displacement vector from A to B in each case. (Not drawn to scale.) 3) Identify the position vectors for the three points shown in the diagram below. 1) The position vectors begin at the origin, (0, 0) and end at each point: $\overrightarrow{OA} = \left \langle -3, 1 \right \rangle, \ \overrightarrow{OB} = \left \langle 1, 2 \right \rangle$$\overrightarrow{OC} = \left \langle 2.5, 0 \right \rangle$ 2) The diagram shows two positions of a bicycle as it moves along a long straight road. Two possible coordinate systems for the motion are shown below. Determine the position vectors in each of the two coordinate systems for the bicycle at points A and B. Then determine the displacement vector from A to B in each case. For the upper coordinate system, the position vector of the bicycle at point A is given by $\overrightarrow{r_A} = \left \langle -300m, 0, 0 \right \rangle$$\overrightarrow{r_B} = \left \langle 100m, 0, 0 \right \rangle$$\overrightarrow{\triangle r_{A-B}} = \left \langle (100m - (-300m)), (0 - 0), (0 - 0)\right \rangle = \left \langle 400m, 0, 0 \right \rangle .$ For the upper coordinate system, the position vector of the bicycle at point A is given by $\overrightarrow{r_A} = \left \langle 100m, 0, 0 \right \rangle$$\overrightarrow{r_B} = \left \langle 500m, 0, 0 \right \rangle$$\overrightarrow{\triangle r_{A-B}} = \left \langle (500m - 100m), (0 - 0), (0 - 0)\right \rangle = \left \langle 400m, 0, 0 \right \rangle .$ The position vectors for the bicycle at point A are shown in red and the position vectors for point B are shown in blue. The displacement vector between points A and B is shown in gold. As you can see, the position vectors representing this motion depend on the choice of coordinate system, but the displacement vector is independent of the coordinate system. No matter how we define the origin, the bike moves 400 m in the +x direction and does not move in the y or z direction. 3) Identify the position vectors for the three points shown in the diagram below. $\overrightarrow{r_A} = \left \langle -2.63, 2.63, 0 \right \rangle, \ \overrightarrow{r_B} = \left \langle 3, 1.75, 0 \right \rangle, \ \overrightarrow{r_C} = \left \langle 0.25,1,0 \right \ 1. What is a displacement vector used for? Determine the coordinates and magnitude of the displacement vector, D, beginning at the point $\overrightarrow{P_1}$$\overrightarrow{P_2}$ 2. $\overrightarrow{P_1} = \left \langle 25, 3 \right \rangle$$\overrightarrow{P_2} = \left \langle 8, 11 \right \rangle$ 3. $\overrightarrow{P_1} = \left \langle 5, 3 \right \rangle$$\overrightarrow{P_2} = \left \langle 7, 9 \right \rangle$ 4. $\overrightarrow{P_1} = \left \langle 21, 18 \right \rangle$$\overrightarrow{P_2} = \left \langle 4, 15 \right \rangle$ 5. $\overrightarrow{P_1} = \left \langle 8, 5 \right \rangle$$\overrightarrow{P_2} = \left \langle 5, 8 \right \rangle$ 6. $\overrightarrow{P_1} = \left \langle 16, 25 \right \rangle$$\overrightarrow{P_2} = \left \langle 9, 11 \right \rangle$ 7. $\overrightarrow{P_1} = \left \langle 14, 3 \right \rangle$$\overrightarrow{P_2} = \left \langle 23, 20 \right \rangle$ 8. $\overrightarrow{P_1} = \left \langle 11, 4 \right \rangle$$\overrightarrow{P_2} = \left \langle 15, 11 \right \rangle$ 9. $\overrightarrow{P_1} = \left \langle 23, 13 \right \rangle$$\overrightarrow{P_2} = \left \langle 1, 17 \right \rangle$ Determine the position vector identifying the midpoint between points $\overrightarrow{P_1}$$\overrightarrow{P_2}$ 10. $\overrightarrow{P_1} = \left \langle 17, 6 \right \rangle$$\overrightarrow{P_2} = \left \langle 18, 12 \right \rangle$ 11. $\overrightarrow{P_1} = \left \langle 2, 5 \right \rangle$$\overrightarrow{P_2} = \left \langle 1, 9 \right \rangle$ 12. $\overrightarrow{P_1} = \left \langle 24, 7 \right \rangle$$\overrightarrow{P_2} = \left \langle 21, 10 \right \rangle$ 13. $\overrightarrow{P_1} = \left \langle 12, 9 \right \rangle$$\overrightarrow{P_2} = \left \langle 2, 20 \right \rangle$ 14. $\overrightarrow{P_1} = \left \langle 15, 17 \right \rangle$$\overrightarrow{P_2} = \left \langle 18, 1 \right \rangle$ 15. $\overrightarrow{P_1} = \left \langle 22, 14 \right \rangle$$\overrightarrow{P_2} = \left \langle 23, 8 \right \rangle$ 16. $\overrightarrow{P_1} = \left \langle 1, 7 \right \rangle$$\overrightarrow{P_2} = \left \langle 14, 21 \right \rangle$ 17. $\overrightarrow{P_1} = \left \langle 3, 9 \right \rangle$$\overrightarrow{P_2} = \left \langle 8, 1 \right \rangle$ We need you! At the moment, we do not have exercises for Positions and Midpoints in Two Dimensions. Files can only be attached to the latest version of Modality
{"url":"http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r1/section/5.2/","timestamp":"2014-04-19T03:10:11Z","content_type":null,"content_length":"149367","record_id":"<urn:uuid:eceefd13-c18d-4328-925c-615d65c353e1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
CMS/CSHPM Summer 2005 Meeting Contributed Papers Session Org: Peter Hoffman (Waterloo) Berge defined a graph to be perfect if for every induced subgraph, the minimum number of colours required in a vertex colouring equals the maximum number of vertices in a clique. In 2002, Chudnovsky, Robertson, Seymour and Thomas proved Berge's 40-year-old Strong Perfect Graph Conjecture: a graph is perfect if and only if it contains no odd holes or odd antiholes. A "proof from the book" of this result might be a combinatorial polytime algorithm, which for any graph, finds a clique and colouring the same size, or else finds an odd hole or an odd antihole (or some other easily recognizable combinatorial obstruction to being perfect). In view of precedents, such an algorithm might be simpler than the Chudnovsky-Seymour and Cornuejols-Liu-Vuskovich algorithms for recognizing perfect graphs since it could end up giving a clique and a colouring of the same size in a non-perfect graph. I will report on some recent progress on special classes of graphs in joint work with Jack Edmonds, Elaine Eschen, Chinh Hoang and R. Sritharan. In this work we study a semilinear heat equation in a long cylindrical region for which the far end and the lateral surface are held at zero temperature and a nonzero temperature is applied at the near end. In other words, the specific domain we consider is a finite cylinder W: = D×[0,L], where D is a bounded convex domain in the (x[1],x[2])-plane with smooth boundary ¶D Î C^ 2,e, the generators of the cylinder are parallel to the x[3]-axis and its length is L. The specific problem we consider is the following initial boundary value problem ï x Î ¶W[L] È ¶W[lat], t Î (0,T), (1.1) í (1) where ¶W[0] : = D ×{0}, ¶W[L] : = D×{L}, ¶W[lat] : = ¶D ×(0,L). We also assume that h(x[1],x[2],t) is a prescribed non-negative function with h(x[1],x[2],0)=0 and f is a non-negative function satisfying the following conditions (1.2) \undersets® 0 lim f(s) s exists, f^¢ (s) £ p(s) and f^¢¢ (s) £ q(s) for s ³ 0, (2) where p(s) and q(s) are some non-decreasing function of s. We are interested on the spatial decay bounds for the solution of the semilinear heat equation (1.1) and on its continous dependence with respect to the data at the near end of the cylinder. Since the solution u(x,t) of the problem (1.1) can blow up at some point in space time, our aim is to derive sufficient conditions on the data which will guarantee that the solution remains bounded and moreover, under such conditions, we will obtain some explicit spatial decay bounds for the solution, its cross-sectional derivatives and its temporal derivative. We will also prove that the solution depends continously on the data h(x[1],x[2],t) at the near end of the cylinder. Let C(M) be the vector space of conformal Killing vectors defined on a pseudo-Riemannian manifold M of constant curvature. Consider the action of the isometry group I(M) on C(M). If we employ the method of infinitesimal generators, the problem of finding fundamental invariants and covariants reduces to solving a system of first order linear homogeneous PDEs. In theory, the method of characteristics may be used to find solutions, however in practice it proves ineffective due to the sheer size of the system. Alternatively, if the invariants or covariants may be represented by polynomials, the problem reduces further to solving a system of linear equations. The successful application of this alternative to find invariants for all such C(M) in dimensions 3, 4 and 5 will be discussed. In addition, the cases where these invariants have been used to distinguish between equivalence classes in certain C(M) will be shown. In this talk we will consider the nature of the generating series for different families of walks in the quarter plane. In particular, we consider combinatorial criteria which ensure the holonomy of the counting sequences, that is, when sequences satisfying linear recurrences with polynomial coefficients. We will also ponder the nature of series associated to different classes of formal languages. Work in collaboration with Mireille Bousquet-Melou and Mike Zabrocki. Pseudo-random numbers are a critical part of modern computing, especially for use in simulations and cryptography, and consequently there are a myriad of algorithms for creating uniform pseudo-random sequences. However, many simulations ultimately require non-uniform random sequences. In this talk we introduce a new method to directly generate, without transformation of rejection, some non-uniform pseudo-random sequences. This method is a group-theoretic analogue of linear congruential pseudo-random number generation. We provide examples of such sequences, involving computations in Jacobian groups of plane algebraic curves, that have both good theoretical and statistical properties. Consider the space X[G] of doubly-indexed sequences over a finite abelian group G satisfying for all integers s,t. The left and downward shift induce an action of Z^2 on X[G]. Recently, we could prove a conjecture by Ward stating that the periodic point data of X[G] determine the group G up to isomorphism. Our approach is to view X[G] as the set of sequences annihilated by T-(S+1) where S,T stand for the two shift actions, and to study algebraic sets of sequences via their annihilators in the polynomial ring Z[S,T]. We will sketch a proof of this theorem and show that our method extends to many other spaces defined by linear recurrences over groups. Key words are Galois Rings, Teichmuller systems and Wieferich primes. The p-adic representation of binomial and multinomial coefficients comes into play. "Delaunay surfaces" are translationally-periodic constant mean curvature (CMC) surfaces of revolution. We compute explicit conformal parametrizations of Delaunay surfaces in each of the three space forms Euclidean 3-space R^3, spherical 3-space S^3 and hyperbolic 3-space H^3 by using the generalized Weierstrass type representation for CMC surfaces established by J. Dorfmeister, F. Pedit and H. Wu. This method is commonly called the DPW method, and is a method based on integrable systems techniques. We show that these parametrizations are in full agreement with those of the more classical approach. The DPW method is certainly not the simplest way to derive such parametrizations, but the DPW gives a means to construct other CMC surfaces (such as trinoids and perturbed Delaunay surfaces) that the classical methods have not given. The Delaunay surfaces are an important base for constructing other CMC surfaces using the DPW method, so explicitly understanding how the DPW method makes Delaunay surfaces is valuable. Let X denote a Tychonoff space, C(X) denote its ring of real-valued continuous functions, and bX denote X re-topologized by using its zero-sets as a base for the open sets. Then C(bX) is a (von Neumann) regular ring. Let G(X) denote the smallest regular subring of C(bX) that contains C(X). Then X is called an RG-space if G(X) = C(bX). In this talk we discuss some recent results concerning RG-spaces. Here is a non-exhaustive sampling: (a) Countably compact RG-spaces, and "small" pseudocompact RG-spaces, must be compact (and hence scattered and of finite Cantor-Bendixon degree). (b) There exist almost compact spaces of Cantor-Bendixon degree 2 that are not compact. (c) An RG-space must have a dense subspace of "very weak P-points" (i.e., points not in the closure of any countable discrete set), but there exists a countable space that is not RG but consists entirely of very weak P-points. This talk summarizes joint research with M. Hrusak and R. Raphael. In this talk I will discuss an application of the inductive version of the moving frames method due to Irina Kogan to the invariant theory of Killing tensors. The method is successfully employed to solve the problem of the determination of isometry group invariants (covariants) of Killing tensors of arbitrary valence defined in the Minkowski plane. This is joint work with Roman Smirnov. We give a generalization of the classical Helly's theorem on intersection of convex sets in R^N for the case of manifolds of nonpositive curvature. In particular, we show that if any N+1 sets from a family of closed convex sets on N-dimensional Cartan-Hadamard manifold contain a common point, and at least one of set is compact then all sets from this family contain a common point. Our proof use a variational argument. In R^N this proof is rather straightforward yet it seems new to us. The generalization to manifolds of nonpositive curvature relies on tools for nonsmooth analysis on smooth manifolds that we developed recently. This is joint research with Yuri Ledyaev and Jay Treiman.
{"url":"http://cms.math.ca/Events/summer05/abs/CP.html","timestamp":"2014-04-18T03:10:51Z","content_type":null,"content_length":"26813","record_id":"<urn:uuid:aa255e9a-caf2-4d9d-b3c3-8d5d91881997>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
User Yuji Tachikawa bio website visits member for 4 years seen Apr 9 at 0:21 stats profile views 3,631 There's no string-theory-overflow? Come on... 1d awarded Yearling 31 awarded Notable Question Mar How can gauge theory techniques be useful to study when topological manifolds can be triangulated? 24 revised corrected the dimension 24 accepted How can gauge theory techniques be useful to study when topological manifolds can be triangulated? 23 asked How can gauge theory techniques be useful to study when topological manifolds can be triangulated? Mar Hall-Littlewood functions and functions on the nilpotent cone 13 comment Thank you very much, Ben, and sorry for not distinguishing the nilcone and T^*G/B ... Mar Hall-Littlewood functions and functions on the nilpotent cone 13 comment Thanks, that should be the correct mathematical statement of what I wanted to say :-p 13 accepted Hall-Littlewood functions and functions on the nilpotent cone 12 asked Hall-Littlewood functions and functions on the nilpotent cone Oct Pontryagin square on $Y\times S^1$ where $Y$ is three-dimensional 7 comment Of course I shouldn't have trusted what's available on the internet written by anonymous persons! Thank you very much, Oscar, for helping me. (The original question which I erased out of shame was why we don't have $n^2=Sq^1 n=v_1 n=0$. ) Oct Pontryagin square on $Y\times S^1$ where $Y$ is three-dimensional 7 comment Thanks, I erased the question because I thought I should study this basic piece of algebraic topology first...Somehow there are many places on the internet where the cupping with $v_k$ equals $Sq^k$ even when it doesn't land in the top degree, like in Wikipedia: en.wikipedia.org/wiki/Stiefel-Whitney_class#Wu_classes or nLab: ncatlab.org/nlab/show/Wu+class . Doing the computation in $\mathbb{RP}^n$ it's clear it only works when the product lands in the top degree. Hmm ... 3 accepted Complex structure of the Teichmüller space in terms of Fenchel-Nielsen coordinates 3 asked Complex structure of the Teichmüller space in terms of Fenchel-Nielsen coordinates Oct Cohomology of the classifying space of $Ss(4m)$ 3 revised a correction in the table of Sq^k on the generators of H^*(Ss(16m),Z/2) 2 awarded Self-Learner 2 accepted Cohomology of the classifying space of $Ss(4m)$ 2 answered Cohomology of the classifying space of $Ss(4m)$ 29 awarded Nice Answer Sep Cohomology of the classifying space of $Ss(4m)$ 26 comment Thank you very much for the reference. I added an update in the question, saying that I only want to know the structure up to degree 11. Sep revised Cohomology of the classifying space of $Ss(4m)$ 26 added a comment on the degree
{"url":"http://mathoverflow.net/users/5420/yuji-tachikawa?tab=activity","timestamp":"2014-04-18T21:19:26Z","content_type":null,"content_length":"46343","record_id":"<urn:uuid:d4247d0c-8c23-4601-b2b8-2db9d2bd9f9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/firstfrostbyte/medals","timestamp":"2014-04-17T04:05:23Z","content_type":null,"content_length":"110668","record_id":"<urn:uuid:35a6e7b4-8254-47f7-b0a0-c169d4c1f687>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply The function: is very useful for calculating sums. That's because: If you want to calculate this: , you may express the f(x) = x^i as: where r(x) is the "remainder", which has less power than f(x). Example for f(x)=x^1: S_2 (x) = (x+1)^2-x^2=x^2+2x+1-x^2=2x+1 (S_2 (x))/2=x+1/2 f(x)=x^1=x=0.5 S_2(x) - 0.5 = 0.5(S_2(x)-1). There's also and binomial proof, which is more usable and universal, but it's harder too.
{"url":"http://www.mathisfunforum.com/post.php?tid=3146&qid=32698","timestamp":"2014-04-16T13:28:56Z","content_type":null,"content_length":"20610","record_id":"<urn:uuid:c8d01bd6-77ec-4c64-9fc4-9b155f6ecacc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
About 33 Bits See also: Sitemap This is a blog about my research on privacy and anonymity. The title refers to the fact that there are only 6.6 billion people in the world, so you only need 33 bits (more precisely, 32.6 bits) of information about a person to determine who they are. This fact has two related consequences. First, a lot of traditional thinking about anonymous data relied on the fact that you can hide in a crowd that’s too big to search through. That notion completely breaks down given today’s computing power: as long as the bad guy has enough information about his target, he can simply examine every possible entry in the database and select the best The second consequence is that 33 bits is not really a lot. If your hometown has 100,000 people, then knowing your hometown gives me 16 bits of entropy about you, and only 17 bits remain. But the real danger is that information about a person’s behavior, which was traditionally not considered personally identifying, can be used to cause serious privacy breaches in a variety of different This blog will announce, explain and elaborate on my research as it relates to the above theme. I will also use it as an outlet for my opinions on the broader technical, policy, business and social issues related to my work. To stay on top of future posts, subscribe to the RSS feed or follow me on Twitter or Google+. • Hello! Landed up here through a very strange series of steps (one of them being the fact that your name is the same as another friend of mine who’s doing a PhD). You say “Part of the reason why I started this blog is in the hope of accelerating this process by reaching out to people outside the computer science community.” but I’m afraid I didn’t understand anything much beyond your about page… Is there any reading you might recommend otherwise for those outside the community? Thanks. :) • Hi T, If you understood the About page, that’s already a step up from normal academic discourse :-) But yeah, I realize that my posts thus far are a bit heavy on the jargon. I have a few posts of a less technical nature coming soon, I promise! Meanwhile, there have been a few articles in the press about our work that you should be able to make sense of (example). And here is an article about the AOL search log incident. • Hi.. I’m a doctoral student at Nova Southeastern Universtiy working on privacy and ecommerce. Just found your blog. What’s the math behind the 32.6 bits? • It’s the logarithm of the world human population. Please see the definition of entropy. A quantity that has N possible values has an entropy of at most log_2(N) bits, by definition. • I can’t believe that a doctoral student just asked about “the math” behind this. You don’t even need to know about log2 in order to figure this out. 1 bit can hold two values 2 bits can hold four values how many bits do you need to hold 6 billion values? One can just start entering powers of 2 in his calculator until his reaches that sum!. I`m sorry if I come across as bashing but I’m genuinely freaked out. □ Relax, she didn’t say she was a computer scientist. It is perfectly reasonable for a non-CS/math/physics person not to be able to figure out entropy without any explanation. ☆ Great work, very interesting Arvind. Somewhat unrelated, but I’m hoping someone comes up with a “password meter” that shows actual bits of entropy. As you are no doubt aware, many folks choose, um, poorly. http://www.webresourcesdepot.com/10-password-strength-meter-scripts-for-a-better-registration-interface/ has varying levels; some are graphically great, but some call back to servers for parts of their functionality. I question the algorithms that are used to determine effectiveness; I’m imagining things like the old Pgp versions that used to show a calculated entropy in bits (now you know how I found your site!) in real time, as you typed what you hoped was a good pass phrase. ○ Thanks for your comment. This thread is too getting deep, I’ve replied to you below. • John, It would indeed be good to have such a script; at the same time, I think limiting the rate at which an attacker can guess passwords has a far greater effect on password security than making the user choose stronger passwords. The entropy of a password is only vaguely defined. It can only be measured relative to the algorithm that the attacker is going to use, which of course is unknown. I have a paper on password cracking, which might help explain what I’m talking about. • You probably should mention that it is 32.6 distinguished bytes of data. Although this should be clear to anybody ever having heard the term ‘entropy’ in the computer science context, comments above show that this is not the case for all of your readers. The actual truth is, that you need only 33 bits to encode an unique identity for every person. Needing only 33 bits to identify everyone sounds freakish alarming (considering that my name above in ASCII already takes up 32 bits. But it could be a little more clear, that you need very special 33 bits. Having said that, I envy you for having had the idea for that title, instead of me that is. □ 33 bits is the mathematical hard limit for the minimum number of bits of data you need to uniquely identify a person. Any amount of *data* that uniquely identifies a single person out of 6.6 billion equiprobable people, transfers 32.6 bits of *information*. The distinction between ‘data’ and ‘information’ is very important. I could send you a unique identification number in 33 bits of data, or I could send you a full name, date of birth and address in maybe 400 bits of data, and both of those messages would contain 32.6 bits of information for identifying the person. You can only fit 33 bits of information into 33 bits of data if every single bit eliminates exactly half of the possibilities. • Arvind Sahib, when you say that you only require 33 bits of information and that my hometown contributes 16 or so bits, you are implying that getting more of the relevant bits is as easy as finding my hometown. Also there might only be 6billion humans on this earth but it doesn’t mean we can’t have 6000billion identities, right? □ No, I’m not implying that. Entropy is simply a mathematical construct that describes how much information there is to be gained from a piece of data. It says nothing about how easy it is to find data about a person. The fact that is is easy to find auxiliary sources of data in order to determine someone’s identity is an empirical observation. “Also there might only be 6billion humans on this earth but it doesn’t mean we can’t have 6000billion identities, right?” That is true; however – 1. unless you make sure your behavior under each identity is completely independent of the others, your various identities can still be tied to each other. 2. the effort needed to create even a single believable alternate virtual identity is too high for most people to bother with. 3. going from 6 billion to 6000 billion identities only increases the entropy requirement from 33 to 43 bits, which is a negligible increase! • Your 33 bits research is very well illustrated by the following blog post: Death Note: L, Anonymity & Eluding Entropy Describes how in the movie Death Note the super detective is able to narrow down the killer from the worlds population to one individual. PS: Death Note is an interesting Japanese movie. • Just to give a bit of information theory background for readers who don’t have much exposure to computer science or mathematics, when we say a “bit” we mean a binary digit–a piece of information that can have one of two values (which we express in binary–the base 2 number system–as 1 or 0). Depending on the context, we often name those value pairs true/false, yes/no, on/off, or high/low, but they can represent any single choice between two options. We can then encode data by representing the choices it took to describe that data. For example, let’s say we want to represent different types of people using binary. We want to encode whether they are male or female, an American citizen or not, and a native speaker of English or not. I know some people don’t identify as either male or female, but let’s assume for simplicity that the above three parameters all are binary choices. We now want to find out the number of distinct people we can categorize with these three parameters. Since there are two options for each choice, each new choice doubles the number of people we can categorize. So one choice lets us encode 2 people (either male or female). Two choices let us encode 2*2=4 people (either male American, female American, male non-American, or female non-American). Three choices let us encode 2*2*2=8 people (two sets of the above four people, with one set speaking English natively and the other not). We can represent this using exponents as 2^n where n is the number of choices. So when there are three choices we can differentiate between 2^3=8 different people. To see this easily, we can assign each type of person a three-digit binary number, where the first digit represents their gender (0 is male and 1 is female), the second represents if they are American (0) or not (1), and the third represents if they speak English natively (0) or not (1). So we have 000, 001, 010, 011, 100, 101, 110, and 111. These work out to be the binary representations of the base ten numbers 0 through 7, which is a range of eight distinct values. So we know the math worked out. To summarize so far, three binary digits can represent eight different values because three yes/no choices can distinguish between eight different people. In math terms, 2^3=8. Likewise, 2^4=16, so four binary digits can represent 16 different values–the numbers from 0 to 15 or just sixteen different people. So how did the author of the article know that we would need 33 bits to differentiate between the 6.6 billion people (I think it’s over 7 billion now, but we’ll stick to the data in the article) in the world? Put another way, how many choices between two options must we make to differentiate between 6.6 billion people? Using the math above, we can translate this into the equation 2^x = 6.6 billion, where x is the number of choices or binary digits or bits of information. To solve for x, mathematicians take what’s called the logarithm–the inverse of an exponent. Computer scientists represent the base 2 logarithm with the notation lg. So to use our above examples, to solve for x in 2^x=8, we can say x = lg(8) = 3. In English, that’s “the base 2 logarithm of 8 is 3″ or “the number of bits required to represent eight distinct things is 3.” Likewise, lg(16) = 4 and lg(32) = 5. So getting back to the equation 2^x = 6.6 billion, we can rewrite it as x = lg(6.6 billion), which my calculator gives me as 32.61981887845735. So we’d need 33 bits to represent up to 2^33 = 8,589,934,592 distinct people. Hope that makes this blog accessible to laypeople! Trackback this post | Subscribe to the comments via RSS Feed
{"url":"http://33bits.org/about/","timestamp":"2014-04-21T07:37:12Z","content_type":null,"content_length":"50707","record_id":"<urn:uuid:a798b07a-156f-45c6-af8f-7ac0de4ca4b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Family of Enriques surfaces and GRR, Part 2 up vote 3 down vote favorite As I mentioned in my previous post, I am studying the article Moduli of Enriques surfaces and Grothendieck-Riemann-Roch. The Grothendieck-Riemann-Roch theorem is applied there to show that, for any family of Enriques surfaces $f:Y\longrightarrow T$, the line bundle $$\mathcal{L} := R^0 f_\ast \Big ((\Omega^2_{Y/T})^{\ otimes 2}\Big)$$ is a torsion line bundle, i.e., some tensor power $\mathcal{L}^{\otimes n}$ is isomorphic to the structure sheaf on $T$. To this extent, one applies GRR to the morphism $f$ and the structure sheaf $\mathcal{O}_Y$. The problem I have now is with the "relative tangent sheaf". I am guessing this is the the quotient sheaf $$f^\ast \mathcal{T}_T/\mathcal{T}_Y.$$ Q1. Why is this well-defined? That is, why do we have an injection $ \mathcal{T}_Y \longrightarrow f^\ast \mathcal{T}_T$? Q2. How can one determine the Chern classes of $\mathcal{T}_f$ by means of the fibres? That is, can one use the structure on the fibres (Enriques surfaces) to determine $c_i(\mathcal{T}_f)$? [New questions] Q3. Let $E$ be a fibre of $f:Y\longrightarrow T$ with injection $i:E\longrightarrow Y$. Is the ringmorphism $i^\ast:A^\cdot Y \longrightarrow A^\cdot E$ injective? If not, is it injective after tensoring with $\mathbf{Q}$? Let $c_i=c_i(T_f)$. Q4. We have two formulas from the GRR. The first is $1 = \frac{1}{12} f_\ast(c_1^2+c_2).$ This is the degree 0 part. The second comes from the degree 1 part and reads $0 = \frac{1}{24}f_\ast(c_1\cdot c_2).$ Now, why is $f_\ast(c_1^2) = 0$ as is suggested by the article? add comment 2 Answers active oldest votes Q1: It is the other way round. For a smooth family the differential $T_Y \to f^\ast T_T$ is surjective and the relative tangent is the kernel, so you have an exact sequence $0 \to T_f \to T_Y \to f^\ast T_T \to 0$. In this way the tangent to $f$ actually restricts to the tangent of the fibers. Q2: I don't think that the classes $c_i(T_f)$ are determined by the fibers alone; they depend on the family. It does not even make sense to say that $c_i(T_f)$ are determined by the fibers since these classes live in $H^{2i}(Y)$ anyway, so you have to know at least the total space. But since $T_f$ restricts to the tangent of the fibers, you know, by naturality of the Chern classes, that if $i \colon E \to Y$ is the inclusion of a fiber $i^\ast c_i(T_f) = c_i(T_E)$. up vote 3 down vote And these you can compute using the fact that $E$ is Enriques. Namely $2 c_1(T_E) = 0$ since twice the canonical is trivial and $c_2(T_E) = \chi(E) = 12$. Q3: Surely it is not injective in the top degree, for trivial dimensional reasons. I do not see any reason why it should be in other degrees. Q4: As is written in the article, this follows from $f_\ast c_2 = 12$. This is more or less clear in cohomology. In this case $f_*$ is the integration along fibers, and since $c_2(T_E)$ is $12$ times the fundamental class of $E$ for all fibers $E$ (see Q2), that integral is $12$. To translate this in the Chow language, I think the folllowing will do. Let $D$ be a cycle representing $c_2(T_f)$. Since $Y$ is smooth, we can compute the intersection number $D \cdot E = c_2(T_f) \cap E = c_2(T_E) \cap E = 12$. So $D$ intersectts the generic fiber in $12$ points, and the morphism $D \to T$ has degree $12$. In follows that $f_\ast D = 12 [T]$, which is what you want. So, clearly $c^2_1(T_E)=\frac{1}{4}(2c_1(T_E))^2=0$ in the Chow ring of $E$ tensored with $\Q$ for every fibre E. Can one conclude from this that $c^2_1(T_f)=0$ in the Chow ring of $Y$ tensored with $\Q$? The reason I'm asking is the following. The author of the article says that Noether's formula implies that $f_\ast(c_2(T_f))=12$ in the Chow ring of $T$. All I can see though is that $$f_\ast(c^2_1(T_f)+c_2(T_f))=12.$$ (I take the degree 2 part of the GRR identity.) So shouldn't $c^2_1(T_f)$ be zero? Another question: is the ringmorphism $i^\ ast:A(Y)\rightarrow A(E)$ injective? – Ari Mar 4 '10 at 18:51 that should be "tensored with $\mathbf{Q}$". – Ari Mar 4 '10 at 19:00 Yes, we take $c_i$ to be the $c_i$ of the relative tangent sheaf $\mathcal{T}_f$. Then, for any inclusion $i:E \longrightarrow Y$ of a fibre of $f$, we know that $i^\ast\mathcal{T}_f = \mathcal{T}_E$. Therefore, $i^\ast c_1^2 = 0$ in $A^2(E)\otimes_\mathbf{Z} \mathbf{Q}$. Now, I do not see why this would imply $c_1^2 =0$ or $f_\ast(c_1^2) = 0$... – Ari Mar 5 '10 at By the way, I think you should post separate questions, if you have some more. – Andrea Ferretti Mar 5 '10 at 13:12 add comment This is an answer to Q1: The relative tangent bundle is the vector bundle on $Y$ whose fibre at a point is the tangent space to the fibre of $f$ passing through that point. up vote 2 How do we tell if a tangent vector at $y$ is pointing along the fibre through $y$? Because it is killed by the derivative mapping $Df_y:\mathrm{T}Y_y \to \mathrm{T}T_{f(y)}.$ We can down vote organize all these maps into a single map $Df:\mathcal{T}_Y \to f^*\mathcal{T}_T,$ and the relative tangent sheaf is then the kernel of this map. (The dual picture with differentials may be more familiar: in that picture we have $df:f^{\*}\Omega_{T} \to \Omega_Y,$ and the relative differentials are the cokernel of this map.) add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry intersection-theory kt.k-theory-homology or ask your own question.
{"url":"http://mathoverflow.net/questions/16984/family-of-enriques-surfaces-and-grr-part-2","timestamp":"2014-04-17T04:20:52Z","content_type":null,"content_length":"62678","record_id":"<urn:uuid:6d134368-7e6e-4623-a11b-890749e708ea>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Christine Doughty Christine Doughty's Publications Peer Reviewed Journal Publications 1. Tsang, C.-F., T.A. Buscheck, and C. Doughty, Aquifer thermal energy storage: a numerical simulation of Auburn University field experiments, Water Resour. Res., 17, 3, 647-658, 1981. 2. Doughty, C., G. Hellstrom, C.-F. Tsang, and J. Claesson, A dimensionless parameter approach to the thermal behavior of an aquifer thermal energy storage system, Water Resour. Res., 18, 3, 571-587, 1982. 3. Buscheck, T.A., C. Doughty, and C.-F. Tsang, Prediction and analysis of a field experiment on a multilayered aquifer thermal energy storage system with strong buoyancy flow, Water Resour. Res., 19, 5, 1307-1315, 1983. 4. Tsang, C.-F., D.C. Mangold, C. Doughty, and M.J. Lippmann, Prediction of reinjection effects in the Cerro Prieto geothermal system, Geothermics, 13, 1/2, 141-162, 1984. 5. Doughty, C. and C.-F. Tsang, A comparative study of a heat and fluid flow problem using three models of different levels of sophistication, Mathematical Modelling, 8, 412-418, 1987. 6. Doughty, C. and K. Pruess, A semianalytical solution for heat pipe effects near high-level nuclear waste packages buried in partially saturated geological media, Intl. Journal of Heat and Mass Transfer, 31, 1, 79-90, 1988. 7. Doughty, C. and K. Pruess, A similarity solution for two-phase fluid and heat flow near high-level nuclear waste packages emplaced in porous media, Intl. Journal of Heat and Mass Transfer, 33, 6, 1205-1222, 1990. 8. Doughty, C. and K. Pruess, A similarity solution for two-phase water, air, and heat flow near a linear heat source in a porous medium, Journal of Geophysical Res., 97 (B2), 1821-1838, 1992. 9. Nir, A., C. Doughty, and C.-F. Tsang, Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone, Advances in Water Resources, 15, 153-166, 1992. 10. Amistoso, A.E., B.G. Aquino, Z.P. Aunzo, O.T. Jordan, F.X.M. Sta. Ana, G.S. Bodvarsson, and C. Doughty, Reservoir analysis of the Palinpinon geothermal field, Negros Oriental, Philippines, Geothermics, 22, (5/6), 555-574, 1993. 11. Doughty, C., J.C.S. Long, K. Hestir, and S.M. Benson, Hydrologic characterization of heterogeneous geologic media with an inverse method based on iterated function systems, Water Resour. Res., 30, 6, 1721-1745, 1994. 12. Liu, H.H., C. Doughty, and G.S. Bodvarsson, An active fracture model for unsaturated flow and transport in fractured rocks, Water Resour. Res., 34, 10, 2633-2646, 1998. 13. Doughty, C., Investigation of conceptual and numerical approaches for evaluating moisture, gas, chemical, and heat transport in fractured unsaturated rock, Journal of Contaminant Hydrology, 38, 1-3, 69-106, 1999. 14. Vasco, D.W., K. Karasaki, and C. Doughty, Using surface deformation to image reservoir dynamics, Geophysics, 65,1,1-16, 2000. 15. Johnson, T.M., R.C. Roback, T.L. McLing, T.D. Bullen, D.J. DePaolo, C. Doughty, R.J. Hunt, R.W. Smith, L.D. Cecil, and M.T. Murrell, Groundwater “fast paths” in the Snake River Plain aquifer: Radiogenic isotope ratios as natural groundwater tracers, Geology, 28, 10, 871-874, 2000. 16. Faybishenko, B., C. Doughty, M. Steiger, J.C.S. Long, T.R. Wood, J.S. Jacobsen, J. Lore, and P.T. Zawislanski, Conceptual model of the geometry and physics of water flow in a fractured basalt vadose zone, Water Resour. Res., 36, 12, 3499-3520, 2000. 17. Doughty, C., Numerical model of water flow in a fractured basalt vadose zone, Box Canyon site, Idaho, Water Resour. Res., 36, 12, 3521-3534, 2000. 18. Salve, R., J.S.Y. Wang, and C. Doughty, Liquid-release tests in unsaturated fractured welded tuffs: I. Field investigations, Journal of Hydrology, 256, 1-2, 60-79, 2002. 19. Doughty, C., R. Salve, and J.S.Y. Wang, Liquid-release tests in unsaturated fractured welded tuffs: II. Numerical modeling, Journal of Hydrology, 256, 1-2, 80-105, 2002. 20. Doughty, C. and K. Karasaki, Flow and transport in hierarchically fractured rock, Journal of Hydrology, 263, 1-4, 1-22, 2002. 21. Tsang, C.-F. and C. Doughty, A particle-tracking approach to simulating transport in a complex fracture, Water Resour. Res., 39, 7, 1174, doi:10.1029/2002WR001614, 2003. 22. Tsang, C.-F. and C. Doughty, Multirate flowing fluid electric conductivity logging method, Water Resour. Res., 39, 12, 1354, doi:10.1029/2003WR002308, 2003. 23. Doughty, C. and K. Karasaki, Modeling flow and transport in saturated fractured rock to evaluate site characterization needs, IAHR Journal of Hydraulics, 42, extra issue, 33-44, 2004. 24. Doughty, C. and K. Pruess, Modeling supercritical carbon dioxide injection in heterogeneous porous media, Vadose Zone Journal, 3, 3, 837-847, 2004. 25. Doughty, C. and C.-F. Tsang, Signatures in flowing fluid electric conductivity logs, Journal of Hydrology, 310, 1-4, 157-180, 2005. 26. Doughty, C., S. Takeuchi, K. Amano, M. Shimo, and C.-F. Tsang, Application of multi-rate flowing fluid electric conductivity logging method to Well DH-2, Tono Site, Japan, Water Resour. Res., 41, W1041, doi:10.1029/2004WR003708, 2005. 27. Hovorka, S.D., S.M. Benson, C. Doughty, B.M. Freifeld, S. Sakurai, T.M. Daley, Y.K. Kharaka, M.H. Holtz, R.C. Trautz, H.S. Nance, L.R. Myer, and K.G. Knauss, Measuring permanence of CO[2] storage in saline formations: the Frio experiment, Environmental Geosciences, 13, 2, 1-17, 2006. 28. Doughty, C., Modeling geologic storage of carbon dioxide: comparison of hysteretic and non-hysteretic curves, Energy Conversion and Management, 48, 6, 1768-1781, doi:10.1016/ j.enconman.2007.01.022, 2007. 29. Doughty, C., B.M. Freifeld, and R.C. Trautz, Site characterization for CO[2] geologic storage and vice versa – the Frio brine pilot, Texas, USA as a case study, Environmental Geology, 54, 8, 1635-1656, doi: 10.10007/s00254-007-0942-0, 2008. 30. Finsterle, S., C. Doughty, M.B. Kowalsky, G.J. Moridis, L. Pan, T. Xu, Y. Zhang, and K. Pruess, Advanced vadose zone simulation using TOUGH, Vadose Zone Journal, 7, 601–609, doi:10.2136/ vzj2007.0059, 2008. 31. Doughty, C., C.-F. Tsang, K. Hatanaka, S. Yabuuchi, and H. Kurikami, Application of direct-fitting, mass-integral, and multi-rate methods to analysis of flowing fluid electric conductivity logs from Horonobe, Japan, Water Resour. Res., 44, W08403, doi:10.1029/2007WR006441, 2008. 32. Tsang, C.-F., C. Doughty, and M. Uchida, Simple model representations of transport in a complex fracture and their effects on long-term predictions, Water Resour. Res., 44, W08445, doi:10.1029/ 2007WR006632, 2008. 33. Doughty, C., Estimating plume volume for geologic storage of CO[2] in saline aquifers, Ground Water, 46, 6, 810-813, 2008. 34. Doughty, C., Investigation of CO2 plume behavior for a large-scale pilot test of geologic carbon storage in a saline formation, Transport in Porous Media, special issue on geologic carbon storage, doi:10.1007/S112423-009-9396-z, 2009. 35. Tsang, C.-F. and C. Doughty, Insight from simulations of single-well injection-withdrawal tracer tests on simple and complex fractures, submitted to Water Resour. Re., June, 2009. 36. Doughty, C. and C.-F. Tsang, Analysis of three sets of SWIW tracer-test data using a two-population complex fracture model, submitted to Geophysical Research Letters, June, 2009. 37. Doughty, C., C.-F. Tsang, S. Yabuuchi, and T. Kunimaru, Simultaneous determination of hydraulic properties of multiple fractures in borehole PB-V01, Horonobe, Japan and identification of fractures with natural flow, in preparation, June, 2009. Books and Book Chapters 1. Javandel, I., C. Doughty, and C.-F. Tsang, Groundwater Transport: handbook of mathematical models, 228 pp., Water resources monograph 10, American Geophysical Union, Washington D.C., 1984. 2. Long, J.C.S., C. Doughty, K. Hestir, and S. Martel, Modeling heterogeneous and fractured reservoirs with inverse methods based on Iterated Function Systems, in Reservoir characterization III, Bill Linville, Editor, PennWell Books, Tulsa, Oklahoma, 1993. 3. Long, J.C.S., C. Doughty, A. Datta-Gupta, K. Hestir, and D.W. Vasco, Component characterization: An approach to fracture hydrogeology, in Subsurface flow and transport: a stochastic approach, G. Dagan and S.P. Neuman, Editors, Cambridge University Press, New York, 1997. 4. Benito, P.H., P.J. Cook, B. Fayishenko, B. Freifeld, and C. Doughty, Cross-well air-injection parcker tests for the assessment of pneumatic connectivity in fractured, unsaturated basalt, in Rock mechanics for industry, Proceedings of the 37^th U.S. Rock Mechanics Symposium, Vail, Colorado, USA, June 6-9, 1999, B. Amadei, R.L. Kranz, G.A. Scott and P.H. Smealie, Editors, 843-851, A.A. Balkema, Rotterdam, 1999. 5. Doughty, C. and B. Faybishenko, Modeling of water flow and tracer breakthrough curves in fractured basalt (lessons learned and future investigations), in Vadose zone science and technology solutions, B.B. Looney and R.W. Falta, Editors, Battelle Memorial Institute, Columbus, Ohio, 2000. 6. Faybishenko, B., P. A. Witherspoon, C. Doughty, J.T. Geller, T.R. Wood, and R.K. Podgorney, Multi-scale investigations of liquid flow in a fractured basalt vadose zone, in Flow and transport through unsaturated fractured rock, second edition, D.D. Evans, T.J. Nicholson, and T.C. Rasmussen, Editors, Geophysical Monograph 42, 161-182, American Geophysical Union, Washington D.C., 2001. 7. Hovorka, S.D., C. Doughty, S.M. Benson, K. Pruess, and P.R. Knox, The impact of geological heterogeneity on CO2 storage in brine formations: a case study from the Texas Gulf Coast, In Geological storage of carbon dioxide, S.J. Baines and R.H. Worden, Editors, Special Publication 233, 147-163, Geological Society, London, 2004. 8. Tsang, C.-F., C. Doughty, J. Rutqvist, and T. Xu, Modeling to understand and simulate physico-chemical processes of CO2 geological storage, In Carbon capture and geologic sequestration: Integrating technology, monitoring, and regulation, E.J. Wilson and D. Gerard, Editors, Blackwell Publishing, Ames, Iowa, 2007. 9. Doughty, C. and L.R. Myer, Scoping calculations on leakage of CO2 in geologic storage, in Science and technology of carbon sequestration, B. McPherson and E. Sundquist, Editors, American Geophysical Union, Washington DC, in press, 2009. Thesis and Dissertation 1. Doughty, C., Two phase fluid and heat flow in fractured/porous media: a similarity solution, M.Sc. Thesis, Department of Materials Science and Mineral Engineering, University of Calif., Berkeley, 2. Doughty, C., Estimation of hydrologic properties of heterogeneous geologic media with an inverse method based on iterated function systems, Ph.D. Dissertation, Department of Materials Science and Mineral Engineering, University of Calif., Berkeley, 1995 (LBL-38136). Conference Papers and Presentations 1. Tsang, C.-F., T.A. Buscheck, and C. Doughty, Aquifer thermal energy storage - recent parameter and site-specific studies, in Proceedings, International Conference: Seasonal Thermal and Compressed Air Energy Storage, Seattle, Washington, October 19-21, 1981. 2. Tsang, C.-F., D.C. Mangold, C. Doughty, and M.J. Lippmann, The Cerro Prieto reinjection tests: studies of a multilayer system, in Proceedings, Third Symposium on the Cerro Prieto Geothermal Field, San Francisco, Calif., March 24-26, 1981. 3. Tsang, C.-F. and C. Doughty, A non-isothermal well test analysis method, in Proceedings, ASME-JSME Thermal Engineering Conference, Honolulu, Hawaii, March 20-24, 1983. 4. Doughty, C. and C.-F. Tsang, Control of the movement of a fluid plume by injection and production procedures in Proceedings, ASME-JSME Thermal Engineering Conference, Honolulu, Hawaii, March 20-24, 1983. 5. Tsang, C.-F., D.C. Mangold, C. Doughty, and I. Javandel, A study of contaminant plume control in fractured-porous media, in Proceedings, National Water Well Convention, Columbus, Ohio, May 22, 6. Doughty, C., A. Nir, C.-F. Tsang, and G.S. Bodvarsson, Heat storage in unsaturated soils - initial theoretical analysis of storage design and operational methods, in Proceedings, International Conference on Subsurface Heat Storage in Theory and Practice, Stockholm, Sweden, June 6-8, 1983. 7. Tsang, C.-F. and C. Doughty, Detailed validation of a liquid and heat flow code against field performance, SPE-13503,in Proceedings, Eighth SPE Symposium on Reservoir Simulation, Dallas, Texas, Feb. 10-13, 1985. 8. Doughty, C. and C.-F. Tsang, Investigation of the vertical-flow aquifer thermal energy storage concept and numerical simulation of the Dorigny field experiment, in Proceedings, Third International Conference on Energy Storage for Building Heating and Cooling, Toronto, Canada, Sept. 22-26, 1985. 9. Nir, A., C. Doughty, and C.-F. Tsang, Seasonal heat storage in unsaturated soils: example of design study, in Proceedings, 21st Intersociety Energy Conversion Engineering Conference, San Diego, Calif., August 25-29, 1986. 10. Bensabat, J., C. Doughty, E. Korin, A. Nir, and C.-F. Tsang, Validation experiments of seasonal thermal energy storage models in unsaturated soils, presented at Jigastock 88, Journees internationales sur le stockage de l’energie thermique et la geothermie appliquee, Versailles, France, October 17-20, 1988. 11. Doughty, C. and K. Pruess, A similarity solution for two-phase fluid and heat flow near high-level nuclear waste packages emplaced in porous media, presented at Fall AGU Meeting, San Francisco, December, 1988. 12. Doughty, C. and K. Pruess, Verification of TOUGH2 against a semianalytical solution for transient two-phase fluid and heat flow in porous media, presented at The TOUGH Workshop, Lawrence Berkeley Lab., Berkeley, Calif., September 13-14, 1990. 13. Doughty, C., C.-F. Tsang, E. Korin, and A. Nir, Seasonal storage of thermal energy in unsaturated soils: modeling, simulation, and field validation, presented at Thermastock ‘91, fifth International Congress on Thermal Energy Storage, Scheveningen, The Netherlands, May 13-16, 1991. 14. Doughty, C. and K. Pruess, A mathematical model for two-phase water, air, and heat flow around a linear heat source emplaced in a permeable medium, presented at The 1991 ASME/AIChE National Heat Transfer Conference, Minneapolis, Minnesota, July 28-31, 1991, Rep. LBL-30050, Lawrence Berkeley Lab., Berkeley, Calif., 1991. 15. Doughty, C., J.C.S. Long, and K. Hestir, Characterization of heterogeneous geologic media using inverse methods on models with hierarchical structure, presented at Fall AGU Meeting, San Francisco, December, 1991. 16. Doughty, C., Hydrological inversions using Iterated Function Systems, Invited talk, SIAM Conference on Mathematical and Computational Issues in the Geosciences, Houston, Texas, April 19-21, 1993. 17. Doughty, C., J.C.S. Long, E.L. Majer, T.M. Daley, J.E. Peterson Jr., and L.R. Myer, LBL/Industry heterogeneous reservoir performance definition project - Gypsy site, presented at BPO Contractor Review Conference, Fountainhead, Oklahoma, July 18-22, 1993. 18. Doughty, C. and J.T. Geller, Effects of degassing on aqueous flow in fractures: dynamic versus equilibrium behavior, Invited presentation, Two-phase Flow in Fractures Workshop, Berkeley, Calif., November 3-4, 1993. 19. Merzlyakov, E., C. Doughty, and A. Nir, Analytical approximation of a design of a seasonal thermal energy storage in a semi-arid zone, in Proceedings, Sixth International Conference on Thermal Energy Storage, Espoo, Finland, August 22-25, 1994. 20. Long, J.C.S., C. Doughty, D.W. Vasco, A. Datta-Gupta, K. Hestir, E.L. Majer, and J.E. Peterson Jr., Fractured reservoir characterization through inverse analysis of well-test data and seismic imaging, presented at SEG 64th Annual Meeting, Los Angeles, Calif., October 23-28, 1994. 21. Doughty, C. and J.C.S. Long, Characterization of heterogeneous geologic media at the scale of interest for applications, presented at Fall AGU Meeting, San Francisco, December, 1994. 22. Doughty, C., Flow reduction due to degassing and redissolution phenomena, presented at The TOUGH Workshop, Lawrence Berkeley National Lab., Berkeley, Calif., March 20-22, 1995. 23. Geller, J.T., C. Doughty, and J.C.S. Long, Two-phase flow in regionally saturated fractured rock near excavations, presented at the 6th Annual International High-Level Radioactive Waste Management Conference and Exposition, Las Vegas, Nevada, April 30-May 5, 1995. 24. Datta-Gupta, A., E.L. Majer, J.E. Peterson Jr., D.W. Vasco, C. Doughty, J.C.S. Long, J. Queen, P.S. D’Onfro, and W.D. Rizer, An integrated approach to characterization of fractured reservoirs, presented at SEG 65th Annual Meeting, Houston, Texas, October, 1995. 25. Faybishenko, B., J.C.S. Long, C. Doughty, R. Salve, P. Zawislanski, J. Jacobsen, and J.B. Sisson, Investigations of scale effects and preferential flow in the vadose zone of fractured basalt at Box Canyon analog site in Idaho, presented at GSA Fall Meeting, Denver, Colorado, October, 1996. 26. Faybishenko, B., J.C.S. Long, J.B. Sisson, C. Doughty, R. Salve, K. Williams, P. Zawislanski, and J. Jacobsen, Field ponded infiltration test in fractured basalt at Box Canyon analog site in Idaho: Summary of preliminary results, presented at AGU Fall Meeting, San Francisco, December, 1996. 27. Doughty, C., Hydrogeologic characterization using the iterated function system (IFS) inverse method, presented at the Joint USAF/Army Contractor/Grantee Meeting, Panama City, Florida, January 14-17, 1997. 28. Wood, T.R., T.M. Stoops, B. Faybishenko, C. Doughty, and J.S. Jacobsen, A conceptual model of tracer transport in fractured basalt: Large scale infiltration test revisited, presented at GSA Fall Meeting, Salt Lake City, Utah, October, 1997. 29. Faybishenko, B., C. Doughty, J.C.S. Long, and T.R. Wood, Conceptual model of geometry and physics of liquid flow in unsaturated fractured basalt at Box Canyon site, presented at AGU Fall Meeting, San Francisco, December, 1997. 30. Doughty, C., Numerical modeling of hot air injection and ponded infiltration tests in unsaturated fractured basalt at the Box Canyon site, presented at AGU Fall Meeting, San Francisco, December, 31. Oldenburg, C.M. and C. Doughty, Data fusion and inverse modeling for SELECT, in Proceedings, Air Force Office of Scientific Research Annual Review, Snowbird, Utah, May, 1998. 32. Doughty, C., Numerical modeling of field tests in unsaturated fractured basalt at the Box Canyon site, presented at TOUGH Workshop ’98, Berkeley, Calif., May, 1998, Rep. LBNL-41920, Lawrence Berkeley National Lab., Berkeley, Calif., 1998. 33. Sahoo, D., T.M. Johnson, and C. Doughty, Utilizing natural Sr isotope ratios to determine preferential flow paths in subsurface aquifers on a regional scale, presented at AGU Spring Meeting, Boston, May, 1998. 34. Johnson T.M., D. Sahoo, T.L. McLing, C. Doughty, D.J. DePaolo, and R.W. Smith, EM/ER project: Investigation of groundwater flow paths through combined inversion of strontium isotope ratios and hydraulic head data, U.S. Dept. of Energy Environmental Management Science Program Workshop, Chicago, IL, July, 1998. 35. Faybishenko, B., P.A. Witherspoon, C. Doughty, T.R. Wood, R.K. Podgorney, and J.T. Geller, Multi-scale conceptual approach to describe flow in a fractured vadose zone, presented at AGU Fall Meeting, San Francisco, December, 1998. 36. Li, J.H. and C. Doughty, Forward and backward particle tracking on earth-ocean-atmosphere joint simulation, presented at AGU Fall Meeting, San Francisco, December, 1999. 37. Hovorka, S.D., C. Doughty, P.R. Knox, C.T. Green, K. Pruess, and S.M. Benson, Evaluation of brine-bearing sands of the Frio Formation, upper Texas Gulf Coast for geological sequestration of CO [2], presented at First National Conference on Carbon Sequestration, National Energy Technology Lab., Washington DC, May 14-17, 2001. 38. Doughty, C., K. Pruess, S.M. Benson, S.D. Hovorka, P.R. Knox, and C.T. Green, Capacity investigation of brine-bearing sands of the Frio Formation for geologic sequestration of CO2, presented at First National Conference on Carbon Sequestration, National Energy Technology Lab., Washington DC, May 14-17, 2001. 39. Doughty, C. and K. Karasaki, Modeling flow and transport in saturated fractured rock to evaluate site characterization needs, presented at 2002 IAHR International Groundwater Symposium, Berkeley, Calif., March 25-28, 2002. 40. Salve, R., C. Doughty, and J.S.Y. Wang, Measuring and modeling flow in welded tuffs, presented at 2002 IAHR International Groundwater Symposium, Berkeley, Calif., March 25-28, 2002. 41. Myer, L.R., S.M. Benson, C. Byrer, D. Cole, C. Doughty, W. Gunter, G.M. Hoversten, S. Hovorka, J.W. Johnson, K. Knauss, A. Kovscek, D. Law, M.J. Lippmann, E.L. Majer, B. van der Meer, G. Moline, R.L. Newmark, C.M. Oldenburg, F.M. Orr, Jr., K. Pruess, C.-F. Tsang, The GEO-SEQ project; A status report, presented at GHGT-6 Conference, Kyoto, Japan, September 30 – October 4, 2002. 42. Doughty, C., S.M. Benson, and K. Pruess, Capacity investigation of brine-bearing sands for geologic sequestration of CO2, presented at GHGT-6 Conference, Kyoto, Japan, September 30 – October 4, 43. Doughty, C. and K. Karasaki, Using borehole temperature profiles to constrain regional groundwater flow, presented at AGU Fall Meeting, San Francisco, December, 2002. 44. Doughty, C. and K. Karasaki, Constraining hydrologic models using thermal analysis, presented at Rock Mechanics Symposium, Japan Society of Civil Engineers, Tokyo, Jan. 23-24, 2003. 45. Knox, P.R., C. Doughty, and S.D. Hovorka, Impacts of buoyancy and pressure gradient on field-scale geological sequestration of CO2 in saline aquifers, presented at AAPG Annual Meeting, Salt Lake City, May 11-14, 2003. 46. Doughty, C., K. Pruess, and S.M. Benson, Development of a well-testing program for a CO2 sequestration pilot in a brine formation, presented at Second National Conference on Carbon Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 5-8, 2003. 47. Myer, L.R., S.M. Benson, C. Doughty, S.D. Hovorka, G.M. Hoversten, E.L. Majer, K. Pruess, K. Knauss, T. Phelps, D. Cole, P. Knox, W. Gunter, R. Newmark, D. Vasco, W. Foxall, Monitoring and verification at the Frio pilot test, presented at Second National Conference on Carbon Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 5-8, 2003. 48. Doughty, C. and K. Pruess, Modeling supercritical CO2 injection in heterogeneous porous media, presented at TOUGH Symposium 2003, Lawrence Berkeley National Lab., Berkeley, Calif., May 12-14, 49. Doughty, C. and C.-F. Tsang, Hydrologic characterization of fractured rock using flowing fluid electric conductivity logs, presented at Second International Symposium on Dynamics of Fluids in Fractured Rock, Lawrence Berkeley National Lab., Berkeley, Calif., February 10-12, 2004. 50. Doughty, C. and K. Karasaki, Constraining a fractured-rock groundwater flow model with pressure-transient data from an inadvertent well test, presented at Second International Symposium on Dynamics of Fluids in Fractured Rock, Lawrence Berkeley National Lab., Berkeley, Calif., February 10-12, 2004. 51. Takeuchi, S., M. Shimo, C. Doughty, and C.-F. Tsang, Identification of the water-conducting features and evaluation of hydraulic parameters using fluid electric conductivity logging, presented at Second International Symposium on Dynamics of Fluids in Fractured Rock, Lawrence Berkeley National Lab., Berkeley, Calif., February 10-12, 2004. 52. Holtz, M.H., C. Doughty, J. Yeh, and S.D. Hovorka, Modeling of CO2 saline aquifer sequestration and the effects of residual phase saturation, presented at AAPG Annual Meeting, Dallas, April 18-21, 2004. 53. Doughty, C., K. Pruess, S.M. Benson, B.M. Freifeld, and W.D. Gunter, Hydrological and geochemical monitoring for a CO[2] sequestration pilot in a brine formation, presented at Third National Conference on Carbon Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 3-6, 2004. 54. Myer, L.R., S.M. Benson, D. Cole, T. Daley, C. Doughty, A. Dutton, B. Freifeld, W. Gunter, M. Holtz, S. Hovorka, M. Hoversten, B.M. Kennedy, Y. Kharaka, K. Knauss, P. Knox, E. Majer, T. Phelp, K. Pruess, J. Robinson, Subsurface monitoring and verification at the Frio pilot test, presented at Seventh International Conference on Greenhouse Gas Control Technologies (GHGT-7), IEA Greenhouse Gas R&D Programme, Vancouver, Canada, September 5-9, 2004. 55. Hovorka, S.D., C. Doughty, and M.H. Holtz, Testing efficiency of storage in the subsurface: Frio brine pilot experiment, presented at Seventh International Conference on Greenhouse Gas Control Technologies (GHGT-7), IEA Greenhouse Gas R&D Programme, Vancouver, Canada, September 5-9, 2004. 56. Freifeld, B.M., C. Doughty, R.C. Trautz, S.D. Hovorka, L.R. Myer, and S.M. Benson, The Frio brine pilot CO2 sequestration test – comparison of field data and predicted results, presented at Chapman Conference, The science and technology of carbon sequestration: methods and prospects for verification and assessment of sinks for anthropogenic carbon dioxide, San Diego, CA, January 16-20, 2005. 57. Doughty, C. and L.R. Myer, Bounding calculations on leakage of CO2 in geologic storage, presented at Chapman Conference, The science and technology of carbon sequestration: methods and prospects for verification and assessment of sinks for anthropogenic carbon dioxide, San Diego, CA, January 16-20, 2005. 58. Trautz, R., B. Freifeld, and C. Doughty, Comparison of single and multiphase tracer test results from the Frio CO2 pilot study, Dayton Texas, presented at Fourth National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 2-5, 2005. 59. Doughty, C., K. Pruess, and S.M. Benson, Flow modeling for the Frio brine pilot, presented at Fourth National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 2-5, 2005. 60. Hovorka, S.D., C. Doughty, S. Sakurai, and M.H. Holtz, Frio brine pilot: Field validation of numerical simulation of CO2 storage, invited presentation at AAPG Annual Meeting, Calgary, June 19-22, 61. Doughty, C., Flow modeling for CO2 sequestration: The Frio brine pilot, presented at AGU Fall Meeting, San Francisco, December, 2005. 62. Doughty, C., Site characterization for CO2 geologic storage and vice versa – The Frio brine pilot as a case study, presented at International Symposium on Site Characterization for CO2 Geological Storage, Lawrence Berkeley National Lab., Berkeley, Calif., Berkeley, CA, March 20-22, 2006. 63. Benson, S.M. and C. Doughty, Estimation of field-scale relative permeability using pressure-transient data, presented at International Symposium on Site Characterization for CO2 Geological Storage, Lawrence Berkeley National Lab., Berkeley, Calif., Berkeley, CA, March 20-22, 2006. 64. Doughty, C. and S.M. Benson, Strategies for optimization of pore volume utilization for CO2 storage projects in saline formations, presented at Fifth National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Alexandria, Virginia, May 8-11, 2006. 65. Doughty, C., Modeling geologic storage of carbon dioxide: comparison of non-hysteretic and hysteretic characteristic curves, presented at TOUGH Symposium 2006, Lawrence Berkeley National Lab., Berkeley, Calif., May 15-17, 2006. 66. Freifeld, B., C. Doughty, J. Walker, L. Kryder, K. Gilmore, S. Finsterle, and J. Sampson, Evidence of rapid localized groundwater transport in volcanic tuffs beneath Yucca Mountain, Nevada, Abstract submitted to AGU Fall Meeting, San Francisco, December 11-15, 2006. 67. Pruess, K., C. Doughty, K. Zhang, The Role of dissolution-induced aqueous phase convection in the long-term fate of CO2 stored in saline formations, Invited paper presented at AGU Fall Meeting, San Francisco, December 11-15, 2006. 68. Zhang, K., C. Doughty, Y.-S. Wu, K. Pruess, Efficient parallel simulation of CO2 geologic sequestration in saline aquifers, SPE 106026, presented at 2007 SPE Reservoir Simulation Symposium, Houston, TX, February 26-28, 2007. 69. Freifeld, B. C. Doughty, J. Walker, L. Kryder, K. Gilmore, S. Finsterle, and J. Sampson, Characterization of rapid, localized groundwater transport in the Crater Flat tuffs, Yucca Mountain, Nevada, presented at Devil’s Hole Workshop, Death Valley, CA, May 2-3, 2007. 70. Hovorka, S.D., B.M. Freifeld, T.M. Daley, J. Kane, Y.K. Kharaka, S.M. Benson, T.J. Phelps, G. Pope, C. Doughty, Testing interactions of buoyancy, multiphase flow, and geochemical reactions: preliminary results for the Frio II test, presented at Sixth National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Pittsburgh, PA, May 7-10, 2007. 71. Benson, S., L. Miljkovic, L. Tomutsa, C. Doughty, Relative permeability and capillary pressure controls on CO2 migration and brine displacement—Elucidating fundamental mechanisms by laboratory experiments and simulation, presented at Sixth National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Pittsburgh, PA, May 7-10, 2007. 72. Doughty, C. and C. Oldenburg, Westcarb Phase 3 Modeling, presented at Westcarb Annual Meeting, Seattle, November 26-28, 2007. 73. Oldenburg, C., C. Doughty, M. Reagan, Y. Zhang, Westcarb Modeling Overview, Inter-Partnership Modeling Group Meeting, Salt Lake City, November 9, 2007. 74. Ajo-Franklin, J., C. Doughty, and T.M. Daley, Integration of continuous active-source seismic monitoring and flow modeling for CO[2] sequestration: The Frio II brine pilot, presented at AGU Fall Meeting, San Francisco, December 10-14, 2007. 75. Ajo-Franklin, J., C. Doughty, and T.M. Daley, Integration of Real-Time Seismic Monitoring and Flow Modeling for CO2 Sequestration: The Frio II Brine Pilot, presented at Seventh National Conference on Carbon Capture and Sequestration, National Energy Technology Lab., Pittsburgh, PA, May 5-8, 2008. 76. Oldenburg, C.M. and C. Doughty, Overview of reservoir simulation and risk assessment for WESTCARB’s Kimberlina Phase III pilot, presented at Westcarb Annual Business Meeting, Anchorage, October 1-2, 2008. 77. Daley, T.M., J. Ajo-Franklin, and C. Doughty, Integration of crosswell CASSM (Continuous active source seismic monitoring) and flow modeling for imaging of a CO2 plume in a brine aquifer, SEG Annual Meeting, Las Vegas, November, 2008. 78. Doughty, C., L.R. Myer, and C.M. Oldenburg, Predictions of long-term behavior of a large-volume pilot test for CO[2] geological storage in a saline formation in the Central Valley, California, presented at GHGT-9 Conference, Washington D.C., November 16-20, 2008. 79. Jordan, P. and C. Doughty, Sensitivity of CO[2] migration estimation on reservoir temperature and pressure uncertainty, presented at GHGT-9 Conference, Washington D.C., November 16-20, 2008. 80. Myer, L., T. Surles, C. Oldenburg, C. Doughty, and J. Wagoner, WESTCARB Large Volume CCS Test, presented at GHGT-9 Conference, Washington D.C., November 16-20, 2008. Technical Reports 1. Doughty, C., D.G. McEdwards and C.-F. Tsang, Multiple well variable rate well test analysis of data from the Auburn University thermal energy storage program, Rep. LBL-10194, Lawrence Berkeley Lab., Berkeley, Calif., 1979. 2. Doughty, C., G. Hellstrom, C.-F. Tsang, and J. Claesson, Steady flow model user’s guide, Rep. PUB-3044, Lawrence Berkeley Lab., Berkeley, Calif., 1984. 3. Doughty, C., C.-F. Tsang and I. Javandel, Development of RESSQ: a semianalytical model for two-dimensional contaminant transport in groundwater, in Earth Sciences Division Annual Report 1984, Rep. LBL-18496, Lawrence Berkeley Lab., Berkeley, Calif., 1985. 4. Doughty, C. and K. Pruess, Heat pipe effects in nuclear waste isolation: a review, Rep. LBL-20738, Lawrence Berkeley Lab., Berkeley, Calif., 1985. 5. Doughty, C. and G.S. Bodvarsson, Some design considerations for the proposed Dixie Valley tracer test, Rep. LBL-25971, Lawrence Berkeley Lab., Berkeley, Calif., 1988. 6. Amistoso, A.E., B.G. Aquino, Z.P. Aunzo, O.T. Jordan, F.X.M. Sta. Ana, G.S. Bodvarsson and C. Doughty, Reservoir analysis and numerical modelling of the Palinpinon Geothermal Field, Negros Oriental, Philippines, UN-DTCD Project PHI/86/006, PNOC-EDC Geothermal Division, Manila, Philippines, 1990. 7. Doughty, C., Users guide for SIMSOL, Rep. LBL-28384, Lawrence Berkeley Lab., Berkeley, Calif., 1991. 8. Doughty, C., A. Nir and C.-F. Tsang, Seasonal thermal energy storage in unsaturated soils: model development and field validation, Rep. LBL-29166 Rev., Lawrence Berkeley Lab., Berkeley, Calif., 9. Doughty, C. and A. Thompson, LBL/Industry heterogeneous reservoir performance definition project: review and assessment of Gypsy Pilot-site hydrologic data, Rep. LBID-1987, Lawrence Berkeley Lab., Berkeley, Calif., 1993. 10. Doughty, C., S. Finsterle, C.H. Lai, and J.C.S. Long, Theoretical degassing studies, in Hard Rock Laboratory Project Annual Report, 1993, Lawrence Berkeley Lab., Berkeley, Calif., 1993. 11. Adams, M.C., J.N. Moore, W.R. Benoit, C. Doughty, and G.S. Bodvarsson, Chemical tracer test at the Dixie Valley geothermal field, Nevada, Rep. DOE/EE/12929-H1, Geothermal Reservoir Technology Research Program, U.S. Dept. of Energy, Washington, DC, 1993. 12. Geller, J.T., C. Doughty, J.C.S. Long, and R.J. Glass, Disturbed zone effects: Two-phase flow in regionally water-saturated fractured rock: FY94 Annual Report, Rep. LBL-36848, Lawrence Berkeley Lab., Berkeley, Calif., 1995. 13. Long, J.C.S., C. Doughty, B. Faybishenko, et al., Analog site for fractured rock characterization, Annual Report FY 1995, Rep. LBL-38095, Lawrence Berkeley Lab., Berkeley, Calif., 1995. 14. Doughty, C. and G.S. Bodvarsson, Investigation of conceptual and numerical approaches for evaluating gas, moisture, heat and chemical transport, in Development and calibration of the 3D site-scale unsaturated zone model of Yucca Mountain, Bodvarsson and Bandurraga, Eds., Rep. LBNL-39315, Lawrence Berkeley Lab., Berkeley, Calif., 1996. 15. Doughty, C. and G.S. Bodvarsson, Investigation of conceptual and numerical approaches for evaluating moisture flow and chemical transport, in The site-scale unsaturated zone model of Yucca Mountain, Nevada, for the viability assessment, Bodvarsson, Bandurraga, and Wu, Eds., Rep. LBNL-40376, Lawrence Berkeley Lab., Berkeley, Calif., 1997. 16. Sonnenthal, E.L., J.T. Birkholzer, C. Doughty, T. Xu, J. Hinds, and G.S. Bodvarsson, Post-emplacement site-scale thermohydrology with consideration of drift-scale processes, YMP Level 4 Milestone SPLE2M4, Lawrence Berkeley Lab., Berkeley, Calif., 1997. 17. Najita, J.S. and C. Doughty, Using TRINET for simulating flow and transport in porous media, Rep. LBNL-42158, Lawrence Berkeley National Lab., Berkeley, Calif., 1998. 18. Faybishenko, B., C. Doughty, J.T. Geller, S. Borglin, B. Cox, J.E. Peterson Jr., M. Steiger, K. Williams, T.R. Wood, R.K. Podgorney, T.M. Stoops, S.W. Wheatcraft, M. Dragila, and J.C.S. Long, A chaotic dynamical conceptual model to describe fluid flow and contaminant transport in a fractured vadose zone – Annual Report 1997, Rep. LBNL-41223, Lawrence Berkeley National Lab., Berkeley, Calif., 1998. 19. Faybishenko, B., R. Salve, P. Zawislanski, C. Doughty, K.H. Lee, P. Cook, B. Freifeld, J.S. Jacobsen, B. Sisson, J. Hubbell, and K. Dooley, Ponded infiltration test at the Box Canyon site: Data report and preliminary analysis, Rep. LBNL-40183, Lawrence Berkeley National Lab., Berkeley, Calif., 1998. 20. Benito, P.H., P. Cook, B. Faybishenko, B. Freifeld, and C. Doughty, Analog site for fractured rock characterization: Box Canyon pneumatic connectivity study, preliminary data analysis, Rep. LBNL-42359, Lawrence Berkeley National Lab., Berkeley, CA, 1998. 21. Salve, R., C. Doughty, J.P. Fairley, P.J. Cook, and J.S.Y. Wang, Fracture/matrix test in alcove 6, in Progress report on fracture flow, drift seepage and matrix imbibition tests in the exploratory studies facilities, J.S.Y. Wang et al., YMP Milestone SP33PBM4, Lawrence Berkeley National Lab., Berkeley, Calif., 1998. 22. Doughty, C., J.S. Najita, T.M. Johnson, and D. Sahoo, Hydrogeologic characterization using the iterated function system (IFS) inverse method, in Earth Sciences Division Annual Report 1997, Rep. LBNL-42452, Lawrence Berkeley National Lab., Berkeley, CA, 1998. 23. Geller, J.T., P. K. Seifert, K. T. Nihei, L. R. Myer, C. Doughty, S. Finsterle, J. Najita, C. M. Oldenburg, A. L. James, E. McKone and K. L. Revzan, Characterization and remediation strategies for unconsolidated aquifers, final report to the Air Force Office of Sponsored Research, Lawrence Berkeley National Lab., Berkeley, Calif., October, 1998. 24. Doughty, C., Mathematical modeling of a ponded infiltration test in unsaturated fractured basalt at Box Canyon, Idaho, Rep. LBNL-40630, Lawrence Berkeley National Lab., Berkeley, Calif., 1999. 25. Doughty, C., C.M. Oldenburg, and A.L. James, Site S-7 VOC transport modeling for the vadose zone monitoring system (VZMS), McClellan AFB, 1999 semi-annual report, Rep. LBNL-43526, Lawrence Berkeley National Lab., Berkeley, Calif., 1999. 26. Doughty, C. and K. Karasaki, Using an effective continuum model for flow and transport in fractured rock: The H-12 flow comparison, Rep. LBNL-44966, Lawrence Berkeley National Laboratory, Berkeley, Calif., 1999. 27. Zawislanski, P.T., C.M. Oldenburg, C. Doughty, and B.M. Freifeld, Site S-7 Vadose zone monitoring system: final report for McClellan AFB, Rep. LBNL-44325, Lawrence Berkeley National Lab., Berkeley, Calif., 1999. 28. Doughty, C. and C.-F. Tsang, BORE II - A code to compute dynamic wellbore electrical conductivity logs with multiple inflow/outflow points including the effects of horizontal flow across the well, Rep. LBNL-46833, Lawrence Berkeley National Lab., Berkeley, Calif., 2000. 29. Doughty, C. and K. Karasaki, Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis (2): LBNL effective continuum model using TOUGH2, Rep. LBNL-48151, Lawrence Berkeley National Lab., Berkeley, Calif., 2001. 30. Doughty, C. and C.-F. Tsang, Inflow and outflow signatures in flowing wellbore electrical-conductivity logs, Rep. LBNL-51468, Lawrence Berkeley National Lab., Berkeley, Calif., 2002. 31. Doughty, C. and K. Karasaki, Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis: Steady flow, transient flow, and thermal studies, Rep. LBNL-51894, Lawrence Berkeley National Lab., Berkeley, Calif., 2002. 32. Doughty, C. and M. Uchida, PA calculations for Feature A with third-dimension structure based on tracer test calibration, Rep. IPR-04-33, Swedish Nuclear Fuel and Waste Management Co., Stockholm, January, 2003. 33. Doughty, C., K. Ito, and K. Karasaki, 3. Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis: data-flow analysis, Report to JNC, October, 2003. 34. Doughty, C., and C.-F. Tsang, Flowing FEC logging of Horonobe Well HDB-6 using BORE II, Report to JNC, January, 2004. 35. Benson, S.M., L.R. Myer, J.G. Blencoe, M.D. Cakici, D. Cole, W. Daily, T. Daley, C. Doughty, S. Fisher, W. Foxall, W. Gunter, M. Holtz, J. Horita, G.M. Hoversten, S. Hovorka, K. Jessen, J.W. Johnson, B.M. Kennedy, K.G. Knauss, A. Kovscek, D. Law, M.J. Lippmann, E.L. Majer, B. van der Meer, G. Moline, R.L. Newmark, C.M. Oldenburg, F.M. Orr, Jr., A.V. Palumbo, J.C. Parker, T.J. Phelps, K. Pruess, A. Ramirez, S. Sakurai, C.-F. Tsang, Y. Wang, J. Zhu, The GEO-SEQ project results, Rep. LBNL/Pub-901, Lawrence Berkeley National Lab., Berkeley, Calif., 2004. 36. Benson, S.M., L.R. Myer, C.M. Oldenburg, C.A. Doughty, K. Pruess, J. Lewicki, M. Hoversten, E. Gasperikova, T. Daley, E. Majer, M. Lippmann, C.-F. Tsang, K. Knauss, J. Johnson, W. Foxall, A. Ramirez, R. Newmark, D. Cole, T.J. Phelps, J. Parker, A. Palumbo, J. Horita, S. Fisher, G. Moline, L. Orr, T. Kovscek, K. Jessen, Y. Wang, J. Zhu, M. Cakici, S. Hovorka, M. Holtz, S. Sakurai, B. Gunter, D. Law, and B. van der Meer, GEO-SEQ best practices manual. Geologic Carbon Dioxide Sequestration: Site Evaluation to Implementation. Rep. LBNL-56623, Lawrence Berkeley National Lab., Berkeley, Calif., 2004. 37. Doughty, C., K. Karasaki, and K. Ito, 9x9 model of Tono Area, 2004 Project Report, Report to JNC, June, 2004. 38. Doughty, C., K. Karasaki, and K. Ito, Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis: 9x9 km dual-porosity model of the Tono site, 2005 Project Report, Report to JNC, August, 2005. 39. Doughty, C., K. Karasaki, and K. Ito, Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis: Progress Report on Complete 9x9 km Model of the Tono Site, model from a subset of wells, and strategy for characterizing a new site, Report to JAEA, May, 2006. 40. Freifeld, B.M., C. Doughty, and S. Finsterle, Preliminary estimates of specific discharge and transport velocities near Borehole NC-EWDP-24PB, Rep. LBNL-60740, Lawrence Berkeley National Lab., Berkeley, Calif., 2006. 41. Doughty, C. and K. Karasaki, Evaluation of uncertainties due to hydrogeological modeling and groundwater flow analysis: Strategy for characterizing a new site, in Karasaki, K., J. Apps, C. Doughty, H. Gwatney, C. Tiemi Onishi, R. Trautz, and C.-F. Tsang, Feature Detection, Characterization and Confirmation Methodology: Final Report, NUMO-LBNL Collaborative Research Project Report to JAEA, March, 2007. 42. Tsang, C.-F. and C. Doughty, Some Insights from Simulations of SWIW Tests on a Complex Fracture, Rep. LBNL-63564, Lawrence Berkeley National Lab., Berkeley, Calif., 2007 (also available as Rep. SKI-INSITE TRD-07-06, Swedish Nuclear Power Inspectorate, Stockholm, Sweden, 2007). 43. Daley, T. M., B.M. Freifeld, J.B. Ajo-Franklin, C. Doughty, S.M. Benson, Frio II Brine Pilot: Report on GEOSEQ Activities. LBNL-63613. Lawrence Berkeley National Laboratory, Berkeley, CA. 2007.
{"url":"http://esd.lbl.gov/about/staff/christinedoughty/publications.html","timestamp":"2014-04-18T19:22:28Z","content_type":null,"content_length":"60707","record_id":"<urn:uuid:5b856334-90c9-47c0-b074-71cdda735033>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Cryptology The study of cryptography and cryptanalysis scientifically is known as Cryptology. It is math. For instance, number theory and algorithm that support cryptography and cryptanalysis. We will focus here on some of key math concepts at the back of the cryptography. To secure the data for storage or transmission, it must be transformed in such a method that is for any unauthorized person would be hard to determine its true meaning. To do the said, certain math equation are applied. The difficulty level for solving the given equation is recognized as its intractability. The basis of cryptography is formed by these types of equations. Most of the important Discrete Logarithm problem The most excellent way to explain this problem is to show its inverse concept mechanism first. Suppose a prime number P (a number which is not dividable apart from 1 and itself, P). Over 300 digits this P is a large prime number. Let us now suppose that we have two other integers more, a and b. Now we need to find out the value of N, the value can be found by the following formula: N = a^b mod P, when 0 <= N <= (P · 1) This is recognized as discrete exponentiation and is very simple to calculate. Though, the reverse is true when we reverse it. If it is given P, a and N and is required to locate b so that the equation is legal, then we confront an incredible level of struggle. This problem figures the basis for several public key infrastructure algorithms, like Diffie-Hellman and EIGamal. For many years this puzzle has been studied and is the base of cryptography. It has survived many forms of attack. The integer Factorization problem This is a very simple idea. If one has two prime numbers P1 and P2, both are “large”. And then we multiply these two primes to generate the product, N. The difficulty occurs when, being given N, we strive and locate the original P1 and P2. To a great degree simplification of this matter, the product N is the public key and the P1 and P2 both numbers are together the private key. In all mathematical concepts this puzzle is one of the main basic. It has deeply been studied for the previous 20 years and the agreement appears to be that some of the mathematics laws are not proven or not discovered that forbids any single shortcuts. It is said, the simple fact that it has being studied and very much direct others to worry if one way or another breakthrough may be The Elliptic Curve Discrete Logarithm Problem A new cryptographic procedure based upon a logically well-known mathematical puzzle. For centuries the properties of elliptic curves have been familiar, but only recently their application to the field of cryptography has been taken on. Imagine a giant piece of paper first, on that a series of vertical and horizontal lines are printed. With the vertical lines every line symbolizes an integer forming X class element. We get a pair of coordinates (x,y) with intersection of horizontal and vertical lines. The elliptic curve is defined by the below highly simplified equation example: y^2 + y = x^3 · x^2 (to use in a real life application this is very small, but it will exemplify the general idea. On the affirmative side, the puzzle comes to be quite difficult, having need of a shorter key length for equivalent security levels as measure up to the Integer Factorization Problem and the Discrete Logarithm Problem. On the depressing side, critics challenged that this problem, since it has recently begun to put into practice in cryptography, has not had the strong study of many years that is necessary to give it a satisfactory level of trust as being secure. The cryptography software is generally known as Encryption Software. Data Protection How Can Cryptography Technology Can Protect You from a DPA Attack DPA stands for differential power analysis. This is a type of attack on cryptographically secured devices to access sensitive information. … [read more...] What is the Data Encryption Standard (DES) The Data encryption standard (DES) as clear from its very name is a standard that needs to be followed for … [read more...]
{"url":"http://cryptographicsoftware.com/index.php/cryptography/basic-cryptology/","timestamp":"2014-04-19T04:33:46Z","content_type":null,"content_length":"23467","record_id":"<urn:uuid:ef728989-6c58-45ce-82f2-23db5e444d23>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
November 6th 2005, 03:06 PM Payments of $2600, due 50 days ago, and $3100, due in 40 days, are to be replaced by $3000 today and another payment in 30 days. What must the second payment be if the payee is to end up in an equivalent financial position? Money now earns 8.25%. Use 30 days from now as the focal date. The answer is $2719.68 I really don't know what to do here.... June 18th 2010, 02:43 PM Solved in only 5 years :D Lets find the value of the first set of payments, 30 days from now. V = 2600(1+i)^(80/365) + 3100(1+i)^(-10/365) We want him in the same position onder the alternate payments 3000(1+i)^(30/365) + P = V Gives P = 2719.23, which i assume is rounding error, or due to the question writer assuming there are not exactly 365 days in a year.
{"url":"http://mathhelpforum.com/business-math/1231-interest-print.html","timestamp":"2014-04-20T07:05:35Z","content_type":null,"content_length":"3966","record_id":"<urn:uuid:eb9437b2-29e7-4ca2-8370-d53af4cdaa76>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - CKM matrix parameters I understand that a 3x3 unitary matrix needs 9 real parameters to be specified (18 real parameters to start with then 9 equations of constraint arising from unitarity), but what I'm struggling to understand is how we can make phase changes of the form: [tex] \mathrm{e}^{-i\beta_I} V_{IJ} \mathrm{e}^{\alpha_J} [/tex] to make the first row and column of [itex]V_{IJ} [/itex] real leaving us with only 9-5=4 independent parameters [itex]\theta_1,\theta_2,\theta_3, \delta [/itex], is there an easy way to see this can be done?
{"url":"http://www.physicsforums.com/showpost.php?p=3809339&postcount=1","timestamp":"2014-04-21T07:14:32Z","content_type":null,"content_length":"8923","record_id":"<urn:uuid:587a8aa0-adbb-46d2-b7c6-08e717d728c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Arithmetic operators perform operations of change-sign (negate), do-no-change-sign, logical AND logical OR, add, subtract, multiply and divide. Note that a value or an expression may fall between two of these operators, either of which could take it as its left or right argument, as in In such cases three rules apply: 1. * and / bind to their neighbors more strongly than + and −. Thus the above expression is taken as with * taking b and c and then + taking a and b * c. 2. + and − bind more strongly than &&, which in turn is stronger than ||: is taken as 3. When both operators bind equally strongly, the operations are done left to right: is taken as Parentheses may be used as above to force particular groupings. +a (no rate restriction) a + b (no rate restriction) where the arguments a and b may be further expressions. The arguments of + can be scalar values or k-rate one dimensional arrays (vectors), or any combination. If one of the arguments is an array, so is the value. Here is an example of the + operator. It uses the file adds.csd. Example 27. Example of the + operator. See the sections Real-time Audio and Command Line Flags for more information on using command line flags. ; Select audio/midi flags here according to platform -odac ;;;RT audio out ;-iadc ;;;uncomment -iadc if RT audio input is needed too ; For Non-realtime ouput leave only the line below: ; -o adds.wav -W ;;; for file output any platform sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 ; add unipolar square to oscil kamp = p4 kcps = 1 itype = 3 klfo lfo kamp, kcps, itype printk2 klfo asig oscil 0.7, 440+klfo, 1 outs asig, asig ;sine wave. f 1 0 32768 10 1 i 1 0 2 1 ;adds 1 Hz to frequency i 1 + 2 10 ;adds 10 Hz to frequency i 1 + 2 220 ;adds 220 Hz to frequency
{"url":"http://csounds.com/manual/html/adds.html","timestamp":"2014-04-17T13:10:44Z","content_type":null,"content_length":"9175","record_id":"<urn:uuid:e0ce17c9-a781-403d-9104-8966216672dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Category Theoretic Modal Logic Posted by David Corfield I’ve mentioned before that it hasn’t been easy selling category theory to philosophers. At first glance this may be a little surprising when you consider what effects changes to the foundational language of mathematics circa 1900 had on philosophy over subsequent years via the influence of Bertrand Russell. Perhaps one of the most important factors was that predicate logic could be applied straightforwardly to natural language in a way that appeared to resolve certain metaphysical questions. Just look at the baldness of the present King of France. Now something which is dearly beloved of analytic philosophers of the very purest kind is modal logic. So were category theory to become an indispensable tool in modal logic, we might be onto a winner. This has partially motivated a couple of posts of mine (here and here). Let me see if I can sum up the confused state of my mind at present. Even if you don’t want to think about modal logic, I’ve put in a couple of questions which I’d love to hear answers to. First, we have the coalgebraic approach to modal logic, where we build on the duality between Stone spaces and Boolean algebras. As described in Stone coalgebras we can derive an endofunctor on the category of Boolean algebras, $BA$, from a modal operator, algebras for which are modal algebras (think Lindenbaum algebra of a propositional logic having the necessary operator $\Box$ applied). On the other side of the duality, we have the Vietoris functor on Stone spaces, and coalgebras for this functor are descriptive general frames. So we have modal algebras and descriptive general frames in duality. Kripke frames are a particular case of these general frames. Now for the first-order case, we might use hyperdoctrines. In one form of these, typed classical first-order hyperdoctrines, we have a functor from $B^{op}$ to $BA$ with certain properties, where $B$ is a category with finite products corresponding to the types of the theory. The extension to modal logic should then involve a modal hyperdoctrine, a functor $B^{op}$ to $MA$, the category of modal algebras. Such a modal hyperdoctrine is described in Logically Possible Worlds and Counterpart Semantics for Modal Logic in the special case where $B^{op}$ is the category of natural numbers and functions. It also crops up in the paper First-order modal logic, but I don’t have access to that at the moment. Over on the semantics side, Dion Coumans’ slides describe duals for Boolean hyperdoctrines as indexed Stone Spaces, certain functors from $B$ to $Stone \cong BA^{op}$. One might imagine then that in the passage to modal semantics we would have functors $H: B \to DGF \cong MA^{op}$, which we might call indexed descriptive general frames. Now, there exists something like this already in the literature in the particular case where $B$ is the opposite of natural numbers with functions. These are called metaframes. In this case where $B^{op}$ is the natural numbers with mappings, on the syntax side, sitting above $n$ there is the Boolean or modal algebra of logically equivalent formulas whose free variables are contained in the set $\{x_1, ..., x_n\}$. Then on the dual semantic side, above $0$ we have a Stone space of worlds, $H(0)$, and above $1$ a space of individuals, $H(1)$. The map $1 \to 0$ gives a fibring of individuals onto worlds. $H(n)$ are similarly fibred over worlds. Particular attention has been paid to cases where a fibre of $H(n)$ is the $n$-fold product of the fibre of $H(1)$. These metaframes are called cartesian, and it is known, according to Kracht and Kutz, that all modal predicate logics are complete with respect to cartesian metaframes. Using different categories for $B$ would allow for typed modal logics. The frame aspect of first-order modal logic captures ‘counterpart’ relations between individuals. This is a notion due to the philosopher David Lewis, who took possible world talk very seriously. If ‘I might have had eggs for breakfast’ is true in this world, it is made true by the existence of a close world in which my counterpart did have eggs for breakfast. The sorts of complexity one would need to capture include an individual in this world having no counterparts or many of them in an accessible world. With some homotopic nontriviality thrown in, one could imagine a loop in the space of possible worlds along which taking the counterpart relation permutes individuals. In this ‘cartesian’ case where $H(1)$ determines $H(n)$, all of the information is presumably encoded in the ‘bundle’ $H(1) \to H(0)$, where the image of the counterpart relation for individuals is the accessibility relation for worlds. This would explain interest in Kripke sheaves. According to the Kracht and Kutz paper (p. 26), Hyperdoctrinal Semantics and General Metaframes for modal logic are treated in Hiroyuki Shirasu, Duality in Superintuitionistic and Modal Predicate Logics, Advances in Modal Logic, volume 1 (Marcus Kracht et. al., editor), CSLI Publications, Stanford, 1998, pp. 223-236. Again I don’t have access to that. Now, what about models? Qu. 1: Do you typically characterise the models of hyperdoctrines via maps to a special hyperdoctrine, e.g., in the case of Boolean ones to the hyperdoctrine $P: Set^{op} \to BA$, where $P$ is power set? You would think that you could also address models for first-order modal logic via the syntactic category route. Mike gave me a hint as to the relation between hyperdoctrines and syntactic categories here. But I’d like to know the answer to Qu. 2: Is there a systematic way of translating between hyperdoctrines and syntactic categories for classical first-order logic? As far as classical first-order theories go, syntactic categories are Boolean coherent. So, then, Awodey and Forssell’s duality between the category of ($\kappa$-small) Boolean pretoposes and Stone topological groupoids might possibly provide the equivalent of Stone duality for the coalgebraic approach. Somehow I feel all this ought to tie in with the other approach to the semantics of first-order modal logic, I have mentioned in earlier posts, Awodey and Kishida’s sheaf semantics. Here the modal logic is S4. Were there to be a connection to the coalgebraic approach above, we might need a notion of interior hyperdoctrine, $B^{op}$ to $IA$, the category of interior algebras. Anyway, Awodey and Kishida explain how what they are doing can be thought to be extending from Kripke sheaves to sheaves over general topological spaces, rather than merely those generated using the Alexandroff topology from a preorder. Noticing that they are dealing with a sort of interior endofunctor on $Set^X$, for some set $X$, it struck me that there was a strong resemblance to Richard Garner’s paper on ionads, with his comonads on $Set^X$, although his paper doesn’t mention modal logic. [EDIT: See below.] Interestingly, one of Garner’s examples of an ionad ((5) from p. 8) deals with sets (of isomorphism classes) of models for a coherent theory. Hmm, I see how muddled is my mind on all this, so Qu 3: Am I glimpsing anything worthwhile here? Posted at April 4, 2011 4:44 PM UTC Re: Category Theoretic Modal Logic I feel like the answer to questions 1 and 2 ought to be that there is an adjunction $Sem : \text{hyperdoctrines} \;\rightleftarrows\; \text{categories} : Sub$ in which the right adjoint $Sub$ sends a category to its hyperdoctrine of subobjects, and is probably fully faithful. Then a model of a hyperdoctrine $D$ in a category $C$ could equivalently be defined as a hyperdoctrine morphism $D \to Sub(C)$ (the powerset hyperdoctrine $Set^{op}\to BA$ being $Sub(Set)$), or as a functor $Sem(D) \to C$; so $Sem(D)$ is the “universal category containing a $D$-model”. And if one starts with a more syntactically presented theory, one ought to be able to construct its syntactic category by first constructing its syntactic hyperdoctrine, then applying the left adjoint $Sem$. I don’t recall seeing anything quite like this written down anywhere, though. But I’m not that well up on the literature of hyperdoctrines, so it might be well known, or I suppose it might be false. I think I know one way to define the functor $Sem$, though: first construct an allegory from the hyperdoctrine, then split its comonads (= coreflexives). Posted by: Mike Shulman on April 5, 2011 6:11 AM | Permalink | PGP Sig | Reply to this Re: Category Theoretic Modal Logic Thanks, Mike. Am I right to think hyperdoctrines fell out of favour relative to the syntactic category approach? I see these slides – Introduction to categorical logic – by Tom Hirschowitz mentions (p. 4) the triangle, hyperdoctrines/categories/allegories, treats the first two, and describes the hyperdoctrine approach as a “naive idea” (p. 6). Hmm, I see on p. 86 an adjunction between the category of hyperdoctrines and that of first-order theories. I’ve no time right now to check how the latter is constructed. Posted by: David Corfield on April 5, 2011 9:39 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic I wouldn’t presume to say for sure, but it does also seem to me that hyperdoctrines are sometimes ignored. The Elephant doesn’t mention them at all, for instance. It took me a while to find out about Tom’s slides construct the adjunction $\text{syntactic theories} \;\rightleftarrows\; \text{hyperdoctrines}$ which fits on the left of the one I was proposing; the construction is, I think, fairly straightforward. And I just noticed that in appendix B of Categories, Allegories, the functor $\text{syntactic theories} \to \text{allegories}$ is constructed, which I am saying ought to factor through hyperdoctrines. Posted by: Mike Shulman on April 5, 2011 4:49 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic A correspondent pointed out to me that hyperdoctrines have been subsumed by fibred categories. Is there a general statement: hyperdoctrines are fibred categories satisfying such and such properties? And where do indexed categories fit in? Posted by: David Corfield on April 6, 2011 10:41 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic That seems to me like an odd thing to say. Fibered categories are an equivalent way of describing indexed categories; which one you use is a matter of preference. A hyperdoctrine is a (fibered category or indexed category) with various additional properties; the notion of hyperdoctrine doesn’t seem to me to depend on whether you pick “fibered” or “indexed” to start with. Posted by: Mike Shulman on April 6, 2011 4:46 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic That seems to me like an odd thing to say. I’m not quite sure what is odd. My correspondent didn’t mention indexed categories. I only brought them up because I wanted to hear what their relationship is to fibred categories. So now I know they’re equivalent. Good, one less thing to worry about. Posted by: David Corfield on April 6, 2011 7:12 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic So now I know they’re equivalent. …in the presence of Choice. :) Or using any of the standard methods to get around not having Choice, like profunctors/distributors, anafunctors. Posted by: David Roberts on April 6, 2011 11:12 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic I just meant, since hyperdoctrines are a specific sort of fibered category, I don’t see how they could be “subsumed” by them. I guess maybe your correspondent meant that nowadays people prefer to say “fibered category satisfying X, Y, and Z” rather than “hyperdoctrine”, which is certainly reasonable. Posted by: Mike Shulman on April 7, 2011 1:38 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic Thanks to David for reminding me of this thread here. Meanwhile I had been collecting relevant material in an $n$Lab entry titled relation between type theory and category theory. My impression is that the most comprehensive results for traditional theory here are due to Seely, who in two articles in 1987 establishes both an equivalence (not just an adjunction) $FirstOrderTheories \simeq Hyperdoctrines$ as well as $DependentTypeTheories \simeq LocallyCartesianClosedCategories \,.$ Posted by: Urs Schreiber on May 16, 2012 12:18 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Seely’s result apparently doesn’t deal sufficiently carefully with the coherence question: substitution in type theory is strict, but pullback in a category is not. This was rectified by Hoffmann, but perhaps the full equivalence was only proven in this recent paper. Posted by: Mike Shulman on May 16, 2012 4:38 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Thanks, yes, we overlapped with posting this. I have now tried to work it all into the entry. But even given this flaw, now fixed, I am a bit puzzled by the status of the literature on this point. For instance that, as David pointed out above, Tom Hirschowitz states fairly recently here an adjunction where more than two decades ago Seely claimed already an equivalence. This discrepancy cannot be due to the subtlety of categorical interpreation of substitution, unless I am missing something. What is going on here? Posted by: Urs Schreiber on May 16, 2012 5:43 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Concerning Q2 and Mike Shulman’s comment, I think what he denotes by “Sem: hyperdoctrines -> categories” is the tripos-to-topos construction, originally due to Hyland-Johnstone-Pitts. The adjunction between triposes (hyperdoctrines with higher-order structures) and toposes is worked out in Frey’s recent preprint “A 2-Categorical Analysis of the Tripos-to-Topos Construction” in The construction works for first-order hyperdoctrines as well, and there is an adjunction between first-order hyperdoctrines and Heyting categories. But I do not know if it is worked out in the NB: these adjunctions do not form equivalences, since different hyperdoctrines can give rise to equivalent toposes (or Heyting categories). Sorry if I misunderstand anything. Posted by: Yoshihiro Maruyama on May 22, 2013 10:05 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Posted by: Mike Shulman on May 23, 2013 5:51 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic Interesting! So across the relevant ‘Stone’ duality, is there an analogous duality between some kind of indexed spaces and groupoids of models, the left hand vertical line here? Posted by: David Corfield on May 24, 2013 9:11 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic The people gathering for Topology, Algebra, and Categories in Logic in Marseilles this July ought to be able to answer Question 3, especially if the program committee attends. Posted by: David Corfield on April 5, 2011 9:42 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic Surely you must need some additional structure/conditions/something to define modal hyperdoctrines beyond just letting them be arbitrary functors $B^\mathrm{op} \to \mathit{MA}$? If we think of the category $B$ as a category of contexts and substitutions, then $B$’s functoriality ensures that formulas’ truth-values are well-behaved with respect to substitution, since substitutions induce modal algebra homomorphisms. However, it seems quite common for the domain of quantification to depend on the world. For example, we may wish the domain of the quantifier “for all dogs d” to range over the set of dogs in each world. It’s not immediately obvious to me how to adapt hyperdoctrines to this case. Is this not interesting, or simply not part of the definition of first-order modal logic you are using? Posted by: Neel Krishnaswami on April 5, 2011 4:15 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic I certainly didn’t mean general functors from $B^{op}$ to $MA$. I took Kracht and Kurz’s word that the case was similar to Boolean hyperdoctrines: A modal hyperdoctrine is a covariant functor $H$ from $\Sigma$ [natural numbers and mappings between them] into the category $MA$ of modal algebras. $H(n)$ may be thought of as the algebra of meanings of formulae containing $n$ free variables. To be well–defined, $H$ must satisfy among other the so–called Beck-Chevalley-condition, which ensures that cylindrification has the same meaning on all levels. (p. 43) Are you saying more conditions should apply? Going over to the semantics side (with some sort of spec operation here?), there’s certainly debate as to quantification in a frame. Some want you to be able to quantify from a world over all individuals of all worlds, i.e., all possible individuals. Others want you to be able to quantify only over individuals in one world. Yet others have objects as traces (‘bundle sections’) across all worlds. So, yes, I ought to work out if one form of quantification works better with modal hyperdoctrines. Posted by: David Corfield on April 5, 2011 5:21 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Yes, I think you need more than the Beck-Chevalley conditions (which I forgot to mention). Beck-Chevalley corresponds to the fact substitution goes through the quantifiers – i.e., that $[t/x](\forall y.\;\phi) = \forall y.\;([t/x]\phi)$ (and similarly for existentials). The case I was thinking of was that in Kripke semantics, the forcing condition for quantification is often written as: $w \models_\gamma \forall x.\;\phi[x] \iff \forall v \in I(w).\; w \models_{(\gamma,x \mapsto v)} \phi[x]$ Here $w$ is a world, $\gamma$ is a context assigning individuals to variables, and $I : \mathrm{World} \to \mathrm{Set}$ is a function that tells you what the individuals available in $w$ are. So this condition states that $\forall x.\;\phi$ holds in $w$ when $\phi$ holds for every $v \in I(w)$. It’s the fact that $I$ is world-dependent that makes me concerned about this, since a hyperdoctrine uses products in $B$ to model the environment, but the forcing condition gives you a product of a whole bunch of different collections of individuals. I’m sure that there must be some nice categorical description of this, but I don’t know what it should be offhand. Some googling turns up Hilken and Rydeheard’s “A First Order Modal Logic and its Sheaf Models”, whose abstract (and a quick skim) seems to suggest it is concerned with this problem: Abstract: We present a new way of formulating rst order modal logic which circumvents the usual difficulties associated with variables changing their reference on moving between states. This formulation allows a very general notion of model (sheaf models). The key idea is the introduction of syntax for describing relations between individuals in related states. This adds an extra degree of expressiveness to the logic, and also appears to provide a means of describing the dynamic behaviour of computational systems in a way somewhat dierent from traditional program logics. I don’t know what its connection to Awodey and Kishida is though. Posted by: Neel Krishnaswami on April 6, 2011 9:11 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic “I don’t know what its connection to Awodey and Kishida is though.” the former (from 1999) is superseded by the latter (from 2008): we clean up and generalize the sheaf semantics given there, and prove a strengthened version of the completeness theorem wished for in that paper. Posted by: Steve Awodey on April 6, 2011 6:38 PM | Permalink | Reply to this Re: Category Theoretic Modal Logic Hi David, I haven’t read the Awodey and Kishida paper you’ve mentioned. yet. I just ‘discovered’ this thread. but there are at least two other approaches to categorical modal logic that I like ‘better’. One is mine with Gavin Bierman, for S4, you can read it in “On an Intuitionistic Modal Logic”, Studia Logica (65):383-416, 2000 and follow-up work like “Categorical and Kripke Semantics for Constructive S4 Modal Logic” (with Natasha Alechina and and Michael Mendler and Eike Ritter). In Proc. of Computer Science Logic (CSL’01), LNCS 2142, ed L. Fribourg. 2001.. (ok, I guess I’m a little biased…) but joking apart, even if you don’t buy into the whole idea of extending the categorical Curry-Howard isomorphism to modal types (which I and lots of other people do) there is also the work of Claudio Hermida, A categorical outlook on relational modalities and simulations http://maggie.cs.queensu.ca/chermida/papers/sat-sim-IandC.pdf that you should/could check up. Very best, Valeria de Paiva Posted by: Valeria de Paiva on October 12, 2011 7:05 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic “This paper connects coalgebra with a long discussion in the foundations of game theory on the modeling of type spaces. We argue that type spaces are coalgebras, that universal type spaces are final coalgebras, and that the modal logics already proposed in the economic theory literature are closely related to those in recent work in coalgebraic modal logic. In the other direction, the categories of interest in this work are usually measurable spaces or compact (Hausdorff) topological spaces. A coalgebraic version of the construction of the universal type space due to Heifetz and Samet [Journal of Economic Theory 82 (2) (1998) 324–341] is generalized for some functors in those categories.” I think in order to be foundational (more truly descriptive of reality in a broader sense) it has to be this way. Posted by: Stephen Harris on October 17, 2011 8:27 AM | Permalink | Reply to this Re: Category Theoretic Modal Logic I wrote in the post that Richard Garner’s paper on ionads doesn’t mention modal logic. This was true of the version available on the ArXiv at the time (v2, 8 Dec 2009), however, looking at the latest version (v3, 15 Oct 2011) of the paper Ionads, we read These, then, are two quite general grounds for preferring toposes over ionads; yet there remain good reasons for having the notion of ionad available to us. The first is that some particular applications of topos theory may be more perspicuously expressed in the language of ionads than of toposes: two examples that come to mind are the sheaf-theoretic semantics for first-order modal logic given in [1], and the generalised Stone duality of [5]. [1] is the Awodey and Kishida paper, and [5] Forssell’s PhD thesis, partially written up in the Awodey and Forssell paper, referred to in the post. Posted by: David Corfield on March 10, 2013 3:27 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2011/04/category_theoretic_modal_logic.html","timestamp":"2014-04-19T22:14:47Z","content_type":null,"content_length":"73978","record_id":"<urn:uuid:f4f620ef-ce71-41ac-bcb0-3bdc00a75f4c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Nth term for diagonals/sides April 29th 2010, 12:10 PM #1 Nth term for diagonals/sides Find the nth term formula for the number of sides in a shape against how many diagonals this shape has. I have the answer but is there I need workings because I do not know how to approach this. Suppose a polygon has n vertices (and sides). The number of diagonals from a single vertex is 3 less the the number of vertices or sides, or (n-3). So n-3 diagonals can be drawn from each vertex. But each diagonal has two ends, so we would be counting each one twice. Dividing by two gives the actual number of diagonals. Number of diagonals = $\frac{n(n-3)}{2}$ On $n$ vertices there are $\binom{n}{2}-n$ diagonals. April 29th 2010, 12:38 PM #2 A riddle wrapped in an enigma Jan 2008 Big Stone Gap, Virginia April 29th 2010, 12:38 PM #3
{"url":"http://mathhelpforum.com/algebra/142161-nth-term-diagonals-sides.html","timestamp":"2014-04-20T18:00:56Z","content_type":null,"content_length":"38625","record_id":"<urn:uuid:a97a75f4-8816-49cd-a283-dd7a684208c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
A molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10^12 to approximately 10^14 Hz. In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.^[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds. A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential. The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules. Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas Simultaneous excitation of a vibration and rotations gives rise to vibration-rotation spectra. Vibrational coordinates[edit] The coordinate of a normal vibration is a combination of changes in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration. Internal coordinates[edit] Internal coordinates are of the following types, illustrated with reference to the planar molecule ethylene, • Stretching: a change in the length of a bond, such as C-H or C-C • Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group • Rocking: a change in angle between a group of atoms, such as a methylene group and the rest of the molecule. • Wagging: a change in angle between the plane of a group of atoms, such as a methylene group and a plane through the rest of the molecule, • Twisting: a change in the angle between the planes of two groups of atoms, such as a change in the angle between the two methylene groups. • Out-of-plane: a change in the angle between any one of the C-H bonds and the plane defined by the remaining atoms of the ethylene molecule. Another example is in BF[3] when the boron atom moves in and out of the plane of the three fluorine atoms. In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane. In ethene there are 12 internal coordinates: 4 C-H stretching, 1 C-C stretching, 2 H-C-H bending, 2 CH[2] rocking, 2 CH[2] wagging, 1 twisting. Note that the H-C-C angles cannot be used as internal coordinates as the angles at each carbon atom cannot all increase at the same time. Vibrations of a Methylene group (-CH[2]-) in a molecule for illustration[edit] The atoms in a CH[2] group, commonly found in organic compounds, can vibrate in six different ways: symmetric and asymmetric stretching, scissoring, rocking, wagging and twisting as shown here: Symmetrical Asymmetrical Scissoring (Bending) stretching stretching Rocking Wagging Twisting (These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H Symmetry-adapted coordinates[edit] Symmetry-adapted coordinates may be created by applying a projection operator to a set of internal coordinates.^[2] The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four(un-normalised) C-H stretching coordinates of the molecule ethene are given by $Q_{s1} = q_{1} + q_{2} + q_{3} + q_{4}\!$ $Q_{s2} = q_{1} + q_{2} - q_{3} - q_{4}\!$ $Q_{s3} = q_{1} - q_{2} + q_{3} - q_{4}\!$ $Q_{s4} = q_{1} - q_{2} - q_{3} + q_{4}\!$ where $q_{1} - q_{4}$ are the internal coordinates for stretching of each of the four C-H bonds. Illustrations of symmetry-adapted coordinates for most small molecules can be found in Nakamoto.^[3] Normal coordinates[edit] The normal coordinates, denoted as Q, refer to the positions of atoms away from their equilibrium positions, with respect to a normal mode of vibration. Each normal mode is assigned a single normal coordinate, and so the normal coordinate refers to the "progress" along that normal mode at any given time. Formally, normal modes are determined by solving a secular determinant, and then the normal coordinates (over the normal modes) can be expressed as a summation over the cartesian coordinates (over the atom positions). The advantage of working in normal modes is that they diagonalize the matrix governing the molecular vibrations, so each normal mode is an independent molecular vibration, associated with its own spectrum of quantum mechanical states. If the molecule possesses symmetries, it will belong to a point group, and the normal modes will "transform as" an irreducible representation under that group. The normal modes can then be qualitatively determined by applying group theory and projecting the irreducible representation onto the cartesian coordinates. For example, when this treatment is applied to CO[2], it is found that the C=O stretches are not independent, but rather there is an O=C=O symmetric stretch and an O=C=O asymmetric stretch. • symmetric stretching: the sum of the two C-O stretching coordinates; the two C-O bond lengths change by the same amount and the carbon atom is stationary. Q = q[1] + q[2] • asymmetric stretching: the difference of the two C-O stretching coordinates; one C-O bond length increases while the other decreases. Q = q[1] - q[2] When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined a priori. For example, in the linear molecule hydrogen cyanide, HCN, The two stretching vibrations are 1. principally C-H stretching with a little C-N stretching; Q[1] = q[1] + a q[2] (a << 1) 2. principally C-N stretching with a little C-H stretching; Q[2] = b q[1] + q[2] (b << 1) The coefficients a and b are found by performing a full normal coordinate analysis by means of the Wilson GF method.^[4] Newtonian mechanics[edit] Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension. The proportionality constant is known as a force constant, k. The anharmonic oscillator is considered elsewhere.^[5] $\mathrm{Force}=- k Q \!$ By Newton’s second law of motion this force is also equal to a reduced mass, μ, times acceleration. $\mathrm{Force} = \mu \frac{d^2Q}{dt^2}$ Since this is one and the same force the ordinary differential equation follows. $\mu \frac{d^2Q}{dt^2} + k Q = 0$ The solution to this equation of simple harmonic motion is $Q(t) = A \cos (2 \pi u t) ;\ \ u = {1\over {2 \pi}} \sqrt{k \over \mu}. \!$ A is the maximum amplitude of the vibration coordinate Q. It remains to define the reduced mass, μ. In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, m[A] and m[B], as $\frac{1}{\mu} = \frac{1}{m_A}+\frac{1}{m_B}.$ The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy. $k=\frac{\partial ^2V}{\partial Q^2}$ When two or more normal vibrations have the same symmetry a full normal coordinate analysis must be performed (see GF method). The vibration frequencies,ν[i] are obtained from the eigenvalues,λ[i], of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule.^[4] F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found in.^[6] Quantum mechanics[edit] In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by $E_n = h \left( n + {1 \over 2 } \right)u=h\left( n + {1 \over 2 } \right) {1\over {2 \pi}} \sqrt{k \over m} \!$, where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.^[7]^[8] The difference in energy when n (or v) changes by 1 is therefore equal to $hu$, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency $u$ (in the harmonic oscillator approximation). See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one, $\Delta n = \pm 1$ but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band. In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.^[9] The intensity of Raman bands depends on polarizability. See also[edit] Further reading[edit] • Sherwood, P. M. A. (1972). Vibrational Spectroscopy of Solids. Cambridge University Press. ISBN 0521084822. External links[edit]
{"url":"http://www.digplanet.com/wiki/Molecular_vibration","timestamp":"2014-04-21T13:09:34Z","content_type":null,"content_length":"69835","record_id":"<urn:uuid:9db2767d-d898-41f3-b7ba-9001cf33ad4f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Homotopy theory Paths and cylinders Homotopy groups $(\infty,1)$-Category theory Basic concepts Universal constructions Local presentation Higher category theory Basic concepts Basic theorems Universal constructions Extra properties and structure 1-categorical presentations The notion of $\infty$-groupoid is the generalization of that of group and groupoids to higher category theory: an $\infty$-groupoid – equivalently an (∞,0)-category – is an ∞-category in which all k-morphisms for all $k$ are equivalences. The collection of all $\infty$-groupoids forms the (∞,1)-category ∞Grpd. Special cases of $\infty$-groupoids include groupoids, 2-groupoids, 3-groupoids, n-groupoids, deloopings of groups, 2-groups, ∞-groups. There are many ways to present the (∞,1)-category ∞Grpd of all $\infty$-groupoids, or at least obtain its homotopy category. A simple and very useful incarnation of $\infty$-groupoids is available using a geometric definition of higher categories in the form of simplicial sets that are Kan complexes: the $k$-cells of the underlying simplicial set are the k-morphisms of the $\infty$-groupoid, and the Kan horn-filler conditions encode the fact that adjacent $k$-morphisms have a (non-unique) composite $k$-morphism and that every $k$-morphism is invertible with respect to this composition. See Kan complex for a detailed discussion of how these incarnate $\infty$-groupoids. The (∞,1)-category of all $\infty$-groupoids is presented along these lines by the Quillen model structure on simplicial sets, whose fibrant-cofibrant objects are precisely the Kan complexes: $\infty Grpd \simeq (sSet_{Quillen})^\circ \,.$ One may turn this geometric definition into an algebraic definition of ∞-groupoids by choosing horn-fillers . The resulting notion is that of an algebraic Kan complex that has been shown by Thomas Nikolaus to yield an equivalent (∞,1)-category of $\infty$-groupoids. There are various model categories which are Quillen equivalent to $sSet_{Quillen}$. For instance the standard model structure on topological spaces, a model structure on marked simplicial sets and many more. All these therefore present ∞Grpd. Moreover, the corresponding homotopy category of an (∞,1)-category $Ho(\infty Grpd)$, hence a category whose objects are homotopy types of $\infty$-groupoids, is given by the homotopy category of the category of presheaves over any test category. See there for more details. Every other algebraic definition of omega-categories is supposed to yield an equivalent notion of $\infty$-groupoid when restricted to $\omega$-categories all whose k-morphisms are invertible. This is the statement of the homotopy hypothesis, which is for Kan complexes and algebraic Kan complexes a theorem and for other definitions regarded as a consistency condition. Notably in Pursuing Stacks and the earlier letter to Larry Breen, Alexander Grothendieck initiated the study of $\infty$-groupoids and the homotopy hypothesis with his original definition of Grothendieck weak infinity-groupoids, that has recently attracted renewed attention. Strict $\infty$-groupoids One may also consider entirely strict $\infty$-groupoids, usually called $\omega$-groupoids or strict ω-groupoids. These are equivalent to crossed complexes of groups and groupoids. Relation to $\infty$-groups 0-connected $\infty$-groupoids are the delooping $\mathbf{B}G$ of ∞-groups (see looping and delooping). These are presented by simplicial groups. Notably abelian simplicial groups are therefore a model for abelian $\infty$-groupoids. Under the Dold-Kan correspondence these are equivalent to non-negatively graded chain complexes, which therefore also are a model for abelian $\infty$-groupoids. This way much of homological algebra is secretly the study of special $\infty$-groupoids. Formulations in homotopy type theory include See also at category object in an (infinity,1)-category for more along these lines.
{"url":"http://ncatlab.org/nlab/show/infinity-groupoid","timestamp":"2014-04-19T17:04:15Z","content_type":null,"content_length":"62622","record_id":"<urn:uuid:ea081e43-e8a2-4140-8d23-8be3e7af5bce>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
The Science of Programming/Working on the Chain Gang Sometimes, as SPT in CME, Chapter IX, points out, you find yourself puzzling over how to differentiate something complicated like: $y = 3 (x^2 + 17)^2$ The approach is to make the expression simpler by abstracting away the detail. Let a be the polynomial: $a = x^2 + 17$ We can represent this using a sum of terms: Now, y can be rewritten as: $y = 3 a^2$ To find the derivative of y with respect to x, we use the chain rule: $\frac{dy}{dx} = \frac{dy}{da} * \frac{da}{dx}$ In other words, the differential of y with respect to x is equal to the differential of (the rewritten) y with respect to (the new) a multiplied by the differential of (the new) a with respect to x. Programmatically, we have: var a = term(1,:x,2) plus term(17,:x,0); var y = term(3,:a,2); var dy/dx = y . diff(:a) times a . diff(:x); Looking at what dy/dx represents, we have: sway> a . toString(); STRING: x^2 + 17 sway> y . toString(); STRING: 3x^2 sway> dy/dx . toString(); STRING: 6a * 2x Doing the final substitution by hand ($a = x^2 + 17$), we get the final answer for dx/dy: dy/dx = 6 * (x^2 + 17) * 2x = (6x^2 + 102) * 2x = 12x^3 + 204x Implementing the chain ruleEdit It would be nice to have the chain rule substitution step automatically done for us, but doing so requires a bit of work, both conceptually and programmatically. We begin by extending the idea of abstracting the variable of a term. Recall that, at first, we hard-wired the term variable as x. Next, we allowed the caller of the term constructor to pass in the independent variable as a Sway symbol. The next step in the abstraction is to allow the term variable to be a term (or sum of terms or whatever) itself. If we did so, then a term's diff method would become the chain rule: function diff(wrtv) term(a * n,iv,n - 1) times iv . diff(wrtv); Obviously, the independent variable iv can no longer be a symbol, but must be instead an object with a diff method. So, to represent a term of the form: $a x^b$ we would need to use an object to represent the variable x. The constructor for such a variable object would look similar to term and plus constructors. That is, it must have value, toString, and diff methods^[1]: function variable(name) function value(x) { x; } function toString() { "" + name; } function diff(wrtv) if (wrtv == name) The rule for finding the derivative of a simple variable is: if the with-respect-to variable matches, the result is one. If not, the result is zero. We will use constants to represent the numbers zero and one; in this way, every item in our system, including numbers, has toString, value, and diff methods. To make our lives simpler, we can add the following logic to the body of the term constructor. If a symbol is passed in as the independent variable, we will convert it into a variable object.^[2] In this way, we can pass in a symbol as before. Here is a mock-up of the new term constructor: function term(a,iv,n) function value(x) { ... } function toString() { ... } function diff(wrtv) if (n == 0) term(a * n,iv,n - 1) times iv . diff(wrtv); if (iv is :SYMBOL, iv = variable(iv)); We will also need to modify term's toString method to call iv's visualization. Here is the new non-simplifying version: function toString() "" + a + iv . toString() + "^" + n; Let's test our modified system: var t = term(4,:x,3); var t' = t . diff(:x); sway> t . toString(); STRING: 4x^3 sway> t . iv; OBJECT: <OBJECT 1958> sway> t' . toString(); STRING: 12x^2 It seems to be working so far for simple variables. Now let's try our original problem: $y = 3 (x^2 + 17)^2$ First, we make our polynomial: var a = term(1,:x,2) plus term(17,:x,0); var y = term(3,a,2); // not :a Now, we visualize it: sway> y . toString(); STRING: 31x^2 + 17x^0^2 Ouch! What did we do wrong? We need to parenthesize the visualization of iv: function toString() "" + a + "(" + iv . toString() + ")" + "^" + n; Remaking a and y with term's new visualization yields: var a = term(1,:x,2) plus term(17,:x,0); var y = term(3,a,2); sway> y . toString(); STRING: 3(1x^2 + 17x^0)^2 If you use a simplifying toString method for terms, you should get: 3(x^2 + 17)^2 exactly as desired! Now let's differentiate y (you will need your times constructor up and running): var y' = y . diff(); sway> y' . toString(); STRING: 6(x^2 + 17) * 2x We still have two little problems. The first is that the above result is not in its simplest form. Unfortunately, producing the simplest form is rather a complex process (compounded by the fact that is is not always clear which form is the simplest). So we will stop at this point and be happy. The other little problem occurs when we visualize a: sway> a . toString(); STRING: (x)^2 + 17 We've gone overboard with the parentheses. Clearly, when the independent variable is a complex object, we want to use parens. When it is a simple variable, however, we should eschew parens. This task is left as an exercise. 1. Explain why the diff method for terms no longer needs to test whether or not the with-respect-to variable matches the independent variable. 2. Implement the one function. 3. Modify the simplifying toString method for terms to print out parentheses only when the independent variable is complex and ether the coefficient or the exponent is not equal to one. Hint: Create a term method that adds parentheses around iv's visualization if it is complex but simply returns iv's visualization if it is not. Call this method from toString where appropriate. 4. CME p. 100, 1, 8 using sway 5. CME p. 100, 2, 3, 5, 8 using pencil and paper 1. ↑ This is the heart of the object-oriented approach to programming: related objects have the same methods, but the methods are customized for the particular object. 2. ↑ This little trick illustrates an important principle in the design of the computer programs: do as much for the user of your code as possible. We could force the user to pass in a variable object or we could allow the user to pass in a symbol, as before, and do the work ourselves. Last modified on 28 October 2010, at 22:36
{"url":"http://en.m.wikibooks.org/wiki/The_Science_of_Programming/Working_on_the_Chain_Gang","timestamp":"2014-04-20T08:19:54Z","content_type":null,"content_length":"24075","record_id":"<urn:uuid:f4cc943b-2bd5-4160-8bc6-99081b6d05de>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Number bases From Uncyclopedia, the content-free encyclopedia Number bases are convenient ways of counting things invented by Indian mathematicians. All number and all forms of counting come originally from India. The most important thing to know is that the Indians invented all numbers, numerals, and mathematics. Digit Placement Systems All of the following number systems vary in a number of different aspects, however one of the most important factors to recall when looking at the various numeral systems is that they almost all use digit placement as a method of determining value. For instance, quaternary expands by digit placement, even though it appears limited by the lack of digits. Where you put the digit dramatically increases or decreases the value. Placing a digit in one hole can cause a positive reaction, whereas placing digit in another hole will cause a negative reaction, and an inability to place said digit in the first hole that we were discussing again. The reasoning behind this should be obvious. Digit placement in quaternary works thus: 0, 1, 2, 3, Lots, Lots-1, Lots-2, Lots-3, 2-Lots, 2-Lots-1, 2-Lots-2, 2-Lots-3, 3-Lots, 3-Lots-1, 3-Lots-2, 3-Lots-3, Heaps A famous example of using digit placement relates to the boy putting his digit in the dyke in an attempt to save his hometown from flooding. No doubt, he saved the town in this instance, but often putting your digit in a dyke will have a very different value. Base 0 - The Nunnery system Symbols Used The Nunnery System Base Zero (or Nunnery system, Nonery system, or Nonetal) is the most Zen like of all counting systems. The Nunnery system was named after the amount of sexual activity that mathematicians generally get (None in the morning, none at night...) Where all other systems have $x$ amount of digits, the Nunnery system has none. This would mean that at any stage of your life you would have a bank balance of $ with which you could use it top purchase items from $ to $ . It also means that you would have to work days until your retirement. The Indians invented the number zero. Nobody else wants to claim responsibility. Base 1 - The Urinary system Symbols Used The Urinary system is based only on this one digit. │ Base 1, or the Urinary system (often misquoted as the Unary system) is the basis of all number systems and is widely believed to be the first number system used. It was made up of a single digit that is used repeatedly. The Urinary number system works as follows: 1.│ 2.││ 3.│││ 4.││││ 5.│││││ 6.││││││ 7.│││││││ 8.││││││││ 9.│││││││││ 10.││││││││││ 11.Lots Although the Chinese, Japanese and Korean people have tried to claim the creation of a Urinary system (as evidenced by their Tally system this has been predated by the Brahmi (Indian) numerical symbols ( , , , etc.) that have been discovered as early as 3rd Century BCE. This proves the mathematical superiority of the Indian people. Base 2 - the Binary System Symbols Used Ye olde style I/O button. 1 0 or ◯ │ Base 2 is twice as complicated as the Urinary system to count things. Also known as Binary, this was an invention by women who decided that through years of frustration and anguish, they needed a system to be able to indicate to men whether they were on, or off the mark. Base 2 is responsible for many woes of the modern world, including fuzzy logic, Microsoft, and the internet According to Binary mathematicians, there are only 10 types of people in the world, 01. Those that understand binary 10. Those that don't And a varying scale between the two extremes, leaving the bulk of the population in the second and third quartile. Computers occasionally use binary. In the early days of computing, computers had an on/off switch, usually along the lines of "Get that for me, will you Igor." This was slowly replaced by the symbolised on/off switch, so that one end of the switch displays a │ while the other end displays a ◯. This became confusing and was then replaced by what is referred to as an I/O switch. This then became even further confusingly and was then replace by a single symbol that had these symbols combined. This is referred to as the "ring of power." Now of course the bulk of computers don't do anything as pedestrian as turning on and off, and instead prefer to sit in "User available access mode", "Hibernation for the winter mode", or "Input/output device unready mode." Each place in Binary is referred to as a bit. Fractions are referred to as "a bit on the side." In the 11th Century, an arrangement of the hexagrams of the I Ching, using a two symbol system, was developed by the Chinese scholar and philosopher Shao Yong, however there is no evidence that he knew really what he was doing. In fact, many scholars believe that he was just making pretty pictures. The Indian writer Pingala (c. 200 BC) developed advanced mathematical concepts for describing prosody, and in doing so presented the first known description of a binary numeral system. Notice again where he comes from. That's right! Base 4 - The Quaternary System Symbols Used The fantastic four, the champions of the Quaternary. 0 1 2 3 The Base 4 system, or The Quaternary System or Four play, uses four digits, meaning that for every 2 bits of binary information you would only have 1 Quaternary place. Having said all of that, of course this is just a two-bit system. There has been some debate about the origin of this number system. Many Rock drummers find this the easiest form of counting. Many or all of the Chumashan languages originally used a base 4, and as everyone knows the Chumashan are all American Indians. Base 8 - The Octal system Symbols Used Octagon, a movie raising awareness of the often forgotten Octal system. 0 1 2 3 4 5 6 7 Base 8, or Octal, is a general purpose counting system for stuff that comes in eights, such as beer. Unless you get beer is six packs. Or four packs. Or in cases of 24. Or one at a time. If you have 8 of something, count it in Octal. If you have more or less than 8, use another Base. Base 8 has more than seven digits, but is well known for having less than nine. The legend is that this was originally a counting system of nine digits, but 7, the most feared digit of all, was hungry one evening, and when the rest of the digits woke up they found out that seven ate nine. This is likely apocryphal. The Yuki language in California and the Pamean languages in Mexico have octal systems because the speakers count using the spaces between their fingers rather than the fingers themselves. The Indians think this is stupid and they just count but ignore thumbs, thus proving that although they may not have invented it, they perfected it. Base 10 - the Decimal System Symbols Used The basis of the decimal system. 0 1 2 3 4 5 6 7 8 9 The most popular number base is Base 10. It is often believe that this is due to science following nature, as a marijuana leaf is made up of nine fronds and one stem, giving ten points, and most mathematical concepts and precepts are created while the mathematician is enjoying the effects of excessive marijuana consumption. By amazing coincidence, the numbers in Base 10 coincide with the numbers we all use for such common tasks as counting peas, children, trees, misfortunes, and marijuana leaves. By assiduous use of fingers and toes, it has been shown that one can represent almost any quantity of stuff or more of the above digits used in close conjunction with each other. For numbers below the value of the lowest digit, fractions of digits are used, often called "nail clippings." The Decimal system was invented by American librarian, educator, and humanitarian, Melvil Dewey. However. it appears that he based his inventions on the inventions of the Indian people from many years prior. Nowhere else is this as evident as seen in the comparison of the digits used. , , , , , , , , 1, 2, 3, 4, 5, 6, 7, 8, 9 See, it's obvious that the Indian system of numerals came first, otherwise why else would the Brahmi (Indian) numerals be on top? Base 16 - The Hexadecimal System Symbols Used Modern computers often use hexadecimal numbers. 0 1 2 3 4 5 6 7 8 9 A B C D E F Base 16 is popularly known as hexadecimal. The use of the word 'hex' obviously indicates the use of magic in the application of the Base 16 counting system. When quoting IP address on the Internet Experts often use the Base 10 equivalent of the binary addresses, however when they quote the MAC address of the Computer they often read them in Hex. When questioned on the reason why, they often say that this is 248. If they didn't do this they would be 57005. What's the problem, are you 57007? Amongst Internet Experts this is considered high humour. Many Internet Experts come from India. Apparently, this numerical system is starting to 64222 from use. Other number systems Base I - The ROMAN numeral system Symbols Used Julius, once head of the ROMAN Empire, trying to work out how they get V to mean five. I V X L C D M The ROMAN Numeral system is still extremely popular today by copywriters, as it makes more logical sense to say that a futuristic sci-fi like Return of the Killer Tomatoes! is © MCMLXXXVIII than Rather than deriving value from the more traditional digit placement, Roman Numerals are all of a particular value, so that an I is the equivalent value of 1, V is 5, and so on, as shown below. I → 1 V → 5 X → 10 L → 50 C → 100 D → 500 M → 1000 However, when in ROME , always place digits like the ROMANS do. in order to have the number 4, for example, rather than having IIII, the ROMANS have instead chosen to use IV, thus making this a more one I'd system. Due to the limitations of the system, often numerical suffixes were introduced, most commonly K which represented multiplied by 1,000, to get to numbers higher than one thousand. This was compoundable, so that KK was multiply by 1,000,000. KKK, however, is just absurd. It was considered high praise to relate to people by numbers instead of names. Caesar, being above most others, was often referred to as 599,000. This practise is still in place today in some motorcycle clubs, so ensure that when you next see a motorcyclist to refer to him as a DICK. The ROMANS stole all their inventions from the Greeks, who in turn stole all their inventions from the Indians. Base £ - The Imperial system Symbols Used Leader of the Imperial system ' yd fath fur in " mi µin mi[(naut., U.S., U.K.)] rd Not about to be outdone by the ROMANS, the British Empire also introduced it's own numbering system. The Imperial system is extremely simple to understand. It works on a system of a varying base dependant on what it is you are counting or measuring. As an example the Imperial measurement for length is an inch. Once you have 12 inches you have a foot, and once you have 3 feet you have a yard. Unless of course you were at sea then you had 6.08 feet to a fathom. . Of course, in practise a fathom was actually 2 yards, which was 6 feet. And 100 fathoms made a chain. Unless you were talking about fathoms in practice, which as we said before they were only 6 feet instead of 6.08 feet, as a chain would be 608 feet, which would be 100 true fathoms, or 101.3333 practical fathoms, as this is an easier measurement. Approximately. And, of course, there is also the link, which is 7.92 inches, or 0.66 of a foot. 25 links would make a pole, which is also known as a rod or a perch. This would mean that a pole would be the equivalent of 5.5 yards, or 2.71382 true fathoms, or 2.75 practical fathoms. And a chain would be 4 perches, or 0.10855 cables, or 792 inches. Now a furlong is 220 yards, or 660 feet, or 110 practical fathoms, or 108.552632 true fathoms, or 1.085526 chains. But this of course would mean that you would have to be on land. 8 furlongs would make a mile, which would also be the length of 5280 feet, 868.421056 true fathoms, not to be confused with 880 practical fathoms. Of course if you were naughty you would use the nautical mile, which is 6080 feet, or 1000 true fathoms, or 1013.3333 practical fathoms. And a League is 2605.26316 true fathoms, or 240 chains. And that's just length. The Indians used Imperial measurements during the colonisation by the British. They gave it back. Obviously the Indians invented every number system, as well as everything to do with numbers. Indians invented Algebra, Differential Calculus, Imaginary numbers, factorials, happy numbers, and magic numbers. In fact, every time that you have ever been given any test in school relating to mathematics there's a strong possibility that it was written by Indians. When you think about it, Indians are bastards, aren't they? Featured Article (read another featured article) Featured version: 6 September 2009 This article has been featured on the front page. — You can vote for or nominate your favourite articles at
{"url":"http://uncyclopedia.wikia.com/wiki/Number_bases?direction=prev&oldid=5470630","timestamp":"2014-04-19T18:32:23Z","content_type":null,"content_length":"75389","record_id":"<urn:uuid:132324af-4500-4d32-b3a7-0ec8a84fede5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: modeling PRA in the physical world Vladimir Sazonov sazonov at logic.botik.ru Sat Mar 6 03:23:01 EST 1999 Stephen G Simpson wrote: > In my posting of 1 Mar 1999 23:35:34 I suggested the following > approach to f.o.m.: > > to argue that PRA is consistent because the physical world provides > > a model of it, and then to justify at least a significant fragment > > of mathematics by reducing it to PRA, a la Hilbert's program of > > finite reductionism. > Raatikainen 3 Mar 1999 19:49:54 asked > > isn't it the received view in it that the universe is finite ? > > PRA, on the other hand, requires an infinite universe of discourse. It seems better to say that the universe is bounded, but possibly infinite. (Something like what we can see in non-standard models of arithmetic). Otherwise, we could ask whether the number of electrons in the universe is even or odd. This also shows that it is sometimes meaningless to use our mathematical terminology in non-mathematical contexts. > The universe is finite, but Aristotle posits a distinction between > actual and potential infinity. The earth has rotated around the sun a > finite number of times, but this number is potentially infinite. A > physical line segment may be finite yet potentially infinite by > division, i.e. capable of being subdivided an unlimitedly large number > of times. What would say about this physicists? > One can argue that PRA is implicit in the physical fact is this physical *fact*?? > that many kinds of discrete acts can be repeated a potentially > infinite (or at least an unlimitedly large finite) number of times. > The act of adding 1 can be repeated indefinitely, and this gives > addition. The act of addition can be repeated indefinitely, and this > gives multiplication. Etc. PRA has been identified with finitism. > Hilbert described finitism as an unproblematic part of mathematics, > which eschews the infinite and is indispensable for all scientific > reasoning. Thus, despite the physical world is bounded (finite?) it is concluded that it models PRA?? I would say, that potential infinity (our ability to distract from resource bounds) is peoples *invention*. It does no hold in our universe in any direct sense. May be only in the weakest sense: it is unclear where is the exact bound of our abilities. Then we, mathematicians, *postulate* that successor operation always gives a new number. We do this because such a *decision* allows to develop mathematics in a reasonable convenient way. Convenience is something different from truth or falsity in the real world. Yes, we may postulate more: our ability to *iterate* any operation which has been defined. Thus we really come to the notion of *total* arithmetical operations of addition, multiplication, exponential, and primitive recursive functions. But this happens not in the physical world, but in our thought. I believe that we should make clear distinctions. In principle, this is not the only possible way to abstract from reality. We could reason as "realists" and take just "negation" of the abstraction of potential infinity (feasibility): we should not distract from resource bounds, but rather always *relativise* our reasoning to these bounds. This would lead us to another version of arithmetic with the maximal natural number postulated. It is natural that the exact value of this maximal number (a possible resource bound; say, the memory of some concrete computer) is not specified in such an arithmetic. It can be demonstrated (by a theorem) that this approach leads to a theory of polynomial time computability which is in a sense equivalent to bounded arithmetic. Let me recall also that Mycielski have demonstrated how the first basic concepts and results of Analysis can be developed in such kind of "finite" arithmetic. Recall also his related result that any consistent theory (say ZFC) is "isomorphic" (in a reasonable sense) to another theory whose each finite fragment has a (possibly very large) finite model. (It seems he called such theories locally Who knows which could be other possibilities to start constructing some (imaginary, but applicable) mathematical world. Essentially only one of such possibility was worked out extensively by mathematicians. > I'm aware that much more needs to be said, but this seems like a > promising line for f.o.m. > -- Steve Vladimir Sazonov P.S. By the way, as Mycielski was mentioned above, I think it will be interesting to FOMers his short abstract of the BEST 7 conference talk which is related to some discussions on FOM and which I have red with the great pleasure: "To FOM or not to FOM, that is the question" It is very pity that he does not participate more explicitly in FOM ("not to FOM?"). More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002748.html","timestamp":"2014-04-16T22:10:21Z","content_type":null,"content_length":"7585","record_id":"<urn:uuid:eea55a16-ec2c-404f-8399-7acf798d6232>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Explain these please August 1st 2012, 03:59 AM #1 Jul 2012 Explain these please Q1.Reduce $sin^{4}\theta$ to an expression involving only function of multiples of $\theta$,raised to the first power. Q2.How $2cos^{2}\theta=1+cos2\theta$ I would also like to know how to reduce any function to the first power Re: Explain these please are you not familiar with the derivation of the double angle identity for cosine ? $\cos(2t) = \cos(t + t) = \cos{t} \cdot \cos{t} - \sin{t} \cdot \sin{t} = \cos^2{t} - \sin^2{t}$ power reduction identities derived from this double angle identity for cosine ... $\cos^2{t} = \frac{1+\cos(2t)}{2}$ $\sin^2{t} = \frac{1-\cos(2t)}{2}$ August 1st 2012, 04:08 AM #2
{"url":"http://mathhelpforum.com/trigonometry/201596-explain-these-please.html","timestamp":"2014-04-16T15:03:44Z","content_type":null,"content_length":"34656","record_id":"<urn:uuid:c3fc678e-5c2f-4b9b-a648-c50caa5a1037>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponential Smoothing Explained Exponential Smoothing Explained. When people first encounter the term Exponential Smoothing they may think “that sounds like a hell of a lot of smoothing . . . whatever smoothing is". They then start to envision a complicated mathematical calculation that likely requires a degree in mathematics to understand, and hope there is a built-in Excel function available if they ever need to do it. The reality of exponential smoothing is far less dramatic and far less traumatic. The truth is, exponential smoothing is a very simple calculation that accomplishes a rather simple task. It just has a complicated name because what technically happens as a result of this simple calculation is actually a little complicated. To understand exponential smoothing, it helps to start with the general concept of “smoothing” and a couple of other common methods used to achieve smoothing. What is smoothing? Smoothing is a very common statistical process. In fact, we regularly encounter “smoothed” data in various forms in our day-to-day lives. Any time you use an average to describe something, you are using a smoothed number. If you think about why you use an average to describe something, you will quickly understand the concept of smoothing. For example, we just experienced the warmest winter on record. How are we able to quantify this? Well we start with datasets of the daily high and low temperatures for the period we call “Winter” for each year in recorded history. But that leaves us with a bunch of numbers that jump around quite a bit (it’s not like every day this winter was warmer than the corresponding days from all previous years). We need a number that removes all this “jumping around” from the data so we can more easily compare one winter to the next. Removing the “jumping around” in the data is called smoothing, and in this case we can just use a simple average to accomplish the smoothing. In demand forecasting, we use smoothing to remove random variation (noise) from our historical demand. This allows us to better identify demand patterns (primarily trend and seasonality) and demand levels that can be used to estimate future demand. The “noise” in demand is the same concept as the daily “jumping around” of the temperature data. Not surprisingly, the most common way people remove noise from demand history is to use a simple average—or more specifically, a moving average. A moving average just uses a predefined number of periods to calculate the average, and those periods move as time passes. For example, if I’m using a 4-month moving average, and today is May 1st, I’m using an average of demand that occurred in January, February, March, and April. On June 1st, I will be using demand from February, March, April, and May. Weighted moving average. When using an “average” we are applying the same importance (weight) to each value in the dataset. In the 4-month moving average, each month represented 25% of the moving average. When using demand history to project future demand (and especially future trend), it’s logical to come to the conclusion that you would like more recent history to have a greater impact on your forecast. We can adapt our moving-average calculation to apply various “weights” to each period to get our desired results. We express these weights as percentages, and the total of all weights for all periods must add up to 100%. Therefore, if we decide we want to apply 35% as the weight for the nearest period in our 4-month “weighted moving average”, we can subtract 35% from 100% to find we have 65% remaining to split over the other 3 periods. For example, we may end up with a weighting of 15%, 20%, 30%, and 35% respectively for the 4 months (15 + 20 + 30 + 35 = 100). Exponential smoothing. If we go back to the concept of applying a weight to the most recent period (such as 35% in the previous example) and spreading the remaining weight (calculated by subtracting the most recent period weight of 35% from 100% to get 65%), we have the basic building blocks for our exponential smoothing calculation. The controlling input of the exponential smoothing calculation is known as the smoothing factor (also called the smoothing constant). It essentially represents the weighting applied to the most recent period’s demand. So, where we used 35% as the weighting for the most recent period in the weighted moving average calculation, we could also choose to use 35% as the smoothing factor in our exponential smoothing calculation to get a similar effect. The difference with the exponential smoothing calculation is that instead of us having to also figure out how much weight to apply to each previous period, the smoothing factor is used to automatically do that. So here comes the “exponential” part. If we use 35% as the smoothing factor, the weighting of the most recent period’s demand will be 35%. The weighting of the next most recent period’s demand (the period before the most recent) will be 65% of 35% (65% comes from subtracting 35% from 100%). This equates to 22.75% weighting for that period if you do the math. The next most recent period’s demand will be 65% of 65% of 35%, which equates to 14.79%. The period before that will be weighted as 65% of 65% of 65% of 35%, which equates to 9.61%, and so on. And this goes on back through all your previous periods all the way back to the beginning of time (or the point at which you started using exponential smoothing for that particular item). You’re probably thinking that’s looking like a whole lot of math. But the beauty of the exponential smoothing calculation is that rather than having to recalculate against each previous period every time you get a new period’s demand, you simply use the output of the exponential smoothing calculation from the previous period to represent all previous periods. Are you confused yet? This will make more sense when we look at the actual calculation Typically we refer to the output of the exponential smoothing calculation as the next period “forecast”. In reality, the ultimate forecast needs a little more work, but for the purposes of this specific calculation, we will refer to it as the forecast. The exponential smoothing calculation is as follows: The most recent period’s demand multiplied by the smoothing factor. PLUS The most recent period’s forecast multiplied by (one minus the smoothing factor). D = most recent period’s demand S = the smoothing factor represented in decimal form (so 35% would be represented as 0.35). F = the most recent period’s forecast (the output of the smoothing calculation from the previous period). OR (assuming a smoothing factor of 0.35) (D * 0.35) + ( F * 0.65) It doesn’t get much simpler than that. As you can see, all we need for data inputs here are the most recent period’s demand and the most recent period’s forecast. We apply the smoothing factor (weighting) to the most recent period’s demand the same way we would in the weighted moving average calculation. We then apply the remaining weighting (1 minus the smoothing factor) to the most recent period’s forecast. Since the most recent period’s forecast was created based on the previous period’s demand and the previous period’s forecast, which was based on the demand for the period before that and the forecast for the period before that, which was based on the demand for the period before that and the forecast for the period before that, which was based on the period before that . . . well, you can see how all previous period’s demand are represented in the calculation without actually going back and recalculating anything. And that’s what drove the initial popularity of exponential smoothing. It wasn’t because it did a better job of smoothing than weighted moving average, it was because it was easier to calculate in a computer program. And, because you didn’t need to think about what weighting to give previous periods or how many previous periods to use, as you would in weighted moving average. And, because it just sounded cooler than weighted moving average. In fact, it could be argued that weighted moving average provides greater flexibility since you have more control over the weighting of previous periods. The reality is either of these can provide respectable results, so why not go with easier and cooler sounding. Exponential Smoothing in Excel Let’s see how this would actually look in a spreadsheet with real data. In Figure 1A, we have an Excel spreadsheet with 11 weeks of demand, and an exponentially smoothed forecast calculated from that demand. I’ve used a smoothing factor of 25% (0.25 in cell C1). The current active cell is Cell M4 which contains the forecast for week 12. You can see in the formula bar, the formula is =(L3*$C1)+(L4*(1-$C1)) . So the only direct inputs to this calculation are the previous period’s demand (Cell L3), the previous period’s forecast (Cell L4), and the smoothing factor (Cell C1, shown as absolute cell reference $C1). When we start an exponential smoothing calculation, we need to manually plug the value for the 1st forecast. So in Cell B4, rather than a formula, we just typed in the demand from that same period as the forecast. In Cell C4 we have our 1st exponential smoothing calculation =(B3*$C1)+(B4*(1-$C1)) . We can then copy Cell C4 and paste it in Cells D4 through M4 to fill the rest of our forecast You can now double-click on any forecast cell to see it is based on the previous period’s forecast cell and the previous period’s demand cell. So each subsequent exponential smoothing calculation inherits the output of the previous exponential smoothing calculation. That’s how each previous period’s demand is represented in the most recent period’s calculation even though that calculation does not directly reference those previous periods. If you want to get fancy, you can use Excel’s “trace precedents” function. To do this, click on Cell M4, then on the ribbon tool bar (Excel 2007 or 2010) click the Formulas tab, then click “Trace Precedents”. It will draw connector lines to the 1st level of precedents, but if you keep clicking Trace Precedents it will draw connector lines to all previous periods to show you the inherited relationships. Now let’s see what exponential smoothing did for us. Figure 1B shows a line chart of our demand and forecast. You case see how the exponentially smoothed forecast removes most of the jaggedness (the jumping around) from the weekly demand, but still manages to follow what appears to be an upward trend in demand. You’ll also notice that the smoothed forecast line tends to be lower than the demand line. This is known as “trend lag” and is a side effect of the smoothing process. Any time you use smoothing when a trend is present; your forecast will lag behind the trend. This is true for any smoothing technique. In fact, if we were to continue this spreadsheet and start inputting lower demand numbers (making a downward trend) you would see the demand line drop, and the trend line move above it before starting to follow the downward trend. That’s why I previously mentioned the output from the exponential smoothing calculation that we call a forecast, still needs some more work. There is a lot more to forecasting than just smoothing out the bumps in demand. We need to make additional adjustments for things like trend lag, seasonality, known events that may effect demand, etc. But all that is beyond the scope of this article. You will likely also run into terms like double-exponential smoothing and triple-exponential smoothing. These terms are a bit misleading since you are not re-smoothing the demand multiple times (you could if you want, but that’s not the point here). These terms represent using exponential smoothing on additional elements of the forecast. So with simple exponential smoothing, you are smoothing the base demand, but with double-exponential smoothing you are smoothing the base demand plus the trend, and with triple-exponential smoothing you are smoothing the base demand plus the trend plus the seasonality. The other most commonly asked question about exponential smoothing is “where do I get my smoothing factor?” There is no magical answer here, you need to test various smoothing factors with your demand data to see what gets you the best results. There are calculations that can automatically set (and change) the smoothing factor. These fall under the term “adaptive smoothing”, but you need to be careful with them. There simply is no perfect answer and you should not blindly implement any calculation without thorough testing and developing a thorough understanding of what that calculation does. You should also run “what-if” scenarios to see how these calculations react to demand changes that may not currently exist in the demand data you are using for testing. The data example I used previously is a very good example of a situation where you really need to test some other scenarios. That particular data example shows a somewhat consistent upward trend. Many large companies with very expensive forecasting software got in big trouble in the not-so-distant past when their software settings that were tweaked for a growing economy didn’t react well when the economy started stagnating or shrinking. Things like this happen when you don’t understand what your calculations (software) is actually doing. If they understood their forecasting system, they would have known they needed to jump in and change something when there were sudden dramatic changes to their business. So there you have it; the basics of exponential smoothing explained. Want to know more about using exponential smoothing in an actual forecast, check out my book Inventory Management Explained. More Articles by Dave Piasecki. Dave Piasecki, is owner/operator of Inventory Operations Consulting LLC, a consulting firm providing services related to inventory management, material handling, and warehouse operations. He has over 25 years experience in operations management and can be reached through his website (http://www.inventoryops.com), where he maintains additional relevant information.
{"url":"http://www.inventoryops.com/articles/exponential_smoothing.htm","timestamp":"2014-04-18T03:08:37Z","content_type":null,"content_length":"26727","record_id":"<urn:uuid:bf390774-e0b6-4336-afae-5fac9c794f5b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
maps of heterogeneous media by photoacoustic « journal navigation Reconstruction of optical absorption coefficient maps of heterogeneous media by photoacoustic tomography coupled with diffusion equation based regularized Newton Method Optics Express, Vol. 15, Issue 26, pp. 18076-18081 (2007) We describe a novel reconstruction method that allows for quantitative recovery of optical absorption coefficient maps of heterogeneous media using tomographic photoacoustic measurements. Images of optical absorption coefficient are obtained from a diffusion equation based regularized Newton method where the absorbed energy density distribution from conventional photoacoustic tomography serves as the measured field data. We experimentally demonstrate this new method using tissue-mimicking phantom measurements and simulations. The reconstruction results show that the optical absorption coefficient images obtained are quantitative in terms of the shape, size, location and optical property values of the heterogeneities examined. © 2007 Optical Society of America 1. Introduction Biomedical photoacoustic tomography (PAT) is a potentially powerful imaging method for visualizing the internal structure of soft tissues with excellent spatial resolution and satisfactory imaging depth.^1–17 While conventional PAT can image tissues with high spatial resolution, it provides only the distribution of absorbed light energy density that is the product of both the intrinsic optical absorption coefficient and extrinsic optical fluence distribution. Thus the imaging parameter of conventional PAT is clearly not an intrinsic property of tissue. It is well known, however, that it is the tissue absorption coefficient that directly correlates with tissue physiological/functional information. These physiological parameters including hemoglobin concentration, blood oxygenation and water content are critical for accurate diagnostic decision-making. Several methods reported suggest that it is possible to recover optical property maps when conventional PAT is combined with a light transport model.^11–14 However, there are several limitations associated with these methods. First, in these methods, one has to know the exact boundary reflection coefficients as well as the exact strength and distribution of incident light source which require careful experimental calibration procedures. It is often difficult to obtain these initial parameters accurately. Second, the recovered results strongly depend on the accuracy of the distribution of absolute absorbed energy density from conventional PAT. As such, to overcome the limitations mentioned above, in this paper we propose a novel reconstruction approach that combines conventional PAT with diffusion equation based regularized Newton method for accurate recovery of optical properties. This work represents the first application of the diffusion equation based iterative nonlinear algorithms that couple the conventional Tikhonov regularization with a priori spatial information-based regularization schemes for reconstruction of absorption coefficient using tomographic photoacoustic measurements. We demonstrate this method using a series of phantom experiments. 2. Methods and materials In our reconstruction method, the absorbed optical energy density is first recovered by a finite element-based PAT reconstruction algorithm.^10,17 By incorporating the recovered absorbed energy density distribution into the photon diffusion equation, the absorption coefficient map is then extracted using a diffusion equation based regularized Newton method. The core procedure of our PAT algorithm can be described by the following two equations in which is the pressure wave; k [0] ω/c [0] is the wave number described by the angular frequency, and the speed of acoustic wave in the medium, c[0]; β is the thermal expansion coefficient; is the specific heat; Φ is absorbed light energy density that is the product of optical absorption coefficient, and optical fluence or photon density, Ψ (i.e., Φ= p ^o p ^0 [1] p ^0 [2] p ^0 [M] p ^c p [c] [1] p ^c [2] , where are observed and computed complex acoustic field data for i=1, 2…, boundary locations; Δ is the update vector for the absorbed optical energy density; ℑ is the Jacobian matrix formed by ∂p/∂Φ at the boundary measurement sites; is a Levenberg-Marquardt regularization parameter is the identity matrix. Thus here the image formation task is to update absorbed energy density distribution via iterative solution of Eqs. (1) so that a weighted sum of the squared difference between computed and measured acoustic data can be minimized. To recover the optical absorption coefficient µ[a](r) from the absorbed energy density, Φ, the photon diffusion equation as well as the Robin boundary conditions (BCs) can be written in consideration of Φ=µ[a]Ψ, For the inverse computation, the Tikhonov-regularization sets up a weighted term as well as a penalty term in order to minimize the squared differences between computed and measured absorbed energy density values,^16 is the regularization matrix or filter matrix, the regularization parameter and E [0] the initial guess of the inverse of optical absorption coefficient. Φ ^o Φ ^o [1] Φ ^o [2] Φ ^o [N] Φ ^c Φ ^c [1] Φ ^c [2] Φ ^c [N] , where Φ ^0 [i] is the normalized absorbed energy density obtained from PAT, and Φ ^c [i] is the absorbed energy density computed from Eqs. (3) =1, 2…, N locations within the entire PAT reconstruction domain. It should be noted that the reconstruction of the inverse of optical absorption coefficient using Eqs. (3) will make the inverse computation easier. The initial estimate of the inverse of absorption coefficient can be updated based on iterative Newton method as follows, is the Jacobian matrix formed by ∂Φ/∂E inside the whole reconstruction domain including the boundary zone. The practical update equation resulting from Eq. (6) is utilized with In addition to the usual Tikhonov regularization, the PAT image (absorbed energy density map) is used both as input data and as prior structural information to regularize the solution so that the ill-posedness associated with such inversion can be reduced. In our reconstruction scheme, we first segment the PAT image into different regions according to different heterogeneities or tissues types using commercial software. We then employ both the distribution of absorbed energy density in the entire imaging domain and segmented prior structural information for optical inversion. The segmented prior spatial information can be incorporated into the iterative process using the regularization filter matrix, shown in Eq. (7) . In this study, Laplacian-type filter matrix is employed and constructed according to the region or tissue type it is associated based on derived priors. This filter matrix is able to relax the smoothness constraints at the interface between different regions or tissues, in directions normal to their common boundary so that the co-variance of nodes within a region is basically realized. As such, the elements of matrix , is specified as follows is the total node number within one region or tissue. It should be noted the last term in Eq. (6) is not routinely used in the reconstruction and including the term would reduce the sharpness of known edges given a homogeneous initial guess. Thus the absorption coefficient distribution is reconstructed through the iterative procedures described by Eqs. (3) The image formation process described above is tested first using simulated data. The test geometry is shown in Fig. 1 a where a two-dimensional (2D) circular background region (50.8 mm in diameter) contained four circular targets (5.08 mm in diameter each). The optical properties for the targets were =0.04 mm =1.0 mm while optical properties for the background were =0.01 mm =1.0 mm . In the simulation, a homogeneous distributed area source is utilized to illuminate the whole imaging domain from its top surface, which is the same as in our experiments (see below). A total of 120 ultrasound receivers were equally distributed along the boundary of background region. While PAT signals carry a wide range of acoustic frequencies, only 50 frequencies (frequency range: 50~540 kHz) were used for our PAT reconstruction. For the experiments,^12 pulsed light from a Nd: YAG laser (wavelength: 532nm, pulse duration: 3–6ns) were coupled into the phantom via an optical subsystem and generated acoustic signals. The transducer (1MHz central frequency) and phantom were immersed in a water tank. A rotary stage rotated the receiver relative to the center of the tank. The incident optical fluence was controlled below 10mJ/cm^2 and the incident laser beam diameter was 5.0cm. For the first two experiments, we embedded two objects with a size ranging from 2.0–5.5 mm in diameter in a 50.8 or 40.0 mm-diameter solid cylindrical phantom. We then immersed the object-bearing solid phantom into a 110.6 mm-diameter water background. The phantom materials used consisted of Intralipid as scatterer and India ink as absorber with Agar powder (1–2%) for solidifying the Intralipid and India ink solution. The background phantom had µ[a]=0.01 mm^-1 and µ′[s]=1.0mm^-1 while the two targets had µ[a]=0.03 mm^-1 and µ′[s]=2.0 mm^-1 for test 1, and µ[a]=0.07 mm^-1 and µ′[s]=3.0mm^-1 for test 2. In the next two experiments, we placed a single-target-containing phantom into the water, aiming to test the capability of resolving target having different optical contrasts relative to the background phantom. The target size was 1.0 and 2.0mm in diameter for tests 3 and 4, respectively. The target had µ[a]=0.03 mm^ -1 and µ′[s]=2.0 mm^-1 for test 3, and µ[a]=0.015 mm^-1 and µ′[s]=2 mm^-1 for test 4. In the image reconstructions for the four experiments, we assumed scattering coefficient known as constant (1.0mm ^-1). The initial guesses of optical absorption coefficient for the target(s) and background medium were 0.02mm^-1 and 0.01mm^-1, respectively. Although a single transducer is used, the transducer has a bandwidth that allows us to use multiple frequencies by simply Fourier transforming the detected time domain acoustic signals. In this work, 50 frequencies (frequency range: 50~540 kHz) were used for our PAT reconstruction. It required about 30 minutes to finish the two-step reconstruction computation. 3. Results and discussion The results from simulated data are shown in Fig. 1 (a) provides the distribution of optical fluence, Fig. 1 (b) presents the reconstructed absorbed energy density image using the existing PAT algorithm, and Fig. 1 (c) displays the recovered absorption coefficient image with the regularized Newton method. We can see from Fig. 1 (c) that absorption coefficient image can be recovered quantitatively. It is also observed from Figs. 1 (a) and (b) that the influence of the inhomogeneous distribution of photon density on the PAT reconstruction is apparent. There is no linear relation existing between the absorbed energy density and optical absorption coefficient even if the incident distributed source is homogeneous, as demonstrated by Figs. 1 (b) and The results from the first two sets of experiments are shown in Fig. 2 Figs. 2 (a) and (b) present the reconstructed absorption coefficient images of two objects having a size of 2.0 and 3.0mm (test 1), and 5.5mm (test 2) in diameter, respectively, while the recovered absorbed energy density maps for experiments 1 and 2 are also plotted in Figs. 2 (c) and (d) for comparison. We see that the objects in each case are clearly detected. As shown in Table 1 , the recovered absorption coefficients of the target and background are quantitative compared to the exact values for both experiments. By estimating the full width half maximum (FWHM) of the absorption coefficient profiles, the recovered object size was found to be 1.8, 2.7, and 5.0 mm, which is also in good agreement with the actual object size of 2.0, 3.0, and 5.5 mm for experiments 1 and 2. The reconstructed absorption coefficient images for experiments 3 and 4 are shown in Figs. 3 (a) and (b). We immediately see that the different optical contrast levels of the objects relative to the background are quantitatively resolved. It is important to note that our reconstruction method does not need any calibration procedure due to the use of relative incident laser source strength and normalized absorbed energy density distribution where the normalization is performed by simply dividing the absorbed energy density at each nodal location with the maximum absorbed energy density (i.e., Φ=Φr)(r)/ ). An optimization scheme was then applied to search for the boundary conditions coefficient, and the relative source strength as described previously. As such, the reconstruction of optical properties with our algorithms does not depend on the absolute values of absorbed energy density and optical fluence as well as the boundary parameter. For example, even though the values/scales of the absorbed energy density for experiments 1 and 2 are very different as shown in Figs. 2 (c) and (d), the algorithm is still able to recover the absorption coefficient images quantitatively in terms of the location, size, and absorption coefficient value of the objects. In addition, our method is able to resolve the issue of negative absorbed energy density values often seen in conventional PAT. For our previous methods , the negative values must be specified as zero, which may affect the quantitative accuracy of the recovered absorption coefficient images. In this study we demonstrate experimental evidence that it is possible to obtain absolute optical absorption coefficient image using photoacoustic tomography coupled with diffusion equation based regularized Newton method. The methods described are able to quantitatively reconstruct absorbing objects with different sizes and optical contrast levels. This research was supported in part by a grant from the National Institutes of Health (NIH) (R01 CA90533). References and links 1. R.G.M. Kollkman, W. Steenbergen, and T. G. van Leeuwen, “Reflection mode photoacoustic measurement of seed of sound,” Opt. Express 15, 3291–3300 (2007). [CrossRef] 2. R. I. Siphanto, K. K. Thumma, R.G.M. Kolkman, T.G. van Leeuwen, F.F.M. de mul, J.W. van Neck, L.N.A. van Adrichem, and W. Steenbergen, “Serial noninvasive photoacoustic imaging of neovascularization in tumor angiogenesis,” Opt. Express 13, 89–95 (2005). [CrossRef] [PubMed] 3. Z. Chen, Z. Tang, and W. Wan, “Photoacoustic tomography imaging based on a 4f acoustic lens imaging systems” Opt. Express 15, 4966–4976 (2007). [CrossRef] [PubMed] 4. Z. Yuan, Q. Zhang, and H. Jiang, “Simultaneously reconstruction of acoustic and optical properties of heterogeneous medium by quantitative photoacoustic tomography,” Opt. Express 14, 6749–6753 (2006). [CrossRef] [PubMed] 5. R.A. Kruger, D. Reinecke, and G. Kruger, “Thermoacoustic computed tomography-technical considerations”, Med. Phys. 26, 1832–1837(1999). [CrossRef] [PubMed] 6. G. Paltauf, J. Viator, S. Prahl, and S. Jacques, “Iterative reconstruction algorithm for optoacoustic imaging”, J. Acoust. Soc. Am. 112, 1536–1544(2002). [CrossRef] [PubMed] 7. S.J. Norton and T. Vo-Dinh, “Optoacoustic diffraction tomography: analysis of algorihms”, J. Opt. Soc. Am. A 20, 1859–1866(2003). [CrossRef] 8. A.A. Karabutov, E. Savateeva, and A. Oraevsky, “Imaging of layered structures in biological tissues with opto-acoustic front surface transducer”, Proc. SPIE 3601, 284–295(1999). [CrossRef] 9. C.G.A. Hoelen, F.F. de Mul, R. Pongers, and A. Dekker, “Three-dimensional photoacoustic imaging of blood vessls in tissue”, Opt. Lett. 23, 648–650 (1998). [CrossRef] 10. H. Jiang, Z. Yuan, and X. Gu, “Spatially varying optical and acoustic property reconstruction using finite element-based photoacoustic tomography,” J. Opt. Soc. Am. A 23, 878–888 (2006). 11. J. Ripoll and V. Ntziachristos, “Quantitative point source photoacoustic inversion formulas for scattering and absorbing medium,” Phys. Rev. E 71, 031912 (2005). [CrossRef] 12. Z. Yuan and H. Jiang, “Quantitative photoacoustic tomography: recovery of optical absorption coefficient map of heterogeneous medium,” Appl. Phys. Lett. 88, 231101 (2006). [CrossRef] 13. B. Cox, S. Arridge, K. Kostli, and P. Beard, “2D quantitative photoacoustic image reconstruction of absorption distributions in scattering medium using a simple iterative method,” Appl. Opt. 45, 1866–1875 (2006). [CrossRef] [PubMed] 14. L. Yin, Q. Wang, Q. Zhang, and H. Jiang, “Tomographic imaging of absolute optical absorption coefficient in turbid medium using combing photoacoustic and diffusing light measurements,” Opt. Lett. 32, 2556–2558 (2007). [CrossRef] [PubMed] 15. N. Iftimia and H. Jiang, “Quantitative optical image reconstruction of turbid media by use of direct-current measurements,” Appl. Opt. 39, 5256–5261 (2000). [CrossRef] 16. P. Yalavarthy, H. Dehghani, B. Pogue, and K.D. Paulsen, “Wight-matrix structured regularization provide optimal generalized least-square in diffuse optical tomography,” Med. Phys. 34, 2085–2098 (2007). [CrossRef] [PubMed] 17. Z. Yuan and H. Jiang, “Three-dimensional finite element-based photoacoustic tomography: Reconstruction algorithm and simulations,” Med. Phys. 34, 538–546 (2007). [CrossRef] [PubMed] OCIS Codes (170.0110) Medical optics and biotechnology : Imaging systems (170.5120) Medical optics and biotechnology : Photoacoustic imaging ToC Category: Medical Optics and Biotechnology Original Manuscript: September 10, 2007 Revised Manuscript: December 12, 2007 Manuscript Accepted: December 13, 2007 Published: December 18, 2007 Virtual Issues Vol. 3, Iss. 1 Virtual Journal for Biomedical Optics Zhen Yuan, Qiang Wang, and Huabei Jiang, "Reconstruction of optical absorption coefficient maps of heterogeneous media by photoacoustic tomography coupled with diffusion equation based regularized Newton method," Opt. Express 15, 18076-18081 (2007) OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/vjbo/fulltext.cfm?uri=oe-15-26-18076&id=148546","timestamp":"2014-04-23T19:34:41Z","content_type":null,"content_length":"203755","record_id":"<urn:uuid:be6cf1f1-01b0-49f9-84d2-37d77180c124>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Climate Modelling doesn't attempt a systematic introduction to climate modelling, but focuses on the fundamental dynamics of the atmospheric and oceanic circulations, along with approaches to their mathematical formulation and numerical treatment. Some radiation balance models are covered, but there's nothing on atmospheric or oceanic chemistry, vegetation, ice sheets, or other components of the broader earth system. With that limitation, a broad range of models are treated, chosen both for their scientific significance and broader interest and to illustrate key concepts and mathematical tools. Though no implementation details are considered, the treatment is at a fairly low level and assumes familiarity with simple vector and multivariable calculus, differential equations, and fluid mechanics. ( Introduction to Climate Modelling is used as a text for a one semester graduate course, aimed at students who may have only a basic knowledge of climate science but who have a background in physics, or perhaps in engineering or applied mathematics.) A rapid introduction covers the basics of the climate system, the purpose and limitations of modelling and something of its history, some examples of current climate models, and the hierarchy of models (where "more complex" is by no means "better"). A basic zero-dimensional radiative balance model produces a simple analytic solution, which then illustrates the use of finite difference methods in finding "the numerical solution of an ordinary differential equation of first order". Context for this model is given in an overview of the climate sensitivity and the major feedbacks to greenhouse forcing. Turning to energy and matter transport, Stocker looks at equations for diffusion, advection, and advection-diffusion; numerical solutions to a simplified advection equation are then derived. This introduces numerical stability and the Courant–Friedrichs–Lewy criterion, as well as different discretization schemes — "Euler forward in time, centered in space" and so forth — and touches on the problems with non-physical "numerical diffusion" produced by approximations. Turning to energy transport on a larger scale, Stocker presents some simple meridional energy balance models. Atmospheric heat transport can be partitioned into a mean meridional flow and fluxes due to stationary and transient eddies; the ocean heat transport can be partitioned into ocean gyres, meridional overturning circulation, Ekman circulation, and eddy diffusivity. In both cases, "sub-scale transports need to be parametrised due to the limitations imposed by the grid resolution". With the large scale ocean circulation, Stocker considers the equations of motion and continuity, the special case of shallow water equations, and the Stommel model for flows driven by the wind. Some of the mathematical methods introduced include the use of different reference frames, initial and boundary value conditions, iterative methods, and grids; spectral models are briefly touched on. A simplified approach to the general circulation of the atmosphere shows how meridional flows can involve thermally direct (Hadley) and indirect (Ferrel) cells. The Lorenz-Saltzman model is presented as an example of a chaotic system that can generate spontaneous abrupt changes or self-sustained oscillations. Coupled atmosphere-ocean models require modelling of heat, water, and momentum boundary fluxes, with salinity as an additional complication; older models needed flux corrections to prevent drift. A final chapter looks at the possibilities of multiple equilibria — with "tipping points" between them — taking as an example the abrupt changes found in polar ice cores and their possible explanation by a bipolar seesaw in the Atlantic circulation, and applying coupled models to the latter's future under greenhouse warming. The possibility of multiple equilibria in a simple atmospheric energy balance model ("Snowball Earth" scenarios) is also touched on. Introduction to Climate Modelling could be used to lure students from other disciplines to research on circulation models, but is perhaps most valuable for offering a plunge — more than a shallow dip — into the topic for the much broader range of scientists who will use climate models in their work and could benefit from an understanding of their internal complexities. It also makes a nice approach for anyone else with the necessary background in physics and mathematics who wants a better understanding of an area of science which has taken on a role in important public policy August 2012 External links: - buy from Amazon.com or Amazon.co.uk - share this review on Facebook or tweet it to Twitter Related reviews: - books about climate + weather
{"url":"http://dannyreviews.com/h/Climate_Modelling.html","timestamp":"2014-04-19T06:52:10Z","content_type":null,"content_length":"9656","record_id":"<urn:uuid:b1218280-8985-4b54-8311-6234db11b50a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Optimization Problems: PLEASE HELP!!! Click here to go to the NEW College Discussion Forum Discus: SAT/ACT Tests and Test Preparation: October 2003 Archive: Calculus Optimization Problems: PLEASE HELP!!! 1. Find an equation of the line through the point (3,5) that cuts off the least area from the first quadrant. 2. The upper left-hand corner of a piece of paper 8in. long if folded over to the right-hand edge. How would you fold it so as to minimize the fold? In other words, how would you choose x to minimize 3. Where should the point P be chosen on the line sgment AB so as to maximize the angle theta? (figures in my profile) Ok I know it's really hard...i've figured out 3 now so I only need 1 and 2. Hey Delerious if you need math help you can go to sosmath.com Report an offensive message on this page E-mail this page to a friend
{"url":"http://www.collegeconfidential.com/discus/messages/69/29842.html","timestamp":"2014-04-19T19:44:54Z","content_type":null,"content_length":"10455","record_id":"<urn:uuid:b30b6e5c-08ec-44ab-a3aa-ace9d58f3b8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
College Park, GA Math Tutor Find a College Park, GA Math Tutor ...I have tutored for more than 10 years and have helped hundreds of students improve their test scores and grades and I have been rewarded as a top 100 WyzAnt tutor nationwide (50,000+ tutors). I have a Ph.D. from one of the top universities in the country and have received several awards and scho... 19 Subjects: including calculus, algebra 1, algebra 2, geometry I have a BS and MS in Physics from Georgia Tech and a Ph.D. in Mathematics from Carnegie Mellon University. I worked for 30+ years as an applied mathematician for Westinghouse in Pittsburgh. During that time I also taught as an adjunct professor at CMU and at Duquesne University in the Mathematics Departments. 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have tutored several students in math in physics including Algebra and have seen very positive results. I earned a BS in math and physics from the University of Alabama in Huntsville and a MS in physics from Georgia Tech. I am currently working on a PhD in physics at Georgia Tech. 11 Subjects: including calculus, trigonometry, precalculus, algebra 1 ...I have taught all levels of math, from pre-algebra to AP calculus BC, including IB HL and SL Math, and AP Physics - Mechanics. I also have extensive experience in test preparation, including PSAT, SAT, GRE and GHSGT. I am respected and work well with my students, having been selected STAR teacher by our top student. 10 Subjects: including algebra 1, algebra 2, calculus, prealgebra Hello! My name is Ben and I am a 24-year-old student pursuing a degree in Computer Science. My strengths include math, writing, reading and basic computer software (Microsoft Office, Adobe Creative Suite).I have a Bachelor of Science in Communication and have worked for small marketing firms in the past, where I was responsible for writing and editing content. 14 Subjects: including algebra 2, writing, algebra 1, prealgebra Related College Park, GA Tutors College Park, GA Accounting Tutors College Park, GA ACT Tutors College Park, GA Algebra Tutors College Park, GA Algebra 2 Tutors College Park, GA Calculus Tutors College Park, GA Geometry Tutors College Park, GA Math Tutors College Park, GA Prealgebra Tutors College Park, GA Precalculus Tutors College Park, GA SAT Tutors College Park, GA SAT Math Tutors College Park, GA Science Tutors College Park, GA Statistics Tutors College Park, GA Trigonometry Tutors Nearby Cities With Math Tutor Atl, GA Math Tutors Atlanta Math Tutors Decatur, GA Math Tutors Douglasville Math Tutors Dunwoody, GA Math Tutors East Point, GA Math Tutors Forest Park, GA Math Tutors Hapeville, GA Math Tutors Mableton Math Tutors Morrow, GA Math Tutors Riverdale, GA Math Tutors Roswell, GA Math Tutors Sandy Springs, GA Math Tutors Smyrna, GA Math Tutors Union City, GA Math Tutors
{"url":"http://www.purplemath.com/college_park_ga_math_tutors.php","timestamp":"2014-04-19T09:47:30Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:374ab3fa-43ea-4ccd-a692-44fa41080f42>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Meeting Times 2 sessions per week / 1.5 hours per session 18.100 Analysis; and one of 18.06 Linear Algebra, 18.700 Linear Algebra, or 18.701 Algebra I. Course Description The two primary goals of many pure and applied scientific disciplines can be summarized as follows: i. Formulate/devise a collection of mathematical laws (i.e., equations) that model the phenomena of interest. ii. Analyze solutions to these equations in order to extract information and make predictions. The end result of i) is often a system of partial differential equations (PDEs). Thus, ii) often entails the analysis of a system of PDEs. This course will provide an application-motivated introduction to some fundamental aspects of both i) and ii). In order to provide a broad overview of PDEs, our introduction to i) will touch upon a diverse array of equations including a. The Laplace and Poisson equations of electrostatics; b. The diffusion equation, which models e.g. the spreading out of heat energy and chemical diffusion processes; c. The Schrödinger equation, which governs the evolution of quantum-mechanical wave functions; d. The wave equation, which models e.g. the propagation of sound waves in the linear acoustical approximation; e. The Maxwell equations of electrodynamics; and other topics as time permits. In our introduction to ii), we will study three important classes of PDEs that differ markedly in their quantitative and qualitative properties: elliptic, diffusive, and hyperbolic. In each case, we will discuss some fundamental analytical tools that will allow us to probe the nature of the corresponding solutions. Required Text Salsa, Sandro. Partial Differential Equations in Action: From Modelling to Theory. Springer, 2010. ISBN: 9788847007512. [Preview with Google Books] Homework Assignments Homework is perhaps the most important component of this course: it provides you with regular feedback on whether or not you are keeping up with the material, and it challenges you to creatively apply what you have already learned. There will be an assignment almost every week. Homework assignments will typically be posted on the course website on Thursday and due at the start of class on the following Thursday. No late assignments will be accepted. Your lowest homework score will not factor into your grade. For your own benefit, I encourage you to learn how to use the typesetting program LaTeX to type up your homework. However, you can turn in neatly handwritten assignments if you prefer. Your homework will be graded both on correctness and on the quality of your written Policy on collaboration: Collaboration is an important component of your mathematical and personal development, and I encourage you to work with your classmates. By "work with," I mean that every member of a collaborative effort is expected to be an active contributor. The version of the homework that you turn in must be written in your own words and your own writing style, and you must fully understand the written arguments; copying someone else's homework line by line is plagiarism. Also, at the top of every homework assignment in which you collaborate, write the names of the people you worked with. Policy on citations: It is natural to consult resources when you get stuck on a problem. If you use a resource (e.g. Wikipedia, a textbook, a journal article, etc.), you must cite it using one of the standard citation styles. Indicate the title, author, volume number, year, and page number (or web address if appropriate) of your references. There will be a single 90 minute midterm exam held in class. There will be no homework due that week. Both the midterm and the final will be closed book, closed notes. The final exam will be cumulative with a slight emphasis on the material covered after the midterm. The breakdown of your final grade is as follows: Grading criteria. Homework 30 % Midterm Exam 30 % Final Exam 40 %
{"url":"http://ocw.mit.edu/courses/mathematics/18-152-introduction-to-partial-differential-equations-fall-2011/Syllabus/","timestamp":"2014-04-18T16:23:55Z","content_type":null,"content_length":"32473","record_id":"<urn:uuid:aa5ecf51-963d-4b28-9677-d61dbaf4b0c4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Projectile Motion (while loop) - HELP NEEDED June 7th, 2007, 04:21 PM #1 Junior Member Join Date Jun 2007 Projectile Motion (while loop) - HELP NEEDED Hi everyone.... I'm new to the forum and new to Java programming.... I have a project with the following info. 1. A projectile is fired with a given initial velocity at a launch angle of 45 degrees. If the trajectory is too short a message will be displayed such that with the given gunnery the target is out of range. 2. If the trajectory overshoots, then the new launch angle will be half of the previous launch angle. 3. If the trajectory is too short (but not at the very first instance) then the new launch angle will be half way between the current angle and the last angle of overshooting. 4. The steps are repeated as long as the projectile hits the target, considering the given precision. 3 Classes (ProjectileMain - main method, does the calling, Projectile - calculations, Gunnery - handles the above algorithm and outputs message). My code is finished but it seems that the while loop in the Gunnery class (iterateShooting method) is not working properly. Any help would be appreciated. import java.util.*; import javax.swing.*; public class ProjectileMain { // The main method obtains the input values, instantiates a Gunnery object with the // input values and calls the checkRange method of the Gunnery class. public static void main(String [] args) { String input; // Input string used for initialVelocity, distanceToTarget, and precison. // Obtain initialVelocity, distanceToTarget, and precision utilizing a GUI input window. input = JOptionPane.showInputDialog (null, "Enter the initial velocity (feet/sec)," +"\nthe required precision of the hit," +"\nand the distance to the desired target (in feet)" +" seperated by spaces"); // Convert the string to tokens and instantiate a Gunnery object with the input values. // The checkRange method is called which results in either a error message or final output message. StringTokenizer st = new StringTokenizer(input); while (st.hasMoreTokens()) Gunnery gunnery = new Gunnery (Double.parseDouble(st.nextToken()), Double.parseDouble(st.nextToken()), } // end method main } // end class ProjectileMain import java.text.*; public class Projectile { private static final double GRAVITATION = 32; // The gravitational force (feet/sec). // The method toRadians takes the degree value of a given angle as a parameter; // the method computes and returns the value of that angle in radians. public static double toRadians(double angle) { return (angle * (Math.PI/180)); } // end method toRadians // The method flightTime computes and returns the flight time of the trajectory with // the given launch angle and initial velocity. public static double flightTime (double angle, double velocity) { DecimalFormat df = new DecimalFormat ("0.00"); angle = toRadians(angle); return Double.parseDouble (df.format (2 * ((velocity * Math.sin(angle)) / GRAVITATION))); } // end method flightTime // The method distanceTraveled computes and returns the total distance traveled by the projectile. public static double distanceTraveled (double angle, double velocity) { DecimalFormat df = new DecimalFormat ("0.00"); return Double.parseDouble (df.format (velocity * Math.cos(toRadians(angle)) * flightTime(angle, velocity))); } // end method distanceTraveled } // end class Projectile import javax.swing.*; import java.text.*; public class Gunnery { private double distanceToTarget; // The distance to the target in feet. private double initialVelocity; // The initial velocity. private double precision; // The required precision of the hit. private int counter; // Counts the number of attempts until the target was hit // The method checkRange checks if, for launch angle = 45 degrees, the distance traveled plus the // precision is less than the distance to target, the method sends out a message of failure // and terminates the process; otherwise it calls iterateShooting() with 45 degrees. public void checkRange() { if ((Projectile.distanceTraveled(45, initialVelocity) + precision) < distanceToTarget) JOptionPane.showMessageDialog (null, "Target beyond reach! Choose a bigger gun!\n" +"Program exits."); else iterateShooting (45); } // end method checkRange // The method iterateShooting runs a while loop until precision is attained; then calls // displayResults() with the last distanceTraveled, the hitting angle and the number of attempts. public void iterateShooting (double angle) { double updateAngle = angle; counter = 0; while (Projectile.distanceTraveled(angle, initialVelocity) != (distanceToTarget + precision)) // If the trajectory overshoots, then the new launch angle will be half of the previous launch angle. if (Projectile.distanceTraveled(angle, initialVelocity) > (distanceToTarget + precision)) updateAngle = angle; angle = angle / 2; // If the trajectory is too short, then the new launch angle will be half way between the current angle // and the last angle of overshooting. else if (Projectile.distanceTraveled(angle, initialVelocity) < (distanceToTarget + precision)) angle = (angle + updateAngle) / 2; } // end while displayResults(Projectile.distanceTraveled(angle, initialVelocity), angle, counter); } // end method iterateShooting // The method displayResults composes and displays an output message on a JOptionPane window // based on paremeter values. public void displayResults (double distance, double angle, int counter) { DecimalFormat df = new DecimalFormat ("0.00"); JOptionPane.showMessageDialog(null, "Target was hit\n\n Initial velocity = " +initialVelocity +"\nPrecision: " +precision +"\n Distance to target: " +distanceToTarget +"\nDistance traveled by the projectile: " +distance +"\n The hitting angle: "+Double.parseDouble(df.format(angle)) +"degrees" +"\nThe number of attempts: "+counter); } // end method displayResults public Gunnery () { } // end default constrictor Gunnery // The non-default constructor Gunnery takes three double parameters and initializes the // initialVelocity, distanceToTarget, and the precision data fields. public Gunnery (double velocity, double accuracy, double distance) { initialVelocity = velocity; precision = accuracy; distanceToTarget = distance; } // end constructor Gunnery } // end class Gunnery Re: Projectile Motion (while loop) - HELP NEEDED For debugging (well you should learn how to use the debugger...) But moving: displayResults(Projectile.distanceTraveled(angle, initialVelocity), angle, counter); INSIDE the while loop will reveal what is happening... TheCPUWizard is a registered trademark, all rights reserved. (If this post was helpful, please RATE it!) In theory, there is no difference between theory and paractice; in practice there is. * Join the fight, refuse to respond to posts that contain code outside of [code] ... [/code] tags. See here for instructions * How NOT to post a question here * Of course you read this carefully before you posted * Need homework help? Read this first Re: Projectile Motion (while loop) - HELP NEEDED Thanks for the reply.....It would be nice to know how to use a debugger.... I'll probably have to wait until my next Java class. It seems that there is something screwed up in my calculation area (possible rounding) as the loop never stops...... with the input values velocity = 100, precision = 0.15, distance to target = 301, the loop should stop after 21 attempts at 37.26 degrees (hitting angle) with a distance traveled by the projectile of 301.15 (mine shows 300.87 at this stage).... I will have to look into my calcs....... THANKS FOR THE HELP EDIT: Figured it out.... I was rounding the value in the flightTime method when I shouldn't have been...... WOW... something that easy took me THAT LONG to figure out....!!! Last edited by magic507; June 7th, 2007 at 06:46 PM. June 7th, 2007, 04:23 PM #2 Elite / Microsoft MVP Power Poster Join Date Mar 2002 NY, USA June 7th, 2007, 06:02 PM #3 Junior Member Join Date Jun 2007
{"url":"http://forums.codeguru.com/showthread.php?425585-Projectile-Motion-(while-loop)-HELP-NEEDED","timestamp":"2014-04-19T15:22:54Z","content_type":null,"content_length":"83012","record_id":"<urn:uuid:0eb64919-8f42-45c9-bdb2-1f9b10481be2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. Why do this problem? Dominoes are a great resource and this problem is an intriguing way to use them. Not only does this activity require logical thinking but it is also an interesting way of practising addition and subtraction, and it provides an opportunity to talk about different ways of recording. Possible approach If you have an interactive whiteboard, you may find our Dominoes Environment useful for this problem. You could introduce the challenge by laying out ten large dominoes in a square on the floor (it does not matter which dominoes go where). Ask the class to gather round and ask a few questions about the sum of dots on each side so that learners understand how the corner spots are counted in both the horizontal side and the vertical side. Introduce the problem itself and ask pairs of children to talk for a minute or two about how they might tackle the problem. Share some of their suggestions among the whole group before giving them time to work in their pairs with dominoes. Using real dominoes whenever possible would be advantageous, but you can use this sheet a standard set of dominoes to be cut out. Squared paper would also be useful for jottings and recording. As well as talking about the solutions in the plenary, you could focus on how children recorded their solutions. Some may well have just used the dominoes and moved them around as they went but how did they keep track of what they had tried? Some may have jotted down pictures of different arrangements. It would be useful to have a conversation about what ways of recording are most useful in this context. Key questions Tell me what you have done so far. What do the numbers on this side add to? What do you need to make eight? What could you try instead? Possible extension Use the 'double 4 down' dominoes to make a rectangle with equal numbers of dots on each side. Repeat with 'double 5 down' etc. What numbers of dominoes can be made into a true square? Explore the numbers that emerge and explain why certain numbers of dominoes cannot be made into a square. Possible support Use real dominoes and sort out the '3 spot down' ones and use them to make a square. Then count the dots on the sides and work on the problem on a 'trial and improvement' basis. You could start with the 'double 2 down' dominoes making each side add to 16 and using a square like this:
{"url":"http://nrich.maths.org/146/note?nomenu=1","timestamp":"2014-04-18T11:03:09Z","content_type":null,"content_length":"6515","record_id":"<urn:uuid:e6829b29-82be-446e-9bba-6814408060fb>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Surveying method for locating target subterranean bodies - Patent # 4957172 - PatentGenius Surveying method for locating target subterranean bodies 4957172 Surveying method for locating target subterranean bodies (8 images) Inventor: Patton, et al. Date Issued: September 18, 1990 Application: 07/317,634 Filed: March 1, 1989 Inventors: Foster; C. Mackay (Burleson, TX) Patton; Bob J. (Dallas, TX) Assignee: Patton Consulting, Inc. (Dallas, TX) Primary Novosad; Stephen J. Attorney Or Matthews & Branscomb U.S. Class: 175/45; 175/61; 324/346 Field Of 175/45; 175/40; 175/50; 175/61; 324/346; 324/323; 324/338; 324/339; 340/853; 33/302; 33/304; 166/250 U.S Patent 3282355; 4072200; 4372398; 4443762; 4480701; 4700142 Abstract: An improved system for use in drilling a relief well to intersect a target blowout well. A probable location distribution is used to survey the location of the candidate relief wells and the blowout well. Through the use of the relative probable location distribution, the integral probabilities of find, intercept and collision are calculated. A relief well plan is then optimally designed to drill and insure a high integral probability of a find and intercept and a low probability of a collision. The method provided by the present invention allows a relief well to be drilled in a minimum time with minimum risk exposure. Claim: What is claimed is: 1. A method of drilling a relief well for intersection with a blowout well for the purpose of killing said blowout well, comprising the steps of: collecting survey data relating to the blowout wellbore surface location and the borehole path of said blowout wellbore; determining a first set of error coefficients for said survey data for said blowout wellbore; collecting survey data relating to the surface location of a relief wellbore and the borehole path of said relief wellbore; determining a second set of error coefficients for said survey data for said relief wellbore; using said first and second sets of error coefficients to calculate a relative probable location distribution describing the location of said blowout wellbore relative to the location of said relief wellbore at successive depths; using said relative probable location distribution at said successive depths to calculate an integral probability of find for each said depth, said integral probability of find being the probability of locating said blowout wellbore using asearch tool in said relief wellbore; and drilling said relief wellbore along a path having a maximum integral probability of find, such that said relief wellbore intersects said blowout wellbore. 2. The method of claim 1 wherein said first and second sets of error coefficients include both random and systematic errors associated with said survey data for said blowout well and said relief well. 3. The method of claim 2, wherein said first and second sets of error coefficients further include surface survey location errors. 4. The method of claim 3 wherein said relative probable location distribution is calculated using a normal distribution. 5. The method of claim 4 wherein said step of calculating said integral probability of find further comprises the step of dividing said relative probable location distribution at each depth into sectors and summing sectors of said distributionwhich are included in the searched path of relief wellbore. Description: FIELD OF THE INVENTION The present invention relates generally to a method and apparatus for locating target subterranean bodies. More specifically, the present invention provides a method and apparauts for using a relative probable location distribution searchingtechnique in order to locate and kill a blowout well in minimum time with minimum risk exposure. As the easily exploited hydrocarbon energy sources have been depleted, oil and gas wells have been drilled to ever deeper depths and have required more complex technology. Much of the current drilling activity is conducted from off-shoredrilling platforms which often support twenty or more wells. All but one of the wells drilled from such a platform are necessarily deviated from the vertical axis. Oil and gas wells are drilled into a reservoir of oil or gas wherein the reservoir generally consists of a porous rock which is filled with hydrocarbon liquids, hydrocarbon gases, water, and sometimes other liquids and gases. The pressure in thereservoir is considered "normal" when it is equal to the pressure exerted by a column of water extending from the surface to the reservoir depth. Petroleum reservoirs are often over-pressured below certain depths and can be under-pressured whendepleted. When a well is drilled into a reservoir, the reservoir fluids tend to flow into the wellbore and up to the surface unless the pressure exerted by the column of fluid in the wellbore exceeds the reservoir fluid pressure. Well bore fluid weightis, therefore, extremely important in well control. A "blowout" is defined as a fluid flow from the reservoir which is not under control-either to the surface or to another underground reservoir. Wells are normally drilled with a liquid in the wellbore called "mud" which is composed of either a water or oil phase carrier and solid components to give the mud viscosity and extra weight or pressure. Blowouts generally occur when the mudweight is too low (below reservoir pressure) due most often to too low a solids content or dilution by produced liquids, notably gas, which lowers the mud weight. Gas dilution blowouts are generally the worst because of the extreme lowering pressure andfire hazards. Offshore platform blowouts are much harder to control than land blowouts due to the logistics and personal danger. There are typically about 160 reported blowouts per year, most of which are controlled within a few days largely by naturalprocesses such as bridging. About thirty percent are controlled by surface capping and typically within thiry days. About five blowouts per year require relief wells to control. The term "relief well" is a historical term and is actually a misnomer when applied to modern kill wells today. Until about 12 years ago when search methods were developed, relief wells had a very small chance of intersecting the blowout. Consequently, the "relief method" was used to control blowout wells. The relief method involves the drilling of multiple producing wells in the vicinity of the blowout to allow the production from these wells to "relieve" the reservoir pressure. Hencethe term relief well. As was mentioned above, until recently relief wells had a very small chance of intersecting a blowout because of inadequate search methods. Search methods are heavily dependent on accurate surveys of the relief wellbore. Two angles are used todescribe the direction of a well: (1) inclination (often called drift angle) is the angle between the borehole and the vertical axis which is defined by gravity; (2) azimuth is the horizontal directional component of the well which is measured clockwisefrom true geographic north. Directional drillers often refer to the azimuth as the direction and use a quadrant system of notation such as N85:30E or S80:00E. These two directions are mostly east and 141/2 degrees different. The equivalent azimuthstatements are 85.5 and 100.0 degrees. Wells which are deviated from the vertical axis are represented by maps or plots. There are two common views of a deviated well: (1) the plan or horizontal view which is a projection of the well path on the horizontal plane with North-South andEast-West axis; and (2) the section view which is a projection of the well path on a vertical plane, usually a plane closest to the average horizontal direction of the well path. Deviated wells are also described by "build" and "drop" rates. The buildand drop rates refer to the rate at which the inclination (or drift) is increased or decreased, respectively. The rates are normally quoted in degrees per hundred feet. Typical rates are 1-4 degrees per hundred feet. In addition, the rate of curvatureof a deviated well is called "dogleg severity." In the past, changes in azimuth or direction were not made except to "correct" the direction of a well which had deviated from the planned two dimensional course. Such corrections turn left or right and have the same rate restrictions as buildor drop. Normally, build or drop corrections are not mixed with left and right corrections, but, are executed indpendently. Modern "bent housing" downhole motors make drilling in three dimensions more practical than drilling than the previous "bentsub" methods because of the greatly reduced length below the bend. Normal directional drilling is still basically two dimensional. The survyeing and drilling system provided by the present invention is fundamentally a three dimensional process which is extremely important for the drilling of relief wells. As will be discussed in greater detail below, the invention planningsystem is capable of extreme precision in directing the relief well to an exact three dimensional target. The three dimensional quality generates less total curvature than previous surveying methods, thus representing a major improvement over the priorart. By contrast, state of the art directional drilling planning has previously been geared to hitting large targets usually greater than 100 feet across, which do not require precision planning. Until approximately 1975, there were no surveying systems which were capable of providing an accurate quantitative measurement of the direction and distance to a blowout well from the well bore of the relief well. Until 1975, conventionalwireline formation logging tools were used in relatively unsuccessful attempts to guide the relief well to the blowout well. The most successful systems used until that time were based on the Ulsel log, a long spaced resistivity log which was used inconjunction with special sonic detectors. The Ulsel log could be used to detect the blow out well casing, but provided a very poor range estimate and absolutely no directional information. Furthermore, the sonic detectors could detect the sound in thevicinity of high gas production and could detect the depth of the blowing formation, but provided very poor ranging and no directional information. U.S. Pat. No. 4,072,200 issued Feb. 7, 1978, to Morris et al discloses a device for detecting the static magnetization of tubulars in a blowout well from a wireline tool in the relief well. This device has been used in approximately 90previous cases wherein it was necessary to located a remote well. The device disclosed in the Morris patent, sometimes referred to as "MagRange.TM.", detects magnetic monopoles normally associated with tubular (either casing or drill collars) joints inthe blowout wellbore. The occurrence and distribution of poles is virtually random, making the reliability of detection uncertain at a given joint and generally limited to the 30 or 40 foot joint spacing. The range from a joint is typically 25 feet butvaries from virtually zero up to approximately 50 feet. The range from the end of the casing or drill pipe is much higher, on the order of 100 feet. Another surveying technique, disclosed in U.S. Pat. No. 4,529,939 issued on Jul. 16, 1985, to Kuckes, is based on an induction magnetic method. In the Kuckes method, alternating current (1 Hz) is injected into the earth from a wireline toolin the relief well. At the end of the wireline, typically 350 feet below the current injector, two vector magnetic sensors mounted mutually perpendicular to each other, and perpendicular to the borehole, synchronously (with the injected current) detectmagnetic fields emanating from the blowout tubulars due to the current having collected in the tubulars and flowing along the longitudinal axis of the respective tubulars. This method has a range of between 100 and 200 feet, depending on the resistivityof the formations. It also has an improved accuracy with respect to the determination of direction. The range estimate based on the Kuckes method has an approximately accuracy of between 20 and 50 percent, depending on the distance. The two survey tools described above have significantly improved the art of drilling relief wells to intersect and kill a blowout well. Despite these advances, however, significant difficulties remain with respect to navigation of the reliefwellbore. In particular, surveying error of only a fraction of a degree can result in significant deviations from the desired target at depths of two miles or more. Numerous errors can seriously complicate efforts to kill a blowout well by drilling a relief well. In theory, the use of an off vertical relief well to intersect the blowout could be achieved accurately if the location of both the reliefwellbore and the blowout wellbore could be known with sufficient accuracy. In practice however, the actual location of the blowout wellbore is rarely known with sufficient accuracy. Numerous errors are incorporated into the logging of the off verticaldeviations during the drilling of the well. In general the types of errors which can be encountered with the location of the blowout wellbore are the following: (1) errors in the surface survey location; (2) random errors in the directional surveys; and(3) systematic errors in the directional surveys. Various authors have previously recognized individual errors which might be encountered in determining the location of a wellbore. For example, in an article entitled "Borehole Position Uncertainty--Analysis of Measuring Methods and Derivationof Systematic Error Model", Journal of Petroleum Engineering and Technology, Dec. 1981, pages 2339-50, Wolff and De Wardt, discuss systematic errors which are often incorporated into direction surveys of a wellbore. In addition, in another article,"Analysis of Uncertainty in Directional Drilling," Journal of Applied Petroleum Apr. 1969, Walstrom, Brown and Harvey, discuss random errors which can significantly affect the accuracy of directional surveys of a wellbore. The errors described in theabove mentioned articles apply to both the target blowout wellbore and to the relief wellbore. Although the above mentioned articles are useful to the extent they describe two types of errors which contribute to uncertainty as to the location of therespective wellbores, the art has heretofore lacked a teaching of a method for combining these uncertainties to provide a more effective surveying system for using relief wells to kill blowout wells. Furthermore, the prior art surveying techniques havefailed to adequately incorporate errors related to the surface survey location. The infamous Ixtoc 1 is an example case where the error in the surface site location, later measured to be 224 feet, delayed the kill of the blowout by several months. Thesurface site of the relief well is typically much smaller than that of the original blowout wellbore, principally due to greater care in documenting the location of the relief well. In view of the foregoing discussion, it is evident that an accurate method for determining the relative locations of the original blowout wellbore and the relief wellbore is needed. More specifically, it is apparent that there is a need for amore effective surveying system which is capable of combining errors in the surface survey location with random errors and systematic errors related to directional surveys. The surveying system of the present invention, as described in greater detailbelow, provides a relative probable location distribution (RPLD) which includes an estimate of surface site errors and the systematic and random errors due to directional surveys of both the blowout and relief wells. SUMMARY OF THE INVENTION The present invention overcomes the difficulties of the prior art by providing an improved surveying system for drilling a relief well to intersect a target blowout well. One of the principal advances over the prior art provided by the presentinvention is the use of a probable location distribution for surveying the location of the candidates relief wells and the blowout well. Through the use of the relative probable location distribution, the integral probabilities of find, intercept andcollision are calculated. A relief well plan is then optimally designed to be safe, easy and fast to drill and insure a high integral probability of a find and intercept and a low probability of a collision. After the relief well is spudded, the drilling progress of the wellbore is continually monitored, directional surveys are processed, and the relative probable location distribution is continuously calculated. When the relief wellbore is in thepreplan position for the optimum first search, the first search is run. When the "find" is made, the relative probable location distribution is set equal to the error probabilities of the search, which is usually small, and the relief well path to thetarget position is planned. The method provided by the present invention allows a relief well to be drilled in a minimum time with minimum risk exposure. As a result, the present invention makes it possible to avoid many of the catastrophic problems associated with blowoutwells, in particular, loss of life, physical property loss, energy reserve loss and pollution of the environment. Furthermore, the present invention minimizes risks associated with unwanted or untimely collision of relief well with the blowout well,which could result in the relief becoming a blowout well also. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is an illustration of a relief wellbore containing an induced magnetism search tool for locating a blowout wellbore. FIG. 2 is an illustration of a relief wellbore containing a static magnetism search tool for locating a blowout wellbore. FIG. 3 is a process flowchart describing the process for obtaining the relative probable location distribution of the present invention. FIG. 4 is a geometrical illustration of the process of determining the relative probable location distribution of the present invention. FIG. 5 is a geometric description of the relationship of the terms used in the calculation of the relative probable location distribution of the present invention. FIG. 6 is an illustration of a sector method for calculating the integral probability of find for the method of the present invention. FIG. 7 is an illustration of a path method for calculating the integral probability of find for the method of the present invention. FIG. 8 is an illustration of a vertical section showing the well profiles of a blowout wellbore and a relief wellbore in a vertical plane. FIG. 9 is an illustration of a plan view showing the well profiles of a blowout wellbore and a relief wellbore in a horizontal plane. FIG. 10 is an illustration of the compare view used in the method of the present invention. FIG. 11 is an illustration of an expanded view of the vertical section showing the well profiles of a blowout wellbore and a relief wellbore in a vertical plane. FIG. 12 is an illustration of an expanded view of the plan view showing the well profiles of a blowout wellbore and a relief wellbore in a horizontal plane. FIGS. 13a-c are illustrations of compare views of the relative probable location distribution at various depths. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Surveying System The method and apparatus of the present invention is not limited to any particular type of searching tool. However, in order to better understand some of the concepts which will be discussed hereinbelow, reference is made to FIGS. 1 and 2 whichshow two common types of search tools. FIG. 1 is an illustration of an induced magnetism search tool used to search the area around the relief well for conductive tubulars in the blowout well. FIG. 2 is an illustration of a static magnetism search toolused to search the area around the relief well for magnetic poles located in the magnetic tubular in the blowout well. Referring to FIG. 1, a blowout wellbore 10 is shown with the wellbore being defined by a conductive tubular 12. A relief wellbore 14is shown having a wellbore path designed to intersect the blowout wellbore 10. A wireline search tool 16 is contained within the relief wellbore. The wireline search tool operates by producing AC current injection as shown in FIG. 1 to induce an ACcurrent along the tubular collar 12 of blowout wellbore 10. Over the relatively short distances involved, the AC current in the tubulars may be considered to be flowing along a substantially straight line; consequently, the associated AC magnetic fieldhas a cylindrical form where the blowout wellbore is the axis. The AC magnetic field sensors 18 located in the relief wellbore 14 measure the said cylindrical AC magnetic field 20 in the plane perpendicular to the axis of the blowout well. Thesemagnetic field data are used to calculate the distance and direction in the said plane from the blowout wellbore to the relief wellbore. The orientation of the plane will be discussed in greater detail below in connection with the "compare view" plane. Referring to FIG. 2, a blowout wellbore 10 is again shown with a relief wellbore 14 designed to intersect the blowout wellbore 10. The wireline search tool 16a used in the static magnetism search method comprises a plurality of static magneticfield vector sensors 18a. These static magnetic sensors measure the static magnetic field associated with the magnetic poles which generally exist at mechanical joints in the blowout wellbore tubulars. These magnetic field measurements are made at aplurality of depths in the relief wellbore. The resulting profile of the static magnetic field as a function of depth in the relief wellbore is used to calculate the distance and direction in a defined plane from the relief wellbore to the blowoutwellbore. Surveying systems such as those discussed above are shown generally in U.S. Pat. Nos. 4,072,200; 4,372,398; and 4,529,939, which by this reference are incorporated herein for all Search Scheme The principal requirement of an efficient search scheme is to continuously and efficiently search in previously unsearched areas of the relative probable location distribution, discussed in greater detail below, while keeping track of thepreviously searched areas and summing the probabilities of a find until the total grows to a very high percentage. The probability of detecting a blowout at any given location is the portion of the probability density covered by the search radius of thesearch tool. The total probability covered depends upon the radius of the search and probability density in the covered area of the relative probability location distribution. This is the probability of detection at this single depth. Ideally, thesearch path of a relief well is designed so that the well progresses to successive depths, the area covered by the search tool is a different portion of the relative probability location distribution which has not previously been investigated. Consequently, as the search tool is pulled along the relief wellbore to different depths, new areas of the relative probability location distribution are covered by the search radius of the search tool. The new areas of probability are summed as thetool is pulled over different depths to give the integral probability of find to the depth logged. By properly designing the search path of a relief well, this integral probability of find can be made as large as desired, approaching one hundredpercent. One of the principal difficulties in perceiving the search path concept described above is related to an understanding of how new areas of the relative probability location distribution are known to be searched. When directional surveys areavailable for both the blowout well and the relief well, the change in the expected relative position for the two wells is described by the change in the calculated well profiles with depth and the error in this change is represented by increases in therelative probability location distribution. The growth of the relative probable location distribution is generally less than proportionate with the percentage change in well profile position. Consequently, the error in the change may be considerednegligible over reasonable distances along a search path, which is short relative to the entire relief well depth. For cases where there are no directional surveys for the blowout well, it is generally sufficient to assume that the blowout wellbore is straight ahead over the distance of a search path. This assumption is generally valid since directionalsurveys are required in all intentionally off vertical wellbores. Probe Location Distribution (PLD) The probable location distribution (PLD) is a quantitative description of where the wellbore is located in statistical terms. Prior art discussions of uncertainty of the location of a wellbore sometimes refer to "an ellipse of uncertainty."However, the ellipse of uncertainty should not be confused with the probable location distribution, nor the relative probable location distribution discussed below. The term probable location distribution, as is used here, is intended to provide a morecomplete, accurate, and positive term and should be distinguished from the prior art standards. Wellbore location profiles are determined by measuring the direction, both the inclination and azimuth, of the wellbore from top to bottom at intervals of depth, typically between thirty and one hundred feet. The well profile is then computedfrom these directional data using one of several algorithms known in the art, including average angle, tangential, balanced tangential, radius of curvature and minimum curvature. The minimum curvature algorithm is preferred for use in the system of thepresent invention. As is the case with all physical measurements, the directional measurements discussed above contain errors. Walstrom, et al, discussed above in the background section, recognized random type errors and provided an analysis called the ellipse ofuncertainty. The elilipse grows as the well gets deeper, but grows slowly after a large number of measurements, due to the random nature of the error. Wolff et al recognized a much more important form of error, called systematic error. The major difference between systematic and random error is that systematic erorrs generally accumulate proportionate with distance, leading to much largerellipses in deep, deviated wells. The Wolff et al anlaysis includes systematic errors of the various wellbore survey instruments and sums these errors over the depth of the well. Although Wolff et al provided an analysis of systematic errors, theiranalysis did not recognize the use of random error as discussed above. Furthermore, the Wolff et al analysis did not utilize the quantitative distribution nature of the ellipse, but, rather, preferred to treat the ellipse as if it were a boxcardistribution or fence containing all of the error of where the well might be. In addition to the failure to combine random and systematic errors, no previous system for analyzing position error has taken into account errors in the surface site location. The surveying system of the present invention is capable of providing a composite probability location distribution based on random errors, systematic errors, and all other known location errors, most notably, the survey error in the surface sitelocation and drill ship positioning error, when applicable. In the surveying system of the present invention, a programmable processor is used to accumulate variances of each of the above discussed errors. The inputs to the accumulator include: (1) random error accumulation over any section ofdirectional survey; (2) systematic error accumulation over any section of directional survey; (3) any known error such as surface site survey and drill positioning error can be manually input either as a covariance array or as principal axes of theellipsoid. When all of the above discussed errors have been input to the system, the probable location distribution accumulator contains a covariance array which represents the probable location distribution to the depth entered. The processor can be usedto provide an output of the probable location distribution in surface coordinates or in any downhole coordinate system desired. For example, it can be used to provide an output of the probable location distribution as an ellipse in a plane perpendicularto the axis of either the blowout well or the relief well. Normally, in the preferred embodiment, error coefficients are input as standard deviation (one sigma) values to the probable location distribution. In the system of the present invention, a"compare" program can be used to produce a plane perpendicular to the axis of a chosen reference well, and any number of ellipses can be entered representing multiples of the PLD sigmas. These ellipses then represent the probable location distributionof the reference well about its axis. Relative Probable Location Distribution (RPLD) The surveying system of the present invention utilizes a relative probable location distribution (RPLD) which is an extremely powerful aid in the quantification of the relative location of the relief wellbore to the blowout wellbore. Thisrelative probable location distribution represents a significant advance in the art, since it incorporates all of the errors discussed above and provides a composite estimate of the error of estimating each of the wellbores relative to each other. Mathematical Description of the Relative Probable Location Distribution For the locatoin p (which may be in the relief well) and the point q (which may be in the blowout well) there is a probability density function .PHI..sub.p,q (x,y,z) that describes the location of q with respect to p. The meaning of this functionis that the probability that the point q will be found in any particular volume V is the integral of .PHI..sub.p,q over that volume; i.e., ##EQU1## The density function .PHI..sub.p,q is a result of the limits of accuracy in the measuring process. It isdetermined by the errors associated with an individual measurement and errors that are in common with a group of measurements. Several processes of interest, such as collision, search-tool find, etc., are proximity dependent and occur with respect to any of a number of points {q} in the blowout well or from any number of points {p} in the relief well, or both. In casesof interest, the distribution does not vary appreciably over the set of points and can be approximated by integrating the distribution along a straight line. The result is a two dimensional distribution .PHI..sub.a (h,r) in a plane perpendicular to theline of integration: ##EQU2## In the above equation, "a," "h," and "r" represent the coordinate directions in the ahead, high and right coordinate system, respectively. In this case, the probability that the well crosses the plane within some area A, which has been definedby the process of interest, is the integral, ##EQU3## Implementation via Normal Distributions One means of evaluating the probability density function and related area-integrals is to use normal (Gaussian) distributions. FIG. 3 is a block diagram of the full process. All of the measurements are analyzed and the errors are separated intoerrors or groups of errors that are independent (mathematically random) with respect to each other. Every error or group applies to an interval (distance) and may refer to a single measurement or a series of measurements. As shown in FIG. 4, for the general case where p is in one well and q is in another, there are two distinct types of measurements. The first type are those measurements that locate some point in the second well (generally other than q) withrespect to some point (generally other than p) in the first. Examples of this include: Independent determinations of the locations of the two well heads (a and b located from some common point c) The direct determination of the location of one wellhead from the other (a from b or vice versa) The subterraneous measurement of the location of some point in one well from some point in the other (a' from b' or vice versa) In each case, the size, shape, and orientation of the probability distribution is determined by the geometry and the measurement principles. The second type of measurement is a survey along a wellbore. There are many different kinds of directional survey tools in use, such as those discussed hereinabove. In many of these systems, the measurement produces values for distance alongthe wellbore (called the measured depth), the inclination with respect to vertical, and the azimuth angle referenced to north. In FIG. 4, d is a directional measurement which has an error or errors associated only with that one measurement and is notaffected by errors in any other measurement. The group of directional measurements e have an error or errors common to all of them; the magnitude of the error is not necessarily the same for each but there is a functional relationship between the valuesfor the errors. The directional measurement f has additional errors not related to the other measurements in the group. Other borehole survey methods have different properties. One example of such is the inertial reference tool that directly measures three orthogonal displacements over an interval such as g. It produces an error distribution that combines anazimuth reference error and a three dimensional distribution that is a function of the path geometry, the temperature, the speed of the survey run, and various other factors. For some types of directional survey errors, the covariance matrix V can be expressed in terms of the vector errors. Examples of suitable errors are listed in (but not restricted to) Table 1. For the i.sup.th error parameter, V.sub.i =e.sub.ie.sub.i where e.sub.i is the vector error produced by one standard deviation of the measurement error. The vector error itself is the sum of the vector errors over each measurement interval; ##EQU4## where e.sub.ij is the error of the i.sup.th errorparameter in the j.sup.th measurement interval over which it applies. (For some errors, there is only one measurement interval.) ##EQU5## The specifics for each of these terms is explained for the types of errors covered in Table 1. TABLE 1 __________________________________________________________________________ Description Weighting Specification of Geometrical Direction of Error Function Standard Deviation Influence of Error __________________________________________________________________________ azimuth reference 1 angle l.sub.j * sin I.sub.j n.sub.j.sup.r error azimuth error due to sin I.sub.j sin(A.sub.j - D) angle for horizontal l.sub.j * sin I.sub.j n.sub.j.sup.r magnetic remnants and east gyro error ##STR1## angle for vertical l.sub.j * sin I.sub.j n.sub.j.sup.r inclinometer 1 angle l.sub.j * 1.sub.j * n.sub.j.sup.h bias error true inclination sin I.sub.j angle for l.sub.j * n.sub.j.sup.h error horizontal relative depth 1 length per l.sub.j n.sub.j.sup.a error unit length __________________________________________________________________________ Nomenclature (Also see FIG. 5) I inclination--angle measured withrespect to vertical A azimuth--bearing measured with respect to true north D declination--azimuth of the magnetic field l course length over which this measurement applies l* equivalent straight line length over which measurement applies n.sup. hunit vector "high", perpendicular to the direction of the survey and in the vertical plane (or north plane if inclination is zero) n.sup.a unit vector "ahead", in the direction of the survey n.sup.r unit vector "right" or "lateral"; n.sup.r = n.sup.a.times. n.sup.h If the error parameter under evaluation is misalignment, the variance can be written: where .sigma..sub.i is the standard deviation of the misalignment angle, I is the identity matrix, ##EQU6## If V.sub.i is the set of variances in the location of q due to the set of independent error parameters, then the total variance in q is the sum; i.e., ##EQU7## where N is the normalization constant and r is the location vector (xi+yj+zk). For appropriate values of inclination and azimuth, let T be the transformation that converts from surface coordinate directions (north, east, and down) to the downhole set (high, right and ahead). Then ##EQU8## The integral over one axis is thesame as the projection of the distribution into the perpendicular plane. For example, integration along the "ahead" axis is the projection into the "high-right" plane. This projection is easily done by considering only the high-right submatrix. ##EQU9## The normal geometric factors (standard deviations and tilt angle) are calculated by rotating the high-right axes and comparing with the expression for the simple two-dimensional normal density function ##EQU10## Probability of the well crossingthe plane within an area A can be evaluated by any of a number of numerical techniques. One method, illustrated in FIG. 6, that is appropriate when the characteristic dimensions of the area are of the order of or larger than the standard deviations ofthe distribution, is to divide the distribution into small, equal-probability areas such as that each one has a nearly square aspect ratio in normalized probability space coordinates (.sup.X /.sigma..sub.x etc.) Each probability area is examined forinclusion or exclusion with respect to the desired area and the probability totaled accordingly. In addition, some fraction may be included in the total for those that straddle the border of the area of integration. Another method, illustrated in FIG. 7, is appropriate when the area can be described as a non self-crossing path with width small with respect to the standard deviations of the probability distribution. In this case, the area is broken intosquares that are as long in path length as the specified width of the path. For each, the probability density is evaluated in the center of the square, multiplied by the area of the square, and totaled. Treatment of the end points and noninteger-multiple path lengths are refined as desired. Other Methods of Implementation If desired, the probability density function and any desired processes that depend on proximity or geometry can be evaluated can be evaluated by random simulation techniques (Monte Carlo). The measurements are analyzed as before but in this casethe errors may be functionally related to any extent that can be mathematically described. The path from downhole locations to the other locations satisfactory to the process of interest is calculated using randomly determined values of the errors. After a suitable number of path calculations, the probability is determined from the ratio of successful trials to the total number of trials. The PLD (or RPLD) analysis discussed above is first used to calculate the probable location distribution of the blowout well and the relief well. The RPLD covariance matrix is the sum of the covariance matrices of the blowout well and reliefwell. For example, if all of the errors for both the blowout and relief wells are input to the PLD accumulator, then the accumulator contains the RPLD covariance matrix. The RPLD can be represented in any desired coordinate system. In the case thatthe relative surface site error of the two wells is known, as would be the case when the displacement between the two surface sites is directly measured, then the input to the PLD accumulator should be this relative surface site error (presumed to besmaller) rather than the two independent surface site errors of the blowout and relief wells. The "ellipse of uncertainty", the closest industry concept, should not be confused with the RPLD. The RPLD is a tri-axial location error distribution which includes the surface site errors and the systematic and random errors due to directionalsurveys of both the blowout and relief wells. In the preferred embodiment, there are many components of location error, including the random, systematic and surface site errors previously discussed, which are treated as incoherent with each other; thatis, they are random or non-correlated with each other. In this case, the component error variances are summed to obtain the total variance of the PLD or RPLD which may be represented by ellipsoids of constant probability density. These ellipsoids maybe integrated along a direction perpendicular to a plane of choice to produce two-dimensional ellipses in that plane. Search Path One of the important parameters is the range of the available search tool in terms of an effective radius. The tubular specifications of the blowout well casing, the resistivity of the formation, and the properties of the mud used in the reliefwell are also gathered as important evaluation criteria. In addition, the search range of both the induction and static magnetic tool must be evaluated. It is extremely important to plan the relief well in a manner such that its probable location distribution makes only a small contribution to the relative probability location distribution. Once the wellpath has been planned, the relativeprobability location is calculated using anticipated relief well survey error coefficients. As the relief well progresses along a search path, the probabilities of "find" and "intercept" are calculated. The essential inputs for calculating theseprobabilities are the search radius of the search tool, the relief well plan (including the search path), the limiting well curvature, and the relative probable location distribution. The probability of collision can also be calculated by assuming aneffective collision radius, normally on the order of one foot. The above discussed process is an iterative process. The search path design (a portion of the relief well plan) is iterated until the probabilities of find and intercept are very high, theprobability of collision is very low, and the overall relief well plan can be implemented easily and safely. When the search plan adequacy criteria are met, the search plan is adopted as the final relief well plan. The optimal first search position is preplanned to have as high a probability of find as is compatible with a sufficiently low probability of collision. It is also very important to retain a very good position from which to plan the closuremaneuvers to kill the target blowout well. Although variable, the typical first search probability of find is on the order of 65% and the probability if collision is normally less than 1%. The quantitative aspects of this procedure, as outlined above,are very important in achieving a minimum time to kill, because they are effective in eliminating unnecessary search runs. Indeed, the process outlined above, significantly increases the efficiency of the search even in cases where there is littledifficulty locating the location of the blowout well. In the case of an extended reach (long horizontal distance) wells, two or three additional optimal search positions often must be planned in the event a find is not made on the earlier searches. Compare View In order to understand the essential features of the present invention, one must understand the concept of a "compare view" of the relative location of the blowout well and the relief sell. The Compare View is a plane perpendicular to a chosenreference well with the reference well located in the center at the crossing of the "high" and "right" axes. The high axis is defined as the intersection of the compare view plane with a vertical plane which is parallel and coincident with thealong-the-hole axis of the reference well at the depth of the compare view plane. The right axis of the compare view is perpendicular to the high axis and the along-the-hole axis of the reference well. FIG. 10 is an example of the compare view wherethe line marked High-Low is the high axis and the line marked Right-Left is the right axis. The reference well is always at the high-right crossing in the compare view and defines the compare view. The compare view is specified by the direction of anddepth in the reference well. In the special case where the reference well is near vertical at the depth of the compare view, the orientation of the compare view is normally determined by the geographic azimuth (from north) wherein High axis is replacedby North and the Rigth axis is replaced by East. Alternately, the magnetic azimuth may replace the geographic azimuth. The blowout well is often chosen as the reference well. In this case, the compare view is specified by the depth, usually the measured depth, in the blowout well and the inclination and azimuth of the blowout well at said depth. The relativeposition of other wells which cross the compare view plane may be shown. The vector position of crossing of the compare view plane by other wells may be specified either as components along the compare view axes or as a distance from the center andazimuth from the high or north axis. The high and right components are often used. Two versions of the compare view can be used. The definition just described above is for a single compare view plane wherein the reference is located at the center and other wells are shown where they cross the compare view plane at thespecified depth in the reference well. Multiple compare views at successive chosen depths may be plotted. These multiple plots may be successively drawn on a plotter or animated on a computer screen. Furthermore, a computer can be programmed tosuperimpose the positions of the well crossings of the compare view at multiple successive depths in the reference well. The reference well remains at the center for all of the depths. A single plot of the compare view with superimposed positions ofthe wells may be made wherein the position of each well crossing is labeled for the depth of the reference well for that crossing. The compare view was created for and is especially suited for computing and viewing the relative position and relationship of multiple wells; most notably a blowout well and one or more relief wells. This is particularly true when the wells aresubstantially parallel as is generally true during searching, closure and intersecting maneuvers on a blowout killing A vertical section of a deviated blowout well is shown in FIG. 8. The blowout well was drilled straight for about 1500 feet and then angle was built to an inclination of about 45.degree. in the direction South 60.degree. East. The 45.degree. inclination was held to a TVD of 5800 feet and casing was set. The well was then drilled to 6200 feet TVD. A blowout occurred while the drill string was out of the hole leaving open hole below the casing set at 5800 feet TVD. A vertical section of theblowout well in shown in FIG. 8. A plan view of the blowout well is shown in FIG. 9. A near optimum relief well plan with an efficient search path is also shown in FIG. 8 and FIG. 9. A zoom Compare View of the two wells is shown in FIG. 10. The blowout well is chosen as the reference well which is always shown at the center (crossing of the high and right axes). This zoom compare view is a composite of seven compare viewplanes at the seven successive depths in the blowout well. The relief well is shown as a small circle plotted at the crossing of the relief well in the compare view plane; seven circles are shown, one for the crossing at each of the seven depths. Thecircle labeled depth 1 represents the relief well crossing in the shallowest compare view plane, the next deeper plane crossing is labeled depth 2, etc. It is instructive to imagine looking straight at FIG. 10, which is the same as looking straight alongthe blowout well borehole, and visualizing, in animated fashion, perpendicular planes (compare views) at successives depths. In so doing, the relief well crossings are seen to start in the upper left corner at depth 1 and progress down and left to rightas represented by the progressive depth labels all the way to the label, depth 7. The relief well sweeps through the compare view. This relatively small section of the relief well is called the search path and is the portion of the relief well overwhich searches for the blowout well are conducted. During the planning of a relief well, designs are iterated until one is found which optimizes the speed, ease and safety of drilling and achieves high probabilities of find, access, and intercept and low probability of collision. Generally, itis highly desirable to minimize the size and control the shape of the RPLD to permit a high probability of find. It is often important to plan the relief well to minimize the size of the RPLD in one direction and plan the search path to sweep along thelonger axis of the RPLD which maximizes the probability of find with minimum searching. Such an optimized RPLD is shown in FIG. 10 as represented by the three ellipses which have the values of 1, 2, and 3 .sigma. (standard deviation). Note that the search path of the relief well is along the long axis of the RPLD to maximize theprobability of find. The preplanned first search point is at depth 4 and labeled S1 (first search). The radius of the search tool around S1 is shown by the arrow labeled R. The relief well is drilled without hesitation as quickly as possible to the preselectedposition S1 and a search is run. The integral probability of find to S1 is approximately 65% as obtained by integrating the probability density function (of the RPLD) over the searched area shown inside the curve labeled search area boundary. Assume an adequate find was made (65% chance) and that the find is specified as a Relative Find Vector, RFV, in the compare view plane. The RFV is a displacement vector (magnitude and direction) which has an expected value and a random error,both which must be specified. The error is two dimensional in the compare view plane and can be specified by a covariance matrix or, alternately, by the magnitudes of the two semi-major axes of the ellipse and its orientation angle. The errorspecification is essential to quantitative closure procedures. The prior art specifies only the expected value of the find vector and this value is evaluated generally in terms of the plan view. The RFV is shown in FIG. 10 extending to the blowout well from a position labeled F1. F1 is the adjusted location of the relief well which is compatible with the find. A position F1B is also shown which is the blowout position required to becompatible with the find and the relief well position. In the compare view it is desirable to use the F1 concept and adjust all relief wells to the referenced blowout well. The actual translation or modification of the well profiles to accommodate the RFV in the compare view is a big and important issue. The simplest operation is to translate the surface location of the relief well even though this is the leastlikely event to be actually true. The more probable criteria is to systematically adjust the inclination and azimuth values in the blowout well because these are the quantities most likely in error. In practice, it is important to adjust the parametersmost likely in error to improve the probability that projections of the wells ahead from the find point are as accurate as possible. FIG. 11 is an expanded vertical section and FIG. 12 is an expanded plan view of the closure and intercept region of the drilling operation. In both views, S1 and F1 are the same locations as shown in FIG. 10. In FIGS. 13 a-d the compare viewsare shown at a scale of 50 ft/inch as opposed to 100 ft/inch in FIG. 10. In FIG. 13a the first search position S1 of the relief well is shown, the relief well offset, RWO, required to position the relief well at position F1 is shown, and the RFV expected value is shown. At this point, the RPLD is described solely bythe estimated error in the find vector. The RPLD of the find is shown in FIG. 13a are represented by the 1, 2, and 3 .sigma. (standard deviation) ellipses. A closure relief well plan, Closure Plan 1, is made to optimize the time and risk to the intercept and kill of the blowout well. Closure Plan 1 is shown in FIGS. 11, 12, and 13c. Close inspection of all three figures, especially FIG. 13c, willshow how the relief well path is planned to pass close around (270.degree. ) the blowout well. This crossing greatly enhances the accuracy of the search tool and results in a desirably small RPLD of Find. At S2 the relief well direction is planned tobe substantially the same as the blowout well which will make the next closure to intercept very easy. With the relief well plan made, the RPLD of drilling ahead from point F1 to S2, the second preplanned search point, is calculated and shown in FIG.13b. The total RPLD at search point S2 is the combination of the RPLD of find at S1 and the RPLD of drilling from F1 to S2 and is shown in FIG. 13c. The RPLD at S2 represents the error in the relative location of the relief and blowout wells when therelief well is drilled to position S2 where the second search is made. The relief well is drilled ahead along Closure Plan 1 to the position S2 where a second search is run. The probability of find is greater than 99%. An adequate find is assumed to be made and the expected location of the relief well isestablished at F2. F2 is established by the RFV expected value which extends from F2 to the blowout (not shown). FIG. 13d shows the expected relative position of the relief well at position F2. The total RPLD, the combination of the RPLD of find at S2 (search 2) and the RPLD of drilling ahead along Closure Plan 2, is shown along with the Closure Plan 2. Closure Plan 2 is also shown in FIG. 11 and 12. Closure Plan 2 has a high probability of intersecting the blowout well approximately 50 feet below the end of the casing in the blowout well. The probability of "geometric collision" as determined by the probability of collision calculation isapproximately 50%. This means that the relief well has a high probability of actually drilling directly into the blowout. Another important factor is that when the relief well is drilling essentially parallel and very close to the blowout, the reliefwell will have a great tendency to be drawn into the blowout borehole due to the weakened rock around the blowout due to the presence of the borehole and the reduced pressures on the rock. It is important to note that only two search runs were made to achieve this high probability of intercept. Typically, the state-of-the-art requires many searches, upwards of 10 to 20. Each search not run saves typically a day of time in anoperation where the monetary costs are sometimes millions of dollars per day. The costs in the form of pollution, loss of reserves and loss of life, although very real and large, are difficult to quantify. While the method and apparatus of the present invention has been described in connection with the preferred embodiment, it is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover suchalternatives, modifications and equivalents as may be reasonably included within the spirit and scope of the invention as defined by the appended claims. * * * * * Randomly Featured Patents
{"url":"http://www.patentgenius.com/patent/4957172.html","timestamp":"2014-04-19T17:41:22Z","content_type":null,"content_length":"74822","record_id":"<urn:uuid:30d52511-2856-41df-bc07-53b65f3cbe62>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-dimensional compressible flow in turbomachines with conic flow surfaces Stanitz, John D A general method of analysis is developed for two-dimensional, steady, compressible flow in stators or rotors of radial and mixed flow turbomachines with conic flow surfaces (surfaces of right circular cones generated by center line of flow passage in the axial-radial plane). The variables taken into account are: (1) tip speed of the rotor, (2) flow rate, (3) blade shape, (4) variation in passage height with radius, (5) number of blades, and (6) cone angle of the flow surface. Relaxation methods are used to solve the nonlinear differential equation for the stream function. Two numerical examples are presented; one for compressible and the other for incompressible flow in a centrifugal compressor with thin, straight blades. The results of these examples are given by plots of the streamlines, constant velocity-ratio lines, and constant pressure-ratio line. An Adobe Acrobat (PDF) file of the entire report:
{"url":"http://naca.central.cranfield.ac.uk/report.php?NID=3658","timestamp":"2014-04-19T22:05:33Z","content_type":null,"content_length":"2149","record_id":"<urn:uuid:4e214f9c-08a3-421f-8597-f4c358b1b5b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
A zeta function using half of the primes up vote 6 down vote favorite It is well known that the zeta function satisfies the Euler product formula. See this wikipedia article. Enumerate all primes by $p_1, p_2, \ldots $ in ascending order. Set $S$ to be the set of all $p_i$ where $i$ is odd. If, for $s > 1$, you define $$\zeta_0(s) = \prod_{p \in S} \frac{1}{1-p^{-s}},$$ is it true that $(\zeta_0(s))^2$ has a meromorphic continuation to the entire complex plane? 5 Every other prime seems to be an arbitrary choice. In my opinion, a more interesting question borne of the same motivation would be: for which subsets $S$ of the primes does the corresponding Euler product admit an analytic continuation? – anon Feb 8 '13 at 4:05 2 Similar questions have been asked: mathoverflow.net/questions/95205/…, mathoverflow.net/questions/70318/…, mathoverflow.net/questions/28000/…, not that they answer your specific question, but maybe a good place to look... – B R Feb 8 '13 at 4:19 Thank you. I wrote this specific question because it is appearing in my work. I agree that the more general question is also interesting. – Khalid Bou-Rabee Feb 8 '13 at 15:30 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged cv.complex-variables or ask your own question.
{"url":"http://mathoverflow.net/questions/121148/a-zeta-function-using-half-of-the-primes","timestamp":"2014-04-19T20:04:47Z","content_type":null,"content_length":"49502","record_id":"<urn:uuid:c6e7d92a-edf0-4b1c-91e1-ebf5da68320f>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
A Collection of Tools for Multivariable Calculus The mathlets presented here provide user-friendly tools for visualizing and manipulating basic objects of multivariable calculus: parametric surfaces in rectangular, spherical and cylindrical coordinates, parametric curves, and graphs of functions of two variables. The user can enter his or her expressions using familiar graphing calculator syntax. The mathlets will plot the corresponding surface or curve which then can be rotated in real time. One of the mathlets is devoted to exploring spherical coordinates which are often hard for students to visualize. All mathlets contain a variety of examples and practice problems. Suggested Uses: ● For independent exploration by students; ● For discussions with students in smaller groups in a laboratory setting; ● For classroom demonstrations with a computer projector. Copyright 2005 by Barbara Kaskosz. See http://www.math.uri.edu/~bkaskosz/flashmo/ for more Flash mathlets for Calculus. Please direct questions and comments to bkaskosz@math.uri.edu.
{"url":"http://www.math.uri.edu/~bkaskosz/flashmo/tools/","timestamp":"2014-04-18T08:04:08Z","content_type":null,"content_length":"3252","record_id":"<urn:uuid:6c69bdfc-5ddd-4a61-bced-c0d390f9ea52>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the area of the kite? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f98c08be4b000ae9ece661d","timestamp":"2014-04-19T20:10:16Z","content_type":null,"content_length":"126168","record_id":"<urn:uuid:8e651861-f02f-4a67-b325-832a18d158d6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Ultimate Answer to Life, The Universe and Everything Ultimate Answer to Life, The Universe and Everything Is there a real purpose of Life, The Universe and Everything apart from this? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything I would have thought it would not be an integer and that the meaning of life was In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything But God created the integers only 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything So Kronecker says. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything hi Agnishom, If you liked Douglas Adam's Hitchhiker series, you will also like his two books about the Dirk Gently holistic detective agency. Both in book form and one was made into a BBC series. A sad loss to comedy writing. You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei Re: Ultimate Answer to Life, The Universe and Everything Okay, but what do you think is the purpose of life? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything To be on some forum doing mathematics of course. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything So we have already fulfiled our purpose? Why are we still alive? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything I do not know for sure but you will have a long and great life ahead of you. When you get to the end of it you might know. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything The universe? What is the reason behind such a (seemingly) vast non-sense object being there in existence? BTW, I am really not sure whether I want to live for long Last edited by Agnishom (2013-03-05 13:57:50) 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything Hmmm, you do not have to live a lot of years to live a long life. The universe, that was built and is maintained by somebody else. I have no idea why except to say have you ever kept a fish tank? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything You create an environment for life. You build it and design new features. When it is done you watch all the fish swim back and forth. Maybe that's what God is. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything And perhaps that OUR God is also an experimental creature made by some Second order God or the God of Gods 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and There is another theory which states that this has already happened. 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything So maybe it would be a good idea for us to stop trying to guess what it is for. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything Yeah, but we need to prove the theory first 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything We can not really prove anything in Cosmology. We can come up with a plausible theory that fits the currently known observations. If it makes a few good predictions we keep it, if not we dump it. Last edited by bobbym (2013-03-05 20:57:27) In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Ultimate Answer to Life, The Universe and Everything Isn't that sad? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: Ultimate Answer to Life, The Universe and Everything Not really, that is what I like about mathematics. We have some degree of certainty. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=256324","timestamp":"2014-04-17T21:49:51Z","content_type":null,"content_length":"33867","record_id":"<urn:uuid:20e2c886-9157-4f2f-a12e-d1f84f68a4c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Revêtements étales et groupe fondamental Results 1 - 10 of 120 , 2005 "... We develop in detail most of the theory of the Picard scheme that Grothendieck sketched in two Bourbaki talks and in commentaries on them. Also, we review in brief much of the rest of the theory developed by Grothendieck and by others. But we begin with a historical introduction. ..." Cited by 73 (3 self) Add to MetaCart We develop in detail most of the theory of the Picard scheme that Grothendieck sketched in two Bourbaki talks and in commentaries on them. Also, we review in brief much of the rest of the theory developed by Grothendieck and by others. But we begin with a historical introduction. - Math. Annalen , 1991 "... Abstract: We reduce the regular version of the Inverse Galois Problem for any finite group G to finding one rational point on an infinite sequence of algebraic varieties. As a consequence, any finite group G is the Galois group of an extension L/P(x) with L regular over any PAC field P of characteri ..." Cited by 57 (25 self) Add to MetaCart Abstract: We reduce the regular version of the Inverse Galois Problem for any finite group G to finding one rational point on an infinite sequence of algebraic varieties. As a consequence, any finite group G is the Galois group of an extension L/P(x) with L regular over any PAC field P of characteristic zero. A special case of this implies that G is a Galois group over Fp(x) for almost all primes p. Many attempts have been made to realize finite groups as Galois groups of extensions of Q(x) that are regular over Q (see the end of this introduction for definitions). We call this the “regular inverse Galois problem. ” We show that to each finite group G with trivial center and integer r ≥ 3 there is canonically associated an algebraic variety, Hin r (G), defined over Q (usually reducible) satisfying the following. - Colloquium Publications, Vol.55, American Mathematical Society , 2008 "... ..." , 2002 "... This is the first of a series of papers devoted to lay the foundations of Algebraic Geometry in homotopical and higher categorical contexts. In this first part we investigate a notion of higher topos. For this, we use S-categories (i.e. simplicially enriched categories) as models for certain kind of ..." Cited by 32 (20 self) Add to MetaCart This is the first of a series of papers devoted to lay the foundations of Algebraic Geometry in homotopical and higher categorical contexts. In this first part we investigate a notion of higher topos. For this, we use S-categories (i.e. simplicially enriched categories) as models for certain kind of ∞-categories, and we develop the notions of S-topologies, S-sites and stacks over them. We prove in particular, that for an S-category T endowed with an S-topology, there exists a model - Tohoku Math. J , 1996 "... This paper gives a foundation of log smooth deformation theory. We study the infinitesimal liftings of log smooth morphisms and show that the log smooth deformation functor has a representable hull. This deformation theory gives, for example, the following two types of deformations: (1) relative def ..." Cited by 25 (4 self) Add to MetaCart This paper gives a foundation of log smooth deformation theory. We study the infinitesimal liftings of log smooth morphisms and show that the log smooth deformation functor has a representable hull. This deformation theory gives, for example, the following two types of deformations: (1) relative deformations of a certain kind of a pair of an algebraic variety and a divisor of it, and (2) global smoothings of normal crossing varieties. The former is a generalization of the relative deformation theory introduced by Makio, and the latter coincides with the logarithmic deformation theory introduced by Kawamata and Namikawa. 1 - Math , 1981 "... Mit dem Zugriff auf den vorliegenden Inhalt gelten die Nutzungsbedingungen als akzeptiert. Die angebotenen Dokumente stehen für nicht-kommerzielle Zwecke in Lehre, Forschung und für die private Nutzung frei zur Verfügung. Einzelne Dateien oder Ausdrucke aus diesem Angebot können zusammen mit diesen ..." Cited by 21 (3 self) Add to MetaCart Mit dem Zugriff auf den vorliegenden Inhalt gelten die Nutzungsbedingungen als akzeptiert. Die angebotenen Dokumente stehen für nicht-kommerzielle Zwecke in Lehre, Forschung und für die private Nutzung frei zur Verfügung. Einzelne Dateien oder Ausdrucke aus diesem Angebot können zusammen mit diesen Nutzungsbedingungen und unter deren Einhaltung weitergegeben werden. , 1998 "... This paper lays the foundations for the global theory of irreducible components of rigid analytic spaces over a complete field k. We prove the excellence of the local rings on rigid spaces over k. This is used to prove the standard existence theorems and to show compatibility with the notion of irre ..." Cited by 21 (2 self) Add to MetaCart This paper lays the foundations for the global theory of irreducible components of rigid analytic spaces over a complete field k. We prove the excellence of the local rings on rigid spaces over k. This is used to prove the standard existence theorems and to show compatibility with the notion of irreducible components for schemes and formal schemes. Behavior with respect to extension of the base field is also studied. It is often necessary to augment scheme-theoretic techniques with other algebraic and geometric arguments. COMPONANTES IRRÉDUCTIBLE D’ESPACES RIGIDES Cet article donne les fondements de la théorie globale des composantes irréductibles d’espaces analytiques rigides sur un corps complet k. Nous prouvons l’excellence d’anneaux locaux sur les espaces rigides sur k. De là, nous prouvons les théorèmes standards d’existence et nous montrons la compatibilité avec les notions des composantes irréductibles pour les schémas et les schémas formels. Le comportement par rapport à l’extension de corps base est aussi étudié. Il est souvent nécessaire de compléter les techniques de théorie des schémas par d’autres arguments algébriques et géométriques. "... We introduce a valuation-theoretic approach to the problem of semistable reduction (i.e., existence of logarithmic extensions on suitable covers) of overconvergent isocrystals with Frobenius structure. The key tool is the quasi-compactness of the Riemann-Zariski space associated to the function fiel ..." Cited by 20 (14 self) Add to MetaCart We introduce a valuation-theoretic approach to the problem of semistable reduction (i.e., existence of logarithmic extensions on suitable covers) of overconvergent isocrystals with Frobenius structure. The key tool is the quasi-compactness of the Riemann-Zariski space associated to the function field of a variety. Contents - Amer. J. Math , 2001 "... In this paper, I investigate wildly ramified G-Galois covers of curves φ: Y → P 1 k branched at exactly one point over an algebraically closed field k of characteristic p. I answer a question of M. Raynaud by showing that proper families of such covers of a twisted projective line are isotrivial. Th ..." Cited by 18 (9 self) Add to MetaCart In this paper, I investigate wildly ramified G-Galois covers of curves φ: Y → P 1 k branched at exactly one point over an algebraically closed field k of characteristic p. I answer a question of M. Raynaud by showing that proper families of such covers of a twisted projective line are isotrivial. The method is to construct an affine moduli space for covers whose inertia group is of the form I = Z/p ⋊ µm. There are two other applications of this space in the case that I = Z/p ⋊ µm. The first uses formal patching to compute the dimension of the space of non-isotrivial deformations of φ in terms of the lower jump of the filtration of higher inertia groups. The second gives necessary criteria for good reduction of families of such covers. These results will be used in a future paper to prove the existence of such covers φ with specified ramification data. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1114626","timestamp":"2014-04-19T19:47:08Z","content_type":null,"content_length":"34162","record_id":"<urn:uuid:261df0da-a227-4a5f-9d94-c378764b0e6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Preventing Rounding Errors up vote 2 down vote favorite I was just reading about rounding errors in C++. So, if I'm making a math intense program (or any important calculations) should I just drop floats all together and use only doubles or is there an easier way to prevent rounding errors? c++ rounding-error 2 What math intense program is this? Being math intense doesn't mean you need to prevent this kind of floating-point errors. – R. Martinho Fernandes Jul 20 '11 at 9:44 5 Using doubles doesn't prevent rounding errors. – Mat Jul 20 '11 at 9:47 @Martinho, I do when customers expect something at least almost accurate >_> – Xander Jul 20 '11 at 9:49 @Mat: Moreover, almost any reasonable modern architecture will promote your floats to doubles anyway, so why bother with the floats at all. (Old CUDA notwithstanding, that is.) OP: If you need guaranteed precision, use a multiprecision library like MPFR. – Kerrek SB Jul 20 '11 at 9:49 7 You need well-defined accuracy requirements and a good understanding of the algorithm that you are implementing and how sensitive it is to rounding errors (aka numerical stability). – Paul R Jul 20 '11 at 9:50 show 1 more comment 2 Answers active oldest votes Obligatory lecture: What Every Programmer Should Know About Floating-Point Arithmetic. Also, try reading IEEE Floating Point standard. You'll always get rounding errors. Unless you use an [DEL:infinite:DEL] arbitrary precision library, like gmplib. You have to decide if your application really needs this kind of effort. up vote 6 down vote accepted Or, you could use integer arithmetic, converting to floats only when needed. This is still hard to do, you have to decide if it's worth it. Lastly, you can use float or double taking care not to make assumption about values at the limit of representation's precision. I'd wish this Valgrind plugin was implemented (grep for float)... 1 Nitpick: "arbitrary", not "infinite". (A natural number can have arbitrary size, but not infinite size.) – Kerrek SB Jul 20 '11 at 11:03 Fixed, thanks :) – Mihai Maruseac Jul 20 '11 at 11:04 add comment The rounding errors are normally very insignificant, even using floats. Mathematically-intense programs like games, which do very large numbers of floating-point computations, often up vote 1 down still use single-precision. 4 Games are not that relevant for floating point accuracy. The user won't say anything if a sprite is 1 pixel to the right. However, if your software is an air traffic controller, a mistake can make one plane crash. – Mihai Maruseac Jul 20 '11 at 10:06 Your logic might be backwards, it might even be they use single precision not because rounding errors are insignificant, but since they do so many operations. – Benjamin Bannier Jul 20 '11 at 11:15 add comment Not the answer you're looking for? Browse other questions tagged c++ rounding-error or ask your own question.
{"url":"http://stackoverflow.com/questions/6759910/preventing-rounding-errors","timestamp":"2014-04-24T06:35:36Z","content_type":null,"content_length":"78697","record_id":"<urn:uuid:05f68bb5-625a-42f0-b837-cd9241878994>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
There were 3000 bacteria in a petri dish 10 hours ago. Now there are only 2000. How would you calculate how many... - Homework Help - eNotes.com There were 3000 bacteria in a petri dish 10 hours ago. Now there are only 2000. How would you calculate how many bacteria there will be 5 hours later and at what percentage the bacteria is decaying per hour. To solve this, you can use the formula for exponential decal/growth: `F = Pe^(rt)` where F = value at time, t P = initial value (value at time, t=0) r = growth/decay factor (+ if it is a growth factor, - if it is a decay) t = time, in hours (in this case) First, identify the given. P = 3000 @ t = 0 (10 hours ago) F = 2000 @ t = 10 (t = now, after 10 hours) t = 10 Plug-in in the formula: `F = Pe^(rt)` `2000 = 3000e^(r*10)` Divide both sides by 3000, so e^(rt) is isolated, since it contains an unknown, the r (decay factor). It is a decay factor because, the initial value is 3000 then it dropped to 2000. `2000/3000 = (3000e^(10r))/3000` Take the natural logarithm of both sides and use the properties: `ln e = 1` and; `ln(x^n) = nln(x)` `ln(2/3) = ln(e^(10r))` `ln(2/3) = 10rln(e)` `-0.4055 = 10r` Divide both sides by 10 to solve for r. (I just rounded ln(2/3) to 4 decimal places, it is better to get the exact result by saving it in your calculator for accuracy). `-0.4055/10 = (10r)/10` `r =-0.04055` No that you have the decay factor, calculate the bacteria after 5 hours using the same formula. P = 2000 (This is our initial value because we move our time to 'now'. You can still use 3000 and get same value but your time, t would be equal to 15 instead.) F = ? t = 5 `F = 2000e^(-0.04055*5)` `F = 1632.99316` Therefore there are 1632.99316 bacteria after 15 hours. As per how the percentage of bacteria is decaying per hour, you can get the bacteria at t = 1 using 2000 as the initial value. `F = 2000e^(-0.04055*1)` `F =1920.529002` Then to get the percentage: `% = (2000-1920.529002)/2000*100` ` ` `% = 3.9735%` Therefore almost 4% of bacteria decayed per hour. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/there-were-3000-bacteria-petri-dish-10-hours-ago-424650","timestamp":"2014-04-20T09:14:33Z","content_type":null,"content_length":"27010","record_id":"<urn:uuid:3cbdf216-2d0c-44e5-ba7f-498ffe544c5c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Simultaneity Spacetime Diagram Model Simultaneity Spacetime Diagram Model Relations Open Source Physics Related Resources relation created by Wolfgang Christian The Easy Java Simulations Modeling and Authoring Tool is needed to explore the computational model used in the Simultaneity Spacetime Diagram Model. Other Related Resources relation created by Lyle Barbato The Simultaneity Spacetime Diagram model is a supplemental simulation for an article by Sebastien Cromier and Richard Steinberg "The Twin Twin Paradox: Exploring Student Approaches to Understanding Relativistic Concepts" in the The Physics Teacher 48 (9), 598-601 (2010). Create a new relation
{"url":"http://www.compadre.org/OSP/items/Relations.cfm?ID=10383","timestamp":"2014-04-18T05:53:27Z","content_type":null,"content_length":"11507","record_id":"<urn:uuid:379c3f20-5954-45cc-aa78-008516b0f749>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
[A Trivial Problem] adj(AB) = adj(A)adj(B) September 23rd 2008, 07:52 PM #1 Dec 2007 [A Trivial Problem] adj(AB) = adj(A)adj(B) I want to know how to prove that adj(AB) = adj(B)adj(A), letting adj (A) be the adjugate matrix of A. I feel the equation adj(A)A = |A|E does not help much. P.S: I wrongly typed the topic. Last edited by kevin_chn; September 23rd 2008 at 07:53 PM. Reason: type error $\begin{array}{rcl} \text{adj}(AB) & = & \text{det}(AB) (AB)^{-1} \\ & = & \text{det}(A) \ \text{det}(B) \ B^{-1}A^{-1} \\ & = & \text{det}(B) B^{-1} \ \text{det}(A) \ A^{-1} \\ & \vdots & \end This proof is only correct if A and B have nonzero determinants. What about a proof in general. The post may not be very helpful, but another way to see the idea is: Adj(B)Adj(A)AB = Adj(B)(Adj(A)A)B = Adj(B)|A|B = |A|(Adj(B)B) = |A||B|E = |AB|E = Adj(AB)(AB) Thus Adj(AB))(AB) = |AB|E = Adj(B)Adj(A)(AB). If inverse AB exists, then we are done. If not , we run into the same problem as o_O's proof. I wanted to thank the forum for both of these answers. I did a google search for adj(AB) adj(B) and arrived here, which I found charming. I've rarely had less hope for an internet search; and I got the answer right away! I wanted to address Mr. Fantastic's tone. It seems to me that this is the manner of speaking that we agree not to participate in. "What about a general proof" is not an impolite question. But your post is deliberately condascending, takes up a huge amount of space, and neglects to answer. So, why post at all? And I think your criteria for "trivial" are sorely misplaced. I am self-instructing in linear algebra. I am working through a Dover reprint of a 1960's text by Hans Schneider and George Barker. The approach is purely axiomatic. Every step is stated as a Definition, Proposition, Theorem, or Lemma. No step is missed, and there is a proof for every one (except the definitions). What strikes me about Linear Algebra, is that every step is trivial. It is, in its entirety, self-evident. Nevertheless, learning a new algebra is not easy, and mathematically self-evident is a poor criteria for human instruction. Schneider disagrees with you, and I think he knew his subject. Even the ramifications of multiplication by zero merit, in each case, an exact proof and subsequent collection of the special cases into a general rule. Simply, this: Zero is a necessary case to consider, and the implications are not so easily dismissed. The determinant is an excellent example. If the determinant is zero, we have choices to make. Are we multiplying two matrices of the same rank? If so, should we reduce the dimension of our multiplication, or is it necessary to keep the problem in n equations? The case of zero determinant requires analysis, and my arcane and intellectually demanding textbook considers all cases. Your quotations are supposed to be a lesson and they're the wrong one. I find in upper division textbooks exactly the opposite recognition. This: that learning a new algebra requires meticulous exposition, and careful handling of all identities. Graduate coursework may open with an exposition of the necessary postulates for addition. And mastery is mastery. I have never once failed to improve at the piano by doing elementary exercises. I have never once felt that identities were always obvious. Quite the contrary; it is the simple identity that we miss. Otherwise we wouldn't have to learn math at all. After all, the entirety of vector algebra is self-evident from the definition of addition and scalar multiplication, right? Was it self-evident to you, or did maybe you need a little help with with every step? So, for the original poster: the case of zero is contained in the posted answer because both sides will be zero. If you are unable to separate adj(AB) into discrete terms in A and B (both adj and ||), then your problem is here: for example, identities like Aadj(A) = |A|I, and then you can separate |AB| into |A||B|. Then, you can satisfy yourself that the zero case for either |A| or |B| will be 0 for AB. This is also self-evident from the fact that if either of A or B is singular, the product AB is singular. It depends on what parts of the algebra you already know. For this proof, it is also important to identify that the determinant is a scalar. This is tricky, because it is a function. It may not have been evident at the time, why the existence and uniqueness of the determinant function needed to be proved; it might be worth reviewing this. The function always exists, it is unique, and it reduces to a scalar. This is crucial. Like with any other scalar, you can move the determinant terms freely, within the matrix products. If the determinant were not a scalar, this identity would not hold. Anyway, I hope that helps anyone else who (like the original poster and me) needed this thread to put all the pieces together. I wanted to address Mr. Fantastic's tone. It seems to me that this is the manner of speaking that we agree not to participate in. "What about a general proof" is not an impolite question. But your post is deliberately condascending, takes up a huge amount of space, and neglects to answer. So, why post at all? If you read my post more carefully you might have seen that it was reminding the OP of the simple courtesy of saying please rather than making a demand. It hardly took up a huge amount of space and was addressing an issue seperate to the question that was asked. I get it. I read your post quite carefully. Too carefully. The little line between the red-flagged text is your signature line! I see. Well, that makes a lot more sense. It was a mean-spirited post the way I read it: red-text leading to red text, and a diversionary quote about why no one would bother to answer the trivial case: because it was obvious. Hahahah alright, fair enough. I am suitably embarrassed. And I apologize. Sorry I misread you! Indeed, I was worried, thinking this was a mod's answer. Hence my long post. (It is your profile which is enormous!) okay. Well, as long as I've got your attention, let me say/ask this: I am self-instructing in math, and it's not easy. I have no one to query and I will probably post regularly in this forum. I will often do as above: spell out how I think a problem needs to be looked at, to be answered. When I don't understand something, I have to phrase logical questions, seek answers, then spell out my reasoning so I can identify errors and make corrections. Explicitly voicing this process helps enormously. I need these explanations to be accurate, or I'll go lost. If I'm not right, it might as well be magic. So, if you've got your scrutiny eye on me, great! I can say that I write as above (the linear algebra parts) to instruct myself, and I NEED corrections. I will be extending my inner procedure into the forums in the hopes of contributing, but I can only do that if my procedures are corrected. I will happily edit posts to avoid lengthening a thread. I state the mathematical elements as if I know what I'm doing because I have no choice. If I don't state my presumptions clearly, I can't find the errors. So, corrections of any explanations of mine, ever, are deeply appreciated. For example, I'm not convinced by what I said about the zero case holding. Is it true? September 23rd 2008, 10:27 PM #2 January 14th 2009, 08:41 AM #3 Jan 2009 January 14th 2009, 12:17 PM #4 January 15th 2009, 08:40 AM #5 June 26th 2010, 11:24 AM #6 Jun 2010 Seattle, WA June 26th 2010, 03:50 PM #7 June 26th 2010, 04:22 PM #8 Jun 2010 Seattle, WA June 26th 2010, 04:26 PM #9 Jun 2010 Seattle, WA
{"url":"http://mathhelpforum.com/advanced-algebra/50369-trivial-problem-adj-ab-adj-adj-b.html","timestamp":"2014-04-19T03:27:52Z","content_type":null,"content_length":"62075","record_id":"<urn:uuid:8666431f-1054-432d-acb2-9227deae2d61>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Hierarchical parallelization of gene differential association analysis. Jump to Full Text MedLine PMID: 21936916 Owner: NLM Status: MEDLINE Abstract/ BACKGROUND: Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based OtherAbstract: procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. RESULTS: Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. CONCLUSIONS: The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding Authors: Mark Needham; Rui Hu; Sandhya Dwarkadas; Xing Qiu Related 12518316 - Metabolic flux analysis of rq-controlled microaerobic ethanol production by saccharomyc... Documents : 21776076 - Central dogma at the single-molecule level in living cells. 21638026 - Alternative promoter usage and differential expression of multiple transcripts of mouse... 21823226 - Rits-connecting transcription, rna interference, and heterochromatin assembly in fissio... 12518316 - Metabolic flux analysis of rq-controlled microaerobic ethanol production by saccharomyc... 15742146 - A carbohydrate fraction, aip1, from artemisia iwayomogi down-regulates fas gene express... Publication Type: Journal Article; Research Support, N.I.H., Extramural; Research Support, Non-U.S. Gov't Date: 2011-09-21 Journal Title: BMC bioinformatics Volume: 12 ISSN: 1471-2105 ISO Abbreviation: BMC Bioinformatics Publication Date: 2011 Date Detail: Created Date: 2011-10-13 Completed Date: 2011-12-07 Revised Date: 2013-06-27 Medline Nlm Unique ID: 100965194 Medline TA: BMC Bioinformatics Country: England Journal Info: Other Details: Languages: eng Pagination: 374 Citation Subset: IM Affiliation: Department of Computer Science, University of Rochester, New York 14627, USA. Export APA/MLA Format Download EndNote Download BibTex MeSH Terms Descriptor/ Algorithms Qualifier: Cluster Analysis Disease / genetics Genetic Association Studies* Oligonucleotide Array Sequence Analysis Programming Languages Grant Support ID/Acronym/ 1 R21 HG004648-01/HG/NHGRI NIH HHS; 5 R21 GM079259-02/GM/NIGMS NIH HHS; UL1 RR024160/RR/NCRR NIH HHS; UL1 RR024160/RR/NCRR NIH HHS From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine Full Text Journal Information Article Information Journal ID (nlm-ta): BMC Bioinformatics Download PDF ISSN: 1471-2105 Copyright ©2011 Needham et al; licensee BioMed Central Ltd. Publisher: BioMed Central open-access: Received Day: 28 Month: 1 Year: 2011 Accepted Day: 21 Month: 9 Year: 2011 collection publication date: Year: 2011 Electronic publication date: Day: 21 Month: 9 Year: 2011 Volume: 12First Page: 374 Last Page: 374 ID: 3248234 Publisher Id: 1471-2105-12-374 PubMed Id: 21936916 DOI: 10.1186/1471-2105-12-374 Hierarchical Parallelization of Gene Differential Association Analysis Mark Needham1 Email: mbneedham@gmail.com Rui Hu2 Email: Rui_Hu@urmc.rochester.edu Sandhya Dwarkadas1 Email: sandhya@cs.rochester.edu Xing Qiu2 Email: Xing_Qiu@urmc.rochester.edu 1Department of Computer Science, University of Rochester, PO Box 270226, Rochester, New York 14627, USA 2Department of Biostatistics and Computational Biology, University of Rochester, 601 Elmwood Avenue Box 630, Rochester, New York 14642, USA Microarray gene differential expression analysis has been widely used to uncover the underlying biological mechanism. Researchers utilize this technology to identify potentially "interesting" genes. More specifically, a statistical test is applied to each individual gene to detect whether the mean expression level of this gene is the same or not across different biological conditions or phenotypes studied in an experiment. A chosen multiple testing procedure (MTP) is then employed to control certain per-family Type I errors. Genes work together to fulfill certain biological functions and they are known to be strongly correlated [^1,^2]. The structure of inter-gene correlation contains rich information that cannot be extracted from mean expression levels. Recent years have seen more and more research focusing on gene dependence structures. For example, some procedures, such as gene set enrichment analysis [^3,^4], incorporate existing biological gene sets information into statistical procedures. Gene cluster analysis uses gene dependence and similarity to group genes [^5^-^11]. Gene network analysis, such as method based on Gaussian or Bayesian networks, employs gene dependence to study gene dynamics and reasoning [^12^-^14]. Another approach is to directly select genes based on the phenotypic differences of their dependence structure [^15^ -^20]. In this paper, we consider the very last approach and focus on a gene differential association analysis (henceforth denoted as GDAA) procedure proposed in [^19]. Unlike traditional differential gene expression analysis, GDAA is designed to select genes that have different dependence structures with other genes in two phenotypes. It complements the analysis of differentially expressed genes. Combining both gene differential association analysis and gene differential expression analysis provides a more comprehensive functional interpretation of the experimental results. As an example, GDAA was applied in [^20] to two sets of Childhood Leukemia data (HYPERDIP and TEL) [^21] and selected differentially associated (DA) genes that could not be detected by differential gene expression analysis. Furthermore, the TEL group is differentiated from other leukemia subtypes by the presence of t(12;21)(p13;q22) translocation, generating the TEL-AML1 fusion gene. Through the over-representation of DA genes, the chromosomal band 21q22.3 containing the TEL-AML1 fusion gene was identified. This chromosomal band was not identified by differential gene expression A typical microarray data set reports expression levels for tens of thousands of genes. For example, both sets of Childhood Leukemia data HYPERDIP and TEL [^21] have expression levels for m = 7, 084 genes updated from the original expression levels by using a custom CDF file to produce values of gene expressions. The CDF files can be found at http://brainarray.mbni.med.umich.edu. Please see [^19 ] for more details. Each slide is then represented by an array reporting the logarithm (base 2) of expression level on the set of 7,084 genes. For convenience, the words "gene" and "gene expression" are used interchangeably to refer to these gene expressions in this paper. Due to such a high dimensionality, the computation of traditional gene differential expression analysis is considered to be more time consuming than many traditional statistical analyses in medical research. A gene selection procedure based on gene dependence structures has to be even more computationally intensive. This is because the dependence structure is typically measured by a pertinent association score, such as the Pearson correlation coefficient for all gene pairs, of which the multiplicity (dimensionality) is m(m-1)2 instead of m. It is therefore more computationally intensive to detect the differences hidden in the correlation matrix. In particular, for the procedure proposed in [^19], the length of the computation is O(m × m × n × K), where m = 7, 084 is the number of genes, n = 79 is the number of subjects in each phenotypic group, and K = 10, 000 is the number of permutations for approximating the statistical null distribution. Such large number of permutations is necessary because statistical inference for microarray analysis is based on multiple testing adjusted p-values, which demands much finer estimation of unadjusted p-values compared to regular permutation tests. With a large number of genes and a medium sample size, running GDAA can take several days or even a month. For example, a sequential implementation of the procedure in [^19] took nearly two months to complete the calculation on a computer with a 2 GHz AMD Opteron processor and 2GB SDRAM. Until about 2003, processor designers were able to leverage technology advances that allowed increasing numbers of smaller and faster transistors on a single chip in order to improve the performance of sequential computation. Hence, it was possible for computational scientists who wanted their codes to run faster to simply wait for the next generation of machines. However, the reality is that around 2003, chipmakers discovered that they were no longer able to sustain faster sequential execution due to the inability to dissipate the heat generated by the computation [^22]. Consequently, designers turned to using the increasing transistor counts to add more processors, each of which execute independent sequential computation. The processors typically share access to the memory subsystem and off-chip bandwidth. These multicore chips now dominate the desktop market and are used to put together multiprocessor servers consisting of multiple processor chips, as well as networked clusters of such servers for high-end computation. Parallel computing (utilizing multiple compute resources simultaneously for the same application) that effectively leverages these increasingly multicore clusters of multiprocessors is thus even more critical than in the past in order to obtain results in a timely manner. In this paper, we propose a new parallel design for the gene differential association analysis procedure in [^19]. The key to our parallelization strategy is that it takes advantage of both fine and coarse-grain parallelism (the granularity representing the frequency of sharing/communication in the concurrent computation). The hardware-based memory sharing within a multicore is utilized for the fine-grain parallelism (with higher need for sharing/communication). Sharing memory in hardware avoids the need for data replication. Since GDAA utilizes a multivariate nonparametric test, it has more memory needs than a comparable gene differential expression analysis. Therefore, the memory sharing feature in our strategy is also critical to reducing the bandwidth demands of the GDAA procedure. The results show that our strategy leverages GDAA's characteristics to reduce the memory and bandwidth needs of the application, thereby improving computational efficiency. Gene Differential Association Analysis Procedure We outline the related GDAA procedure below. More details can be found in [^19]. Statistical Hypothesis Testing Assume there are two biological conditions or phenotypes A and B. Under each condition n subjects are sampled, each measured with m gene expression levels. We denote these gene expressions by {x[ij]}, 1 ≤ i ≤ m and 1 ≤ j ≤ n. For the ith gene, we first compute an (m - 1)-dimensional random vector r[i ]= (r[i1], ⋯, r[i,i-1], r[i,i+1], ⋯, r[im]). Here r [ik ]is the Pearson correlation coefficient between the ith and the kth gene, i.e., │rik=n∑l=1nxilxkl-∑l=1nxil ∑l=1nxkln ∑l=1nxil2-(∑l=1nxil)2n ∑l=1nxkl2-(∑l=1nxkl)2.│ Fisher transformation is then applied to these correlation coefficients: where k = 1, ⋯, i - 1, i + 1, ⋯, m. We denote the correlation vectors (w[i1], ⋯, w[i,i-1], w[i,i+1], ⋯, w[im]) by w[i]. This vector represents the relationship between the ith gene and all other For the ith gene, its correlation vectors under conditions A and B are denoted by w[i](A) and w[i](B), respectively. We test the null hypotheses where Fwi(A)(x) and Fwi(B)(x) are the joint distribution functions of w[i](A) and w[i](B), respectively. If H[i ]is rejected, we declare the ith gene to be a differentially associated gene. The N-statistic In order to test H[i], we need to create samples of correlation vectors to mimic the joint distributions Fwi(A)(x) and Fwi(B)(x), respectively. We divide the dataset under condition A intoG(1≤G≤n2) subgroups, each subgroup containing nG subjects. In order to compute correlation coefficients, every subgroup must contain at least two subjects. Sample sizes of subgroups do not have to be equal. When G does not divide n, the last few subgroups can have a slightly larger or smaller sample size. That being said, an approximately even partition of subgroups is still desirable because it leads to better statistical power than unbalanced partitions. From these subgroups, we compute a sample of size G correlation vectors for the ith gene, denoted by w[i](A, k), 1 ≤ k ≤ G. Similarly, we have a sample of size G correlation vectors for the ith gene under condition B, denoted by w[i](B, k), 1 ≤ k ≤ G. Next, H[i ]is tested by a multivariate nonparametric test based on the N-statistic. This statistic has been successfully used to select differentially expressed genes and gene combinations in microarray data analysis [^23^-^26]. The N-statistic is defined as follows: [Formula ID: │Ni=2G2 ∑k=1G∑l=1GL(wi(A,k),wi(B,l))-1G2 ∑k=1G∑l=1GL(wi(A,k),wi(A,l))-1G2 ∑k=1G∑l=1GL(wi(B,k),wi(B,l)),│ where L is the kernel defined by Euclidean distance, i.e., The N-statistic can serve as a measurement of how much the inter-gene correlation structure of the ith gene has changed from condition A to condition B. Permutation-based Null Distribution and p-value Denote Ni* as the N-statistic associated with the ith gene. To determine the statistical significance of Ni*, which is represented by a p-value, we need to model the null distribution of this statistic. This can be done by the following resampling method. First, we combine the gene expression data under both conditions and randomly permute subjects. Then we divide them into two groups of equal size, mimicking two biological conditions without differentially associated genes. By applying formula (1), we get a permutation based N-statistic for the ith gene, which can be considered as an observation from the null distribution of N[i], i.e., the distribution of N[i ]when H[i ]holds. Repeating this permutation process K times produces K permutation based N-statistics for the ith gene, denoted by N[ik], 1 ≤ k ≤ K. p[i], the permutation based p-value for testing H[i], is computed as the proportion of N[ik ]that is greater than or equal to Ni*: To control per-family error rate (PFER), we apply the extended Bonferroni adjustment [^27] to the above p-values to obtain the adjusted p-values The smaller p˜i is, the more likely w[i](A) is different from w[i](B), i.e., the ith gene changes its relationship with all other genes across conditions A and B. If p˜i is less than a pre-defined threshold, we reject H[i ]and declare the ith gene to be a differentially associated gene. Summary of the GDAA Procedure The above GDAA procedure can be summarized as follows: 1. Divide the subjects (slides) under each condition (A or B) into G subgroups such that there are approximately nG subjects for each subgroup. 2. For each gene, compute its correlation vectors from all subgroups. This step produces G correlation vectors for one gene in each condition. 3. Compute the N-statistic for the ith gene from these 2 × G samples using Equation(1) and record it as Ni*. 4. Pool the subjects in both conditions together. Randomly shuffle the subjects, and then split them into two groups of equal size. 5. Divide the subjects in each group into G subgroups and compute G correlation vectors in each subgroup for each gene. 6. Compute the N-statistics for each gene based on these 2 × G correlation vectors. 7. Repeat steps 4 to 6 K times and record the permutation-based N-statistics as N[ik], i = 1, ..., m, k = 1, ..., K. 8. Obtain the permutation-based p-value, p[i], using Equation(2). 9. Adjust p-value by using Equation(3). Select differentially associated genes based on the adjusted p-values and a pre-specified PFER level. Our parallel design is implemented using Python and C++. Python is in charge of initializing data and all communication between the master process and any slave processes -- sending out computation jobs and collecting results. C++ is used to perform the actual computation within each independent process. A high-level language such as Python provides ease of use and flexibility, especially for data initialization and coordination, but at the cost of performance. By limiting the use of Python to the initialization and coordination with the slaves (where the program spends a very small percentage of its overall time) and using C++ for the computationally intensive portions of the program, we get the best of both worlds: the flexibility of Python and the performance of C++. The use of other languages such as R instead of Python is also possible. The execution proceeds as follows: 1. Read in and initialize data (performed in Python on the master process). 2. Calculate Ni*, 1 ≤ i ≤ m, for the unpermuted dataset using a single core (C++ code) on the master process. 3. Create K permutations of the original dataset; distribute the permutations k (1 ≤ k ≤ K) to independent slave processes (performed in Python) using MPI [^28]. Work is distributed at the granularity of a single permutation -- when a process completes the computation for one permutation, it requests the next permutation. 4. Each worker/slave receives a permutation (using Python), permutes its local copy of the data, and then computes the vector of N-statistics using C++, parallelized using the Pthreads [^29] package. A total number of P threads are created and the per-gene computation is distributed among threads so that each thread performs the N-statistic computation for mP genes. When m is not divisible by P, each thread receives a slightly different number of genes. 5. Once an MPI process has determined that its threads have computed all N-statistics (N[ik], 1 ≤ i ≤ m) for the kth permutation it was assigned, it then returns them to the master MPI process. 6. The master MPI process collects all the N[ik ]to calculate p-values p[i ](performed in Python). Steps 1 and 2 of the algorithm are performed sequentially. To parallelize the remaining steps, we use a two-tiered approach. At the first level, we distribute the work by spawning processes from Python using MPI. One MPI process, the "master" process, is responsible for distributing different permutations to the other "slave" processes. Each slave independently permutes the gene data according to the permutation indices received from the master process and computes the N-statistics for the permuted data (this code is optimized C++ code). The computed values (the vector of N -statistics) are then returned to the master using an MPI message, where the Python code calculates the p-value. The key to this implementation is that the core computation is performed in optimized C++ code. The second level of parallelization occurs within the slaves. When computing the N-statistics, each slave (MPI process) forks off a specified number of threads, each of which computes the permutation's N-statistics for a subset of genes. This allows us to vary the parallelization between MPI processes (which split the work by permutations) and threads (which divide the work by genes). For example, with one quad-core processor on a shared memory architecture, we can run one slave MPI process with four threads, 2 MPI processes each with two threads, or four MPI processes each running a single thread. Splitting the work between MPI processes versus threads has implications for performance and memory usage, which we will highlight in the evaluation section. This hierarchical design is also illustrated in the flowchart of Figure 1. Data Sharing Gene expression level data in each biological condition is represented using a dynamically allocated (m × n)-dimensional array, where n is the number of subjects and m is the number of genes. This two-dimensional array is read-shared within each MPI process and its size grows as a product of m and n. There are two other dynamically allocated two-dimensional arrays created for each MPI process. These two arrays with sizes proportional to (m × G) are used for temporary storage of the correlation computation. One stores the sums of the expression levels within subgroups so that its entry in row i and column j is ∑l∈Subgroupjxil(k). Here xil(k) is the expression level for gene i and subject l in the kth permutation. The other stores the sums of squares of the expression levels within subgroups so that its entry in row i and column j is ∑l∈Subgroupj(xil(k))2. They are also read shared within the MPI process. Another two-dimensional dynamically allocated array with size proportional to O(m × G) is created for each thread, storing the correlation vectors for each gene. This array is both read and written by the thread. The N-statistic, which is a vector with the size of the number of genes m, is also shared across all threads within each MPI process. Each thread writes to independent regions of this vector based on the genes allocated to it. Our evaluation is conducted on a cluster of five machines, each with 16 GBytes of memory and two 3.0 GHz quad-core Intel Xeon 5450 processors, for a total of 40 processors. The machines are interconnected using Gigabit ethernet. Each quad-core processor chip has 6 MBytes of last-level cache per pair of cores. Each machine runs Linux version 2.6.16.60-0.21-smp. The application was compiled using Python version 2.4.2, Pypar version 2.1.0, and gcc version 4.1.2 at the -O2 optimization level. Simulation Data To gain better insight into the effects of different configurations on performance, we simulate several sets of data. Each set has two groups of n = 100 slides representing 100 subjects in each biological conditions. Each array has m genes, where m takes on values in {1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000}. The slides in each group are divided into G = 10 subgroups to calculate correlation vector samples. K = 100 permutations of the subjects in the two groups are created in order to generate the null distributions of N-statistics. Performance Analysis Figures 2 and 3 present the execution time (measured from the time after calculation of the unpermuted statistic) as a function of the number of genes in the dataset, with the operating system default scheduling (Figure 2) and with each thread/process pinned (more detailed explanation to follow) so it executes only on one specific processor/core (Figure 3). While we report three times in the figures to show the variation in the results, we repeated the timing-based execution several times to ensure consistency of the results. The quad-core processor running the Python script is not used for parallel computation. The number of MPI processes forked, and correspondingly the number of threads used per MPI process, is varied. More specifically, the four sets of curves represent 32 single-threaded MPI processes, 16 dual-threaded MPI processes, 8 4-threaded MPI processes, and 4 8-threaded MPI processes, respectively. We also applied 1, 2, 4, and 8 threaded strategies to a dataset of 7000 genes while varying the number of cores (or quad-core processors) used. Figures 4 and 5 present the speedup (execution time on a single core/using a single thread divided by the execution time of the parallel implementation with the specified number of cores/threads) as the number of cores (or quad-core processors) is varied, using the operating system default scheduling (Figure 4) and with each thread/process pinned so it executes only on one specific processor/core (Figure 5). As shown in Figures 2 and 3, using multiple threads per MPI process outperforms the 1 thread strategy substantially. As an example, according to Figure 3, when the number of genes m = 10, 000, the average execution time for the 2 threaded strategy is 1211 seconds, which represents about 70% performance gain compared to the 1 threaded strategy (2077 seconds). When using only MPI processes (1 threaded strategy), there is no data sharing among the processes. All communication is strictly via messages. As the number of threads increases, Figure 6 shows that the total amount of memory required per machine goes down as the number of MPI processes decreases and the number of threads per MPI process increases. This is a result of the data sharing in the parallel threaded implementation, as described in the Data Sharing section. Parallelizing purely at the MPI level results in multiple copies of the data structures being created and exerts more pressure on the memory as well as any shared cache in the system. On our experimental platform, the last-level cache has a size of 6 MBytes, which is shared between two cores in a physical package (quad-core processor). When the working set of the processes/threads executing on these cores exceeds the capacity of the 6 MByte shared cache, some performance will be lost. Using threads allows the cores to share space in the cache more effectively and has the added benefit of reducing memory latency due to the prefetching effect of 1 core on the other. In addition, reducing the number of permutations (MPI processes) computed on at the same time reduces the pressure on the communication link with the master process, which must coordinate and communicate with each MPI process and can therefore result in a bottleneck. Any coarse-grain load imbalance at the permutation level is also mitigated. On our platform, we also observe some anomalies in behavior -- faster performance was observed using 2 threads per slave MPI process rather than with 4 threads (see Figure 2 and 4). In addition, the variance in performance across runs is high, especially in the 2 threaded runs. The 2 threaded strategy represents the sweet spot in terms of leveraging shared resources on this architecture (a 6 MByte cache shared by 2 cores), presuming that the 2 threads strategy execute on cores that share a cache. Our hypothesis is that the default operating system scheduling of the threads does not ensure this affinity. To confirm our hypothesis, we add code to force thread affinity -- each thread is pinned to a particular core while ensuring that threads within a process share a cache and remain within a single chip when possible. The resulting performance, shown in Figure 3 and 5, corroborates our hypothesis. The variance in performance is no longer observed. Most of the efficiency gains from sharing across threads is observed when using 2 threads, i.e., when the parallelization matches the underlying physical characteristics of the machine and leverages the shared cache between 2 cores. Additional performance benefits beyond 2 threads are small. More specifically, the 2, 4, and 8 threaded strategies show only small differences in performance once the threads are pinned to ensure cache sharing. In Figure 3, the 8 threaded strategy is a little better if the number of genes is between 3000 and 7000. Otherwise, the 4 threaded strategy shows slightly better performance. These variations across different numbers of threads come from differences in load balance at the Pthread and MPI parallelization levels. In Figures 4 and 5, we notice that the speedup curves are not very smooth. This step function can be attributed to several causes. The first is load imbalance due to the granularity of workload distribution -- permutations at the MPI parallelization level and genes at the Pthread parallelization level. When using 1 thread per MPI process to conduct 100 permutations, as one example, with 5 processors (20 cores), each core runs 5 permutations (⌈100/20⌉). If we increase the number of processors to 6 (24 cores), some cores will still execute 5 permutations while others execute 4, so that execution time remains proportional to ⌈100/24⌉ = 5, resulting in practically no increase in speedup. As the number of permutations executed per MPI process decreases (with an increasing number of cores), the fraction of idle/wasted time on the cores with one less permutation to execute increases, resulting in lower efficiency. In the case of Figure 4, the increased scheduling variability and poor choice of scheduling when adding a quad-core processor within a machine also contributes to the step function in the 2 and 4 threaded curves. Once the scheduling is made both deterministic and ensures appropriate cache sharing, the step function is less pronounced in the multi-threaded runs in Figure 5 due to their reduced memory bandwidth demands and smoother load function at the MPI Microarray technology has made it possible for medical researchers to measure and study the behavior of thousands of genes at once. Technology advances have been on a fast track in recent years, making it possible to conduct microarray experiments much faster and less expensive than in the past. This trend has been leveraged with the availability of larger and larger datasets. Turning so much raw information into knowledge presents a major challenge for both statistical analysis and computation. As of now, microarray data are used for crude screening of differentially expressed genes. Exploiting the rich information contained in the inter-gene dependence structure has not become a routine, despite the availability of several gene association analysis procedures. This is largely due to the computing bottleneck. In this paper, we present a parallelized implementation of gene differential association analysis that is designed to leverage the features of today's multicore platforms in which resources are shared among processors at a much finer granularity than in the past. We apply the conventional wisdom of parallelizing at the coarsest granularity to distribute permutations among the nodes in a cluster, using MPI for communication. In addition, we parallelize at the finer granularity of per-gene computation within a single dual quad-core machine using shared memory (Pthreads). Sharing memory across threads helps reduce demand for the shared last-level cache capacity on the chip by allowing independent threads to share a single copy of the gene expression data. Our results show that this strategy utilizes the multicore cluster platform much more effectively. In general, the performance sweet spot occurs when using a number of threads that allows the working sets of the corresponding MPI processes to fit within the machine's shared cache. We suggest that practitioners follow the principle of determining what resources are shared when making decisions on how to allocate compute resources among MPI processes and threads for their cluster machines. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. Availability and Requirements • Project name: Hierarchical Parallelization of Gene Differential Association Analysis; • Project home page: http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm; • Operating system: Linux; • Programming language: Python and C++; • Other requirements: MPI (MPICH2 or Open MPI), Python, C++ Compilation tools, SWIG, Numpy, Pypar; • Licence: GNU GENERAL PUBLIC LICENSE, Version 2, June 1991; • No restrictions to use by non-academics. GDAA: Gene Differential Association Analysis; MTP: Multiple Testing Procedure; MPI: Message Passing Interface; Pthreads: POSIX Threads. Authors' contributions The basic idea was first proposed by RH, SD, and XQ. The detailed study design was developed by all members of the research team. MN carried out the needed computations and simulations and the majority of the software development. All authors have read and approved the final manuscript. This work was supported in part by NSF grants CCF-0702505, CNS-0411127, CNS-0615139, CNS-0834451, CNS-0509270, and CCF-1016902; and NIH grants 5 R21 GM079259-02, 1 R21 HG004648-01, and NCRR UL1 RR024160. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the above-named organizations. In addition, we would like to thank Ms. Christine Brower for her technical assistance with computing and Ms. Malora Zavaglia for her proofreading effort. Finally we are grateful to the associated editor and two anonymous reviewers for their constructive comments which helped us improve the manuscript. Klebanov L,Jordan C,Yakovlev A,A new type of stochastic dependence revealed in gene expression dataStat Appl Genet Mol BiolYear: 20065Article7http://dx.doi.org/10.2202/1544-6115.118916646871 Bhardwaj N,Lu H,Correlation between gene expression profiles and protein-protein interactions within and across genomesBioinformaticsYear: 2005211127302738http://dx.doi.org/10.1093/bioinformatics/ Mootha V,Lindgren C,Eriksson K,Subramanian A,Sihag S,Lehar J,Puigserver P,Carlsson E,Ridderstråle M,Laurila E,et al. PGC-1 α-responsive genes involved in oxidative phosphorylation are coordinately downregulated in human diabetesNature geneticsYear: 200334326727310.1038/ng118012808457 Subramanian A,Tamayo P,Mootha V,Mukherjee S,Ebert B,Gillette M,Paulovich A,Pomeroy S,Golub T,Lander E,et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profilesProceedings of the National Academy of SciencesYear: 200510243155451555010.1073/pnas.0506580102 Raychaudhuri S,Stuart J,Altman R,Principal components analysis to summarize microarray experiments: application to sporulation time seriesPac Symp BiocomputYear: 20005455466 Liu A,Zhang Y,Gehan E,Clarke R,Block principal component analysis with application to gene microarray data classificationStatistics in medicineYear: 20022122 Wang A,Gehan E,Gene selection for microarray data analysis using principal component analysisStatistics in medicineYear: 20052413 Eisen M,Spellman P,Brown P,Botstein D,Cluster analysis and display of genome-wide expression patternsProceedings of the National Academy of SciencesYear: 19989525148631486810.1073/pnas.95.25.14863 Törönen P,Kolehmainen M,Wong G,Castrén E,Analysis of gene expression data using self-organizing mapsFEBS lettersYear: 1999451214214610.1016/S0014-5793(99)00524-410371154 Furey T,Cristianini N,Duffy N,Bednarski D,Schummer M,Haussler D,Support vector machine classification and validation of cancer tissue samples using microarray expression dataYear: 2000 Brown M,Grundy W,Lin D,Cristianini N,Sugnet C,Furey T,Ares M,Haussler D,Knowledge-based analysis of microarray gene expression data by using support vector machinesProceedings of the National Academy of SciencesYear: 20009726226710.1073/pnas.97.1.262 Bahar I,Atilgan AR,Erman B,Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potentialFold DesYear: 19972317318110.1016/S1359-0278(97)00024-29218955 Friedman N,Inferring cellular networks using probabilistic graphical modelsScienceYear: 20043035659799805http://dx.doi.org/10.1126/science.109406810.1126/science.109406814764868 Opgen-Rhein R,Strimmer K,From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression dataBMC Syst BiolYear: Li K,Genome-wide coexpression dynamics: theory and applicationProceedings of the National Academy of SciencesYear: 20029926168751688010.1073/pnas.252466999 Lai Y,Wu B,Chen L,Zhao H,A statistical method for identifying differential gene-gene co-expression patternsBioinformaticsYear: 2004201731463155http://dx.doi.org/10.1093/bioinformatics/bth37910.1093/ Shedden K,Taylor J,Differential correlation detects complex associations between gene expression and clinical outcomes in lung adenocarcinomasMethods of Microarray Data Analysis IVYear: 2005121131 Choi J,Yu U,Yoo O,Kim S,Differential coexpression analysis using microarray data and its application to human cancerBioinformaticsYear: 200521244348435510.1093/bioinformatics/bti72216234317 Hu R,Qiu X,Glazko G,Klebanov L,Yakovlev A,Detecting intergene correlation changes in microarray analysis: a new approach to gene selectionBMC BioinformaticsYear: 20091020http://dx.doi.org/10.1186/ Hu R,Qiu X,Glazko G,A new gene selection procedure based on the covariance distanceBioinformaticsYear: 2010263348354http://dx.doi.org/10.1093/bioinformatics/btp67210.1093/bioinformatics/ Yeoh EJ,Ross ME,Shurtleff SA,Williams WK,Patel D,Mahfouz R,Behm FG,Raimondi SC,Relling MV,Patel A,Cheng C,Campana D,Wilkins D,Zhou X,Li J,Liu H,Pui CH,Evans WE,Naeve C,Wong L,Downing JR,Classification, subtype discovery, and prediction of outcome in pediatric acute lymphoblastic leukemia by gene expression profilingCancer CellYear: 20021213314310.1016/S1535-6108(02) Patterson D,The Trouble with Multicore MicroprocessorsIEEE SpectrumYear: 20102832 Szabo A,Boucher K,Carroll W,Klebanov L,Tsodikov A,Yakovlev A,Variable selection and pattern recognition with gene expression data generated by the microarray technologyMathematical BiosciencesYear: Szabo A,Boucher K,Jones D,Tsodikov AD,Klebanov LB,Yakovlev AY,Multivariate exploratory tools for microarray data analysisBiostatisticsYear: 200344555567http://dx.doi.org/10.1093/biostatistics/ Xiao Y,Frisina R,Gordon A,Klebanov L,Yakovlev A,Multivariate search for differentially expressed gene combinationsBMC BioinformaticsYear: 20045164http://dx.doi.org/10.1186/1471-2105-5-16410.1186/ Klebanov L,Gordon A,Xiao Y,Land H,Yakovlev A,A permutation test motivated by microarray data analysisComputational Statistics and Data AnalysisYear: 2005 Gordon A,Glazko G,Qiu X,Yakovlev A,Control of the Mean Number of False Discoveries, Bonferroni, and Stability of Multiple TestingThe Annals of Applied StatisticsYear: 20071179190http:// Message Passing Interface ForumMPI: A Message-Passing Interface Standard, Version 2.2Year: 2009http://www.mpi-forum.org/docs/ Barney B,POSIX Threads ProgrammingYear: 2011https://computing.llnl.gov/tutorials/pthreads/ Figure 1 ID: F1] Flowchart of hierarchical parallelization design. Figure 2 ID: F2] Execution time with default OS scheduling. Number of slides in each condition: 100. Number of permutations: 100. Numbers of MPI processes × threads: 32 × 1 (solid), 16 × 2 (dash), 8 × 4 (dot), and 4 × 8 (dash-dot). Figure 3 ID: F3] Execution time with pinned processes. Number of slides in each condition: 100. Number of permutations: 100. Numbers of MPI processes × threads: 32 × 1 (solid), 16 × 2 (dash), 8 × 4 (dot), and 4 × 8 (dash-dot). Figure 4 ID: F4] Speedup with default OS scheduling. Number of genes: 7000. Number of slides in each condition: 100. Number of permutations: 100. Numbers of MPI processes × threads: (number of cores) × 1 (solid), (number of cores/2) × 2 (dash), (number of cores/4) × 4 (dot), and (number of cores/8) × 8 (dash-dot). Figure 5 ID: F5] Speedup with pinned processes. Number of genes: 7000. Number of slides in each condition: 100. Number of permutations: 100. Numbers of MPI processes × threads: (number of cores) × 1 (solid), (number of cores/2) × 2 (dash), (number of cores/4) × 4 (dot), and (number of cores/8) × 8 (dash-dot). Figure 6 ID: F6] Memory usage. Number of slides in each condition: 100. Number of permutations: 100. Numbers of MPI processes × threads: 32 × 1 (solid), 16 × 2 (dash), 8 × 4 (dot), and 4 × 8 (dash-dot). Article Categories: • Software Previous Document: Prevalence of scarred and dysfunctional myocardium in patients with heart failure of ischaemic origi... Next Document: The genetic basis of salinity tolerance traits in Arctic charr (Salvelinus alpinus).
{"url":"http://www.biomedsearch.com/nih/Hierarchical-Parallelization-Gene-Differential-Association/21936916.html","timestamp":"2014-04-17T04:52:38Z","content_type":null,"content_length":"67969","record_id":"<urn:uuid:c509ca76-b25e-483c-b4c8-3c52aa171499>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
irreducible constituent in Normal subgroup up vote 1 down vote favorite Let $G$ be a finite group and $N$ be a normal subgoup of G. Suppose that $\chi \in Irr(G)$. If $\theta , \lambda \in Irr(N)$, such that $[\chi_{N}, \theta]>0$ , $[\chi_{N}, \lambda]>0$, is it true that $\theta(1)=\lambda(1)$? On the other hand, irreducible constituents of $\chi_{N}$ are uniqe? add comment 1 Answer active oldest votes Yes, it is true ( all irreducible constituents of the restriction of an irreducible character to a normal subgroup all have equal degree). This is part of Clifford's theorem. It actually applies not just to complex irreducible characters or representations, but to irreducible representations over any field. In the case of complex characters, the uniqueness of the consituents of the restricted character follows because of the fact that the irreducible characters of $N$ form an orthonormal basis for the space of class functions of $N$ with respect to the usual inner product of class functions. For representations over other fields, the Jordan Holder theorem can also be used. up vote 2 down vote Later edit: Clifford's theorem can be found in many texts: the module version states that if $S$ is a simple $FG$-module where $G$ is a finite group and $F$ is a field, and we have a accepted normal subgroup $N$ of $G$, then ${\rm Res}^{G}_{N}(S)$ is semisimple, and all simple$FN$-summands are conjugate under the action of $G$ (so, in particular, all have the same $F$-dimension) and al occur with the same multiplicity as summands of the restricted module. add comment Not the answer you're looking for? Browse other questions tagged representation or ask your own question.
{"url":"http://mathoverflow.net/questions/105513/irreducible-constituent-in-normal-subgroup","timestamp":"2014-04-20T01:03:55Z","content_type":null,"content_length":"48295","record_id":"<urn:uuid:c5ff5933-6eb8-4424-8c48-680e7284fc2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Two's complement with unsigned integers I thought that the compiler would know because both operands are of the same type - and therefore assume that no casting/conversion is necessary. If I run the same experiment with unsigned int, I get the expected result. The size of an unsigned int in my system is 4 bytes (32 bits) and the result from the u-u2 expression is 4294967264 (11111111111111111111111111100000 in binary), that is, unsigned. So in this case the code produces the result I had expected, but for unsigned short (16 bits) it seems to treat the result (1111111111100000 in binary) as a signed value. It's confusing to me. @squarehead.. after explanation of vald from moscow you might get why we need unsigned short a = u - u2;. Last edited on I'm guessing this is a compiler/system specific peculiarity. Perhaps in a 64-bit system the operands would be converted to 64-bit values at calculation time, because the int type is represented by 64 bits in memory? Thank you both for your teaching me something new today! Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/beginner/80507/","timestamp":"2014-04-18T03:03:41Z","content_type":null,"content_length":"13022","record_id":"<urn:uuid:e80df142-cbae-4261-af72-eb8e2ab915ef>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
(data structure) Definition: A graph whose edges are unordered pairs of vertices, and the same pair of vertices can be connected by multiple edges. Formal Definition: Same as graph, but E is a bag of edges, not a set. Generalization (I am a kind of ...) Aggregate parent (I am a part of or used in ...) Christofides algorithm. See also hypergraph. Note: A definition of "pseudograph" is a multigraph that may have self-loops. Author: PEB Go to the Dictionary of Algorithms and Data Structures home page. If you have suggestions, corrections, or comments, please get in touch with Paul Black. Entry modified 2 February 2006. HTML page formatted Mon Nov 18 10:44:10 2013. Cite this as: Paul E. Black, "multigraph", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 2 February 2006. (accessed TODAY) Available from: http://www.nist.gov/
{"url":"http://xlinux.nist.gov/dads/HTML/multigraph.html","timestamp":"2014-04-16T04:12:31Z","content_type":null,"content_length":"2915","record_id":"<urn:uuid:02dffbf3-8a66-4ee0-b83b-5e9de8c1fc38>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Polymorphism is Set Theoretic, Constructively We consider models in toposes of equational theories over the type system consisting of the Girard-Reynolds polymorphic lambda calculus augmented with finite product types. The particular notion of model we use is very straightforward, with polymorphic product types and function types both being interpreted in a standard way in the topos - the first by internal products and the second by exponentiation. We show that every hyperdoctrine model of a polymorphic lambda theory can be fully embedded in such a topos model, the topos constructed being simply a functor category. There are precise correspondences between polymorphic lambda theories and their hyperdoctrine models, and between toposes and theories in higher order intuitionistic predicate logic. So we can conclude that every theory of the first kind can be interpreted in a theory of the second kind in such a way that the polymorphic types are interpreted in a standard way, but so that up to provability in the higher order theory, they have exactly the same closed terms as before. A simple corollary of this full embedding result is the completeness of topos models: for each polymorphic lambda theory there is a topos model whose valid equations are exactly those derivable in the theory.
{"url":"http://www.cl.cam.ac.uk/~amp12/papers/polist/polist.html","timestamp":"2014-04-16T07:50:04Z","content_type":null,"content_length":"1893","record_id":"<urn:uuid:1d73ebef-f4a3-4bdc-8d82-e14afecf3ef7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
determinat value May 13th 2012, 05:54 AM #1 Jan 2011 determinat value if A is symmetric matrix of odd order and a[ii]=0 for all i then prove that determinant of A is an even number. please explain. Re: determinat value Has the matrix real entries? Re: determinat value yes, the entries are integers, other than diagonal elements. Re: determinat value We can use the definition of the determinant. Since $a_{ii}=0$, we can sum over the permutations without fixed point. Since these permutations act over a set which have an odd number of elements, these one have a square different from the identity. So you can write the sum as two equal parts, which will give an even number. May 15th 2012, 12:12 AM #2 May 15th 2012, 09:53 AM #3 Jan 2011 May 17th 2012, 02:16 AM #4
{"url":"http://mathhelpforum.com/advanced-algebra/198756-determinat-value.html","timestamp":"2014-04-19T21:21:20Z","content_type":null,"content_length":"34090","record_id":"<urn:uuid:574b2f34-d2df-4a0f-89f8-50e23885dc91>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Centralhatchee, GA Math Tutor Find a Centralhatchee, GA Math Tutor ...I have taught for four years in a public school setting and am now looking to extend my skills into private tutoring. I was a Science major at the University of Georgia, then decided I wanted to teach and received my teaching certificate. Recently, I finished my Master's degree in Education. 11 Subjects: including algebra 1, algebra 2, biology, chemistry ...My favorite subjects include math, science and computer applications. I currently teach Biology and have taught and tutored other subjects such as algebra, chemistry and physical science. Since you (as a student) are living in an era of testing, I also incorporate test-taking strategies/tips in... 19 Subjects: including geometry, GED, anatomy, statistics ...I have been a math tutor in the past for subjects such as Algebra I, II, and geometry. I have also assisted students in studying for the SAT and ACT. My scores for both tests were 1980 (super scored SAT) 1880 (highest score SAT) and 28 (ACT). I am a member of Pi Sigma Alpha, the Political Science Honor Society. 7 Subjects: including SAT math, PSAT, political science, algebra 1 ...I currently work with students with Asperger's, ADHD, etc. I am certified through the State of GA in Special Education and currently work with special needs students-ADD/ADHD and Aspergers. I am currently certified in Special Education through the State of GA and am currently working with Aspergers students. 48 Subjects: including algebra 1, ACT Math, biology, reading ...I received my masters in science with a focus on educational courses. I am gifted certified by the state of Georgia and am also a certified teacher with 9+ years of successful teaching. I have taught in the middle/elementary grade levels. 25 Subjects: including algebra 1, probability, reading, precalculus Related Centralhatchee, GA Tutors Centralhatchee, GA Accounting Tutors Centralhatchee, GA ACT Tutors Centralhatchee, GA Algebra Tutors Centralhatchee, GA Algebra 2 Tutors Centralhatchee, GA Calculus Tutors Centralhatchee, GA Geometry Tutors Centralhatchee, GA Math Tutors Centralhatchee, GA Prealgebra Tutors Centralhatchee, GA Precalculus Tutors Centralhatchee, GA SAT Tutors Centralhatchee, GA SAT Math Tutors Centralhatchee, GA Science Tutors Centralhatchee, GA Statistics Tutors Centralhatchee, GA Trigonometry Tutors Nearby Cities With Math Tutor Bowdon Junction Math Tutors Chattahoochee Hills, GA Math Tutors Edwardsville, AL Math Tutors Five Points, AL Math Tutors Franklin, GA Math Tutors Glenn, GA Math Tutors Graham, AL Math Tutors Greenville, GA Math Tutors Mount Zion, GA Math Tutors Palmetto, GA Math Tutors Roanoke, AL Math Tutors Sargent, GA Math Tutors Turin, GA Math Tutors Wadley, AL Math Tutors Woodland, AL Math Tutors
{"url":"http://www.purplemath.com/Centralhatchee_GA_Math_tutors.php","timestamp":"2014-04-20T21:17:21Z","content_type":null,"content_length":"24106","record_id":"<urn:uuid:bbe16fbc-d312-46cd-80fe-8c4cf3486586>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Syllabus For Preliminary Examination In CAD Revised March 2011 Prelim Format • Each student will be examined by the entire group of examiners. The exam will last about 1h; expect 3 questions of 20m each. The format is oral. As a component of the exam, you may be asked questions somewhat outside the syllabus or your domain of knowledge. It is understood that you may not know the answer; the intent is to see how you approach such problems. Syllabus and Materials (effective Fall 2011) 1. Timing □ Fundamental algorithms for timing analysis of circuits and systems. Concepts include setup/hold time, slack, clock skew, critical paths, DAG timing models, static and incremental timing analysis, longest path algorithms, false paths, delay intervals, controlling values, static sensitization and co-sensitization, static timing analysis, linear and nonlinear optimization frameworks for timing. Re-timing: goals, graph abstractions, register insertion/deletion concepts for re-timing, re-timing formulation and solution techniques (Bellman-Ford, FEAS). Obtaining delays using continuous-time circuit and interconnect analysis. Applications to software and circuits. □ Suggested reading: 2. Continuous-Time Modelling and Simulation □ Modelling continuous-time systems (circuits, simple mechanical systems, reaction rate equations) as differential-algebraic equations. Quiescent steady-state analysis; the Newton-Raphson algorithm. Solving sparse systems of linear equations: Gaussian Elimination and LU factorization. Numerical solution of differential equations: existence/uniqueness, Picard-Lindelof theorem, Lipschitz condition; ODE solution fundamentals; Forward Euler, Backward Euler, Trapezoidal methods; LMS methods; use of ODE techniques for DAEs; stability of LMS methods; accuracy, truncation error of LMS methods. Sinusoidal steady state analysis of linear time-invariant systems: DAE linearization, frequency-domain computation of sinusoidal steady state responses, connection with Laplace transforms. Stationary noise analysis of linear time invariant systems: propagation of stationary noise through LTI systems, transfer functions from DAEs, direct and adjoint computation of noise power spectral densities. □ Suggested reading: ☆ A. Sangiovanni-Vincentelli, "Circuit Simulation", in Design Systems for VLSI Synthesis, Martinus Nijhoff Publishers, 1987. ☆ J. Roychowdhury, Numerical Simulation and Modelling of Electronic and Biochemical Systems, Foundations and TrendsĀ® in Electronic Design Automation: Vol. 3: No 2-3, pp 97-303, 2009. Chaps ☆ L.O. Chua and P.-M. Lin, Computer-Aided Analysis of Electronic Circuits: Algorithms and Computational Techniques, Prentice-Hall, 1975. Chaps. 11-13. 3. Boolean Reasoning and Synthesis □ Boolean functions and their interpretation as sets. On/off/don't care sets. Cubes and literals. Truth tables. CNF and DNF representations. Two-level minimization: Implicants, prime implicants; covers, prime, irredundant and minimum covers; Quine-McCluskey exact minimization; heuristic minimization via the Espresso algorithm. Multi-level logic optimization: Boolean networks; Simplification, Elimination, Decomposition, Extraction and Substitution operations on Boolean networks; Boolean and algebraic division; Boolean kernels and kernel-finding algorithms. Binary Decision Diagrams: co-factors, Shannon expansions, and their properties; binary decision trees; variable ordering; reduction rules and how they lead to reduced, ordered binary decision diagrams (ROBDDs); canonicity of ROBDDs; if-then-else operator on ROBDDs; ROBDD implementation concepts; multi-rooted BDDs; BDD size bounds; dynamic variable re-ordering by sifting. Technology mapping; strategies for design and implementation of parallel software. □ Suggested reading: 4. Models of Computation □ Sequential circuits. Feedback in cyclic combinational circuits: well-formed and ill-formed models. Expressing feedback using fixed-point semantics and synchronous abstractions. Constructive semantics. The Knaster-Tarski (Tarski Fixed Point) theorem: partial ordering and posets; Scott order, "bottom", Hasse diagrams; total orders; least upper bounds (joins); monotonic (order-preserving) functions; fixed-point theorem and its proof. Bourdoncle's algorithm. Constructiveness over a range of different inputs: motivation for symbolic execution. Replacing Boolean and "bottom" symbols with functions of input values. Characteristic functions; operations on characteristic functions. Monotonicity of gate operations. Proof of convergence via the Knaster-Tarski theorem. Extension of symbolic execution to handle state machines. Constructive composition of state machines. Concepts of synchronous languages. □ Suggested reading: 5. Scheduling □ Connected and disconnected dataflow models. Balance equations: production-consumption matrix, unique least-positive solution, consistent and inconsistent models. Solving the balance equations: fractional iteration vector, least integer solution via the LCM/GCD, Euclid's algorithm. Symbolic execution, periodic sequential schedules. Sequential scheduling for SDFs: least positive integer null space of the production/consumption matrix. Parallel scheduling. Unbounded compute resources. Modelling separate hardware for each actor; modelling synchronous circuits as dataflow; retiming as dataflow firings. Parallel scheduling with bounded resources. Acyclic precedence graphs; list scheduling; Hu Level Scheduling technique. Max-Plus algebra and □ Suggested reading: ☆ Lee/Messerschmitt: Static Scheduling of Synchronous Data Flow Programs for Digital Signal Processing," IEEE Trans. on Computers, Vol. C-36, No. 1, pp. 24-35, January, 1987. ☆ Baccelli, F., G. Cohen, G. J. Olster and J. P. Quadrat (1992). Synchronization and Linearity, An Algebra for Discrete Event Systems. New York, Wiley. ☆ Sih/Lee: "Declustering: A New Multiprocessor Scheduling Technique," IEEE Trans. on Parallel and Distributed Systems, vol. 4, no. 6, pp. 625-637, June 1993. ☆ Bhattacharyya, Murthy, Lee: Software Synthesis from Dataflow Graphs, Kluwer Academic Press, 1996. ☆ Geilen, M. and S. Stuijk. Worst-case Performance Analysis of Synchronous Dataflow Scenarios. CODES+ISSS, Scottsdale, Arizona, USA, October 2010. 6. Formal Verification and Constraint Solving □ The SAT problem and its applications. CNF representation for SAT. Worst-case and typical complexity. 2-SAT, 3-SAT and Horn-SAT. Resolution and the Davis-Putnam (DP) algorithm. Davis-Logemann-Loveland (DLL) algorithm. Conflict analysis and backtracking. Branching and decision heuristics. Chaff SAT solver heuristic. Boolean Constraint Propagation (BCP). 2-literal watching. Sequential equivalence checking: consensus (universal quantification) and smoothing (existential quantification) operators. Limitations of low-level circuit equivalence techniques for sequential circuits. Equivalence concepts for finite state machines. Composing FSMs for equivalence checking. Representing states and transitions as Boolean functions/sets; FSM encoding via BDDs. Reachability analysis. Sequential equivalence checking via reachability analysis and SAT. Execution trace of a state machine. Propositional logic on execution traces. Properties over a single time-line: Linear Temporal Logic (LTL). LTL operators: G, F, X, U. Safety and liveness. Monitor state machines for temporal logic formulae. Properties over all possible executions: Computation Tree Logic (CTL*, CTL). Path qualifiers: A, E. Backward Reachability Analysis. Verifying safety properties using forward and backward reachability analysis. CTL property verification via fixpoint computation. Use of BDDs: symbolic model checking. Bounded model checking and SAT-based model checking. Timed automata; basics of verification of infinite-state systems and SMT solving. □ Suggested reading: 7. Logic-level Testing, Fault Diagnosis and Reliability □ Logic-level controllability and observability concepts. Defects and fault models; stuck-at faults. Formulating the stuck-at fault problem for combinational and sequential circuits. Fault activation, propagation and justification. Single path sensitization (SPS). Redundant faults. Multiple path sensitization. Automatic test pattern generation (ATPG). Formulating ATPG as a SAT □ Suggested reading: ☆ P. Goel, "An implicit enumeration algorithm to generate tests for combinational logic circuits", IEEE Trans. on Computers, C-30(3):215-222, March 1981. ☆ T. Larrabee, "Test Pattern Generation using Boolean Satisfiability", IEEE Transactions on CAD, vol. 11, no. 1, January 1992. ☆ Murray and Hayes, IEEE Computer, 1996. 8. Fundamentals of Algorithms, Data Structures and Graphs □ Sorting: bubble sort, quicksort, heapsort, radix sort, bucket sort, etc.. Data structures: stacks, queues, linked lists, hash tables, binary search trees, B-trees, heaps. Graph concepts: vertices, edges, valency, isomorphism, directedness; walks, paths, cycles; weighted graphs; vertex colouring; trees; spanning trees; bipartite graphs and matchings; directed graphs, networks and critical paths; cuts, flows; max-flow min-cut theorem. Graph algorithms: sorting and searching algorithms; minimum spanning tree algorithms; shortest and longest path algorithms; Dijkstra, Bellman-Ford, Prim, Kruskal, Kernighan-Lin algorithms. □ Suggested reading: ☆ Cormen, Leisorson, Rivest and Stein, Introduction to Algorithms, MIT Press. ☆ Norman Biggs, Discrete Mathematics, Oxford Science Publications. Chapters on graphs and graph algorithms. ☆ B.W. Kernighan and S. Lin, "An Efficient Heuristic Procedure for Partitioning Graphs", Bell System Technical Journal, vol. 49, 1970. ☆ Alberto Sangiovanni-Vincentelli, Automatic Layout of Integrated Circuits, in Design Systems for VLSI Circuits: Logic Synthesis and Silicon Compilation (eds: De Micheli, Sangiovanni-Vincentelli, Antognetti), NATO Science Series E, Kluwer, 1987. (amazon.com link) ☆ Sung Kyu Lim, Practical Problems in VLSI Physical Design Automation, Springer, 2010. ☆ J. Kleinhaus, G. Sigl, F. Johannes and K.J. Antreich, "GORDIAN: VLSI Placement by Quadratic Programming in Slicing Optimization", IEEE Transactions on CAD, March 1991. ☆ T. Lengauer, "Combinatorial Algorithms for Integrated Circuit Layout", Chapter 10, Compact, pp. 579--649, Wiley and Teubner, 1990. ☆ C.M. Fiduccia and R.M. Mattheyses, "A Linear Time Heuristic for Improving Network Partitions", in Proceedings of the 19th Design Automation Conference, 1982. ☆ R.L. Rivest and C.M. Fiduccia, "A 'Greedy' Channel Router", in Proceedings of the 19th Design Automation Conference, pp. 418-424, June 1982. ☆ C.Y. Lee, "An Algorithm for Path Connections and its Applications", IRE Transactions on Electronic Computers, EC-10, pp. 346--365, September 1961. Programming aspects of the material above (including the ability to interpret and debug programs) are an integral part of the syllabus. As part of this, facility is expected with basic concepts and structures common to programming and scripting languages (such as C, MATLAB, python, perl, bash), concepts of object oriented programming (including classes, references, inheritance, operator overloading, templates, etc., in languages such as C++ and Java), and the basics of threaded programming (pthreads).
{"url":"http://www.eecs.berkeley.edu/GradAffairs/EE/Prelims/Syllabi/cad.des.html","timestamp":"2014-04-16T07:17:52Z","content_type":null,"content_length":"20836","record_id":"<urn:uuid:6170a32e-15a2-4467-9197-a892481cccb5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Institute The Mathematical Institute is the centre for mathematical activity at the University of Oxford. It is one of ten departments under the Mathematical, Physical and Life Sciences Divisional Board. The history of mathematics at Oxford is described in Oxford Mathematics and Mathematicians, the text of a lecture by the late I. W. Busbridge and the Oxford Figures book by John Fauvel, Raymond Flood and Robin Wilson. References to current members of the Institute can be found in Oxford Mathematicians in the Public Eye. Whilst it is usual for mathematics departments in Britain to be split into departments of Pure and Applied Mathematics, the unitary Oxford structure, which encourages numerous strong interactions between the different groups, is regarded as a major factor in the continued high reputation enjoyed by Oxford Mathematics. The Mathematical Institute moved to a single large new building in the summer of 2013 (see the map for details). The members of the Institute include more than 200 graduate students, professors, readers, other members of staff and academic visitors. The head of the department is A list of the Statutory Professors is available. Members of the department who are Fellows of the Royal Society include John Ball, Bryan Birch, Philip Candelas, Ian Grant, Roger Heath-Brown, Nigel Hitchin, Dominic Joyce, Frances Kirwan, Ioan James, Terry Lyons, John Ockendon, Roger Penrose, Graeme Segal, Ulrike Tillmann, Nick Trefethen . Research is carried out in a wide variety of fields including algebraic, differential and general topology, group theory and other branches of algebra, number theory, mathematical logic, functional analysis, harmonic analysis, algebraic and differential geometry, differential equations, probability theory and its applications, combinatorial theory, global analysis, mathematical modelling, mathematical biology, ecology and epidemiology, continuum mechanics, elasticity, applied and fluid mechanics, magnetohydrodynamics and plasmas, quantum theory, atomic and molecular structure, quantum theory and field theory,string theory, relativity and mathematical physics, applied analysis and materials science. Students wishing to do research in the Mathematical Institute may apply either to become a probationary DPhil student or to take an MSc course. There are over 100 students within the department studying taught MSc courses, in Mathematical Modelling and Scientific Computing, Mathematics and the Foundations of Computer Science, Mathematical and Computation Finance and Mathematical Finance (part-time). There are also MScs in Computation and Applied Statistics which are the sole responsibility of the Department of Computer Science and the Department of Statistics respectively. The MSc in Mathematical Modelling and Scientific Computing is a one year course and is to train graduates with a strong mathematical background to develop and apply their skills to the solution of real problems. The MSc in Mathematics and the Foundations of Computer Science is designed to provide students with a solid grounding in advanced pure mathematics, mathematical logic, and the mathematical and logical foundations of computer science. The MSc in Mathematical and Computation Finance is a 10 month course to prepare students for a career in quantitative finance in the financial industry and/or for a research career in academia. The MSc in Mathematical Finance is a part-time course that enables students to progress their career in a more quantitative direction. The Institute's reputation continues to attract graduate students of the highest calibre from overseas as well as from the UK. It admits approximately 40 research students to read for the D.Phil. in Mathematics each year. Research groups organise graduate lectures in their own areas, and the arrangement of supervision of their research students is co-ordinated by the Institute's Director of Graduate Studies.
{"url":"http://www.maths.ox.ac.uk/about","timestamp":"2014-04-19T02:10:19Z","content_type":null,"content_length":"24643","record_id":"<urn:uuid:aa6f7396-8bdc-45e8-9657-3a9c9329e0e1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Gábor Szegő Born: 20 January 1895 in Kunhegyes, Hungary Died: 7 August 1985 in Palo Alto, California USA Gábor Szegő was born in Kunhegyes, a small town in Hungary about 120 km southeast of Budapest. His undergraduate studies were undertaken in Budapest. After attending university in Budapest, Szegő went to Berlin where he studied under, among others, Frobenius, Schwarz, Knopp and Schottky, and Göttingen where he studied with Hilbert, Edmund Landau and Haar. He returned to Hungary where he worked under Féjer and Kurschak. He acted as a coach to the young von Neumann. He enlisted in the Austro-Hungarian cavalry in the First World War and spent some time in the Air force where he met von Mises. In 1921 he moved to Berlin where he became a friend of Schur and worked with von Mises and Schmidt. He cooperated with Pólya in bringing out a joint Problem Book: Aufgaben und Lehrsätze aus der Analysis, vols I and II (Problems and Theorems in Analysis ) (1925) which has since gone through many editions and which has had an enormous impact on later generations of mathematicians. Pólya wrote of their collaboration (see [2]):- It was a wonderful time; we worked with enthusiasm and concentration. We had similar backgrounds. We were both influenced, like all young Hungarian mathematicians of that time, by Lipót Fejér. We were both readers of the same well-directed Hungarian Mathematical Journal for high school students that stressed problem solving. We were interested in the same kinds of questions, in the same topics; but one of us knew more about one topic, and the other more about some other topic. It was a fine collaboration. The book Aufgaben und Lehrsätze aus der Analysis, the result of our cooperation, is my best work and also the best work of Gábor Szegő. In 1926 he moved to Königsberg to succeed Knopp as professor. He stayed there until 1934 when the pressure on him as a Jew forced him to move to the USA where he found a post at Washington University in St Louis, Missouri. In 1938 he moved to Stanford where he remained for the rest of his working life. Szegő's most important work was in the area of extremal problems and Toeplitz matrices. This work led him to introduce the notion of the Szegő reproducing kernel. From these beginnings he moved to prove a number of limit theorems, now known as the Szegő limit theorem, the strong Szegő limit theorem and Szegő's orthogonal polynomials and on the unit circle. He produced over 130 research articles as well as several influential books. In addition to the books he wrote with Pólya, described above, Szegő wrote research monographs on his own work. Orthogonal Ploynomials appeared in 1939 and was published by the American Mathematical Society. It has proved highly successful, running to four editions and many reprints over the years. In a collaboration with Ulf Grenander, Szegő wrote Toeplitz forms and their applications which was published in 1958. Article by: J J O'Connor and E F Robertson April 1998 MacTutor History of Mathematics
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Printonly/Szego.html","timestamp":"2014-04-20T18:54:39Z","content_type":null,"content_length":"3940","record_id":"<urn:uuid:c2f8b4db-9732-4b68-a88d-2939fd3110a1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Either I'm wrong or the book is! February 28th 2010, 03:16 AM #1 Feb 2010 Either I'm wrong or the book is! Hi guys! Here's the problem (fairly simple but I'm still getting it wrong with the book's result...): A yearly output of a silver mine is found to be decreasing by 25% of it's previous year's output. If in a certain year it's output was 25,000,000£ what could be reckoned as it's total future the book gives me 3.3 x 10 to the power 7. Is that right? I'm not sure I totally understood the problem but from what I could catch: If the output was $y_n$ at year n, then the output will be $y_{n+1} = 0.75y_n$ at the following year since we have a decrease by 25%. The question is asking for the total future output. What we have is a geometric sequence with a first term of 25,000,000 and a ratio of 0.75. The total sum of of this sequence is given by $\frac{25,000,000}{1-0.75} = 100,000,000$ (since we are summing to infinity). The answer of the book seems to be wrong unless I misunderstood the problem. By the way, is this the value you are getting?? Hi guys! Here's the problem (fairly simple but I'm still getting it wrong with the book's result...): A yearly output of a silver mine is found to be decreasing by 25% of it's previous year's output. If in a certain year it's output was 25,000,000£ what could be reckoned as it's total future the book gives me 3.3 x 10 to the power 7. Is that right? Hello Mister77 Correct answer in my view: $\frac{3}{4}*25000000$ would be the next year's output Book answer according to me is wrong. Yes that's exactly what I applied the (sum to infinity) and I got the same result 100,000,000. I get the impression that the book does a lot of wrong wording, so I tried to look at it in a different way. It reffers to a "total future output".. I reckon that may possibly mean a ratio of 1.75 (in other words that would mean for the second year, total future output, would be the previous years earnings + 0.75 of this years earnings). If I apply that to the sum to infinity I end up with 25,000,000/1.75-1 = approx 33,333,333.33. That would be kind of similar to what the book gives me as a result (3.3 x 10 to the power 7). But I really don't know if this is just a coincidence based on guess work or how it should be! Someone help! I think the books contains an error: either in the question or in the answer. If the question is as it is, the answer is definitely 100,000,000. Trust me! The answer cannot be $3.3 \times 10^7$ unless the decrease is 75% per year and not 25% because then the answer would be $\frac{25,000,000}{1-0.25} = 3.333\times 10^7$ which is fairly close to the answer of the book. The method we are using is correct so be confident and use 100,000,000 as an answer. I think the books contains an error: either in the question or in the answer. If the question is as it is, the answer is definitely 100,000,000. Trust me! The answer cannot be $3.3 \times 10^7$ unless the decrease is 75% per year and not 25% because then the answer would be $\frac{25,000,000}{1-0.25} = 3.333\times 10^7$ which is fairly close to the answer of the book. The method we are using is correct so be confident and use 100,000,000 as an answer. Thanks mohammadfawaz! Yep, I copied it exactly from the book. So at this point, I guess it has to be wrong. Thanks! February 28th 2010, 03:37 AM #2 February 28th 2010, 03:40 AM #3 February 28th 2010, 04:12 AM #4 Feb 2010 February 28th 2010, 10:01 AM #5 Feb 2010 February 28th 2010, 10:09 AM #6 February 28th 2010, 01:42 PM #7 Feb 2010
{"url":"http://mathhelpforum.com/algebra/131158-either-i-m-wrong-book.html","timestamp":"2014-04-16T06:04:41Z","content_type":null,"content_length":"47923","record_id":"<urn:uuid:8b2f4745-444f-4179-bcba-d2b0d4a77d61>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Strand of the Web Heather Logan's... Strand of the Web I'm currently a postdoc in the Theoretical Physics department at Fermilab. In September 2002 I'll be moving to the University of Wisconsin, Madison for a postdoc position. At the moment I'm working on supersymmetry and Higgs phenomenology for current and future colliders. Phone: 630-840-5529 Fax: 630-840-5435 Mail: Fermilab, Theoretical Physics Department, MS 106, PO Box 500, Batavia, IL 60510-0500, USA Email: logan@fnal.gov Contact information after September 1, 2002: Email: logan@pheno.physics.wisc.edu Mail: University of Wisconsin, Department of Physics, 1150 University Ave., Madison, WI 53706, USA Fax: 1+608-262-8628 Publications and citations from Spires Curriculum Vitae [postscript] (08/2002) Research interests [postscript] [html] (11/2001) Slides from some of my talks: * Single heavy MSSM Higgs production at a Linear Collider from DPF 2002, College of William and Mary, May 24-28, 2002. * Discriminating the MSSM Higgs from the SM Higgs and heavy MSSM Higgs production from LoopFest, Brookhaven National Laboratory, May 9-10, 2002. * Single heavy MSSM Higgs boson production at a Linear Collider (ps.gz) from Pheno 2002, Madison, Wisconsin, April 22-24, 2002. * Distinguishing an MSSM Higgs from the SM from s-channel production at a gamma-gamma collider (ps) and Monte Carlo generator for SUSY (ps) from the Chicago Linear Collider Workshop, U. Chicago, Jan. 7-9, 2002. * Distinguishing an MSSM Higgs from a SM Higgs at a linear collider (ps.gz) (pdf) from the Workshop on the Future of Higgs Physics, Fermilab, May 3-5, 2001 and Pheno 2001, University of Wisconsin, Madison, May 7-9, 2001. * SUSY radiative corrections to gamma gamma -> t tbar (ps.gz) from the 2nd International Workshop on High Energy Photon Colliders, Fermilab, March 14-17, 2001. * Gamma gamma -> b bbar: background to Higgs production in pandora (ps.gz) from the 2nd International Workshop on High Energy Photon Colliders, Fermilab, March 14-17, 2001. * Linear Collider Physics (ps.gz) from the "Food for Thought" on Physics, Accelerators and Detectors for a Linear Collider, at Fermilab, March 13, 2001. * SUSY radiative corrections at large tan beta (postscript) from the 30 Years of Supersymmetry workshop. * Distinguishing an MSSM Higgs from a SM Higgs at a Linear Collider (postscript) from the Linear Collider Workshop 2000 at Fermilab, Oct. 24-28, 2000 (or scanned slides (thumbnails, html and * B_d,s -> l+ l- in the two-Higgs-doublet model (pdf) from the B Physics workshop for Tevatron Run II * SUSY-QCD corrections to h -> b b-bar in the decoupling limit (postscript) from Pheno 2000 Last updated August 12, 2002.
{"url":"http://theory.fnal.gov/people/logan/","timestamp":"2014-04-17T13:03:50Z","content_type":null,"content_length":"6553","record_id":"<urn:uuid:486108f8-b6c9-440b-bdd9-966f4a59f22a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Moderated Regression Suppose that we would like to learn whether the above mediational model holds across levels of a moderator variable that has 5 mutually exclusive categories. That is, we would like to learn whether the mediational model interacts with another variable. To test this, we can use a macro in SPSS that will provide us with conditional indirect effects, and will inform us of the levels of the moderator for which the mediational model holds or does not hold. Testing the above model using an SEM package such as AMOS or LISREL would provide us with an even richer analysis. In the current notes however, we will consider how to address various questions of moderation that arise in the above model, as a first and somewhat primitive attempt to tease apart various aspects of the mediational model, and look at them separately. For instance, suppose we find evidence that mediation holds across all levels of the moderator. Although this is an interesting finding, we may want to "dig in" a bit in a post-hoc sense to learn more about the above model's paths across varying levels of the moderator. Again, this is best handled by SEM, but it is very instructive to see how this problem can be at least partially addressed (or approached) using moderated regression where the moderator is a categorical variable (5 mutually exclusive categories). The remainder of these notes discuss how to conduct moderated regression. Consider the following question: Q: Does the IV predict the mediator consistently at each level of the moderating variable? The above question is one of an interaction between the IV and the hypothesized moderator. The mediator is now the dependent variable. So, in brief, our function statement (where "E" is error) is the Mediator (Y) = IV + Z + IV*Z + E. Recall that the mediator is now "Y," our dependent variable. We are hypothesizing that the mediator is a function of the IV, the moderator Z, and the product term IV*Z. If we do find evidence for an interaction, it will inform us that the path from IV to MEDIATOR, considered alone, is not consistent across the five levels of Z. If the paths are not consistent (i.e., there is evidence of an interaction), then it suggests that the IV predicts the MEDIATOR differentially across levels of Z. Note however too that it is possible that the IV does NOT predict the mediator at all at some of these levels, but recall that one condition for testing mediation in the first place is that the paths from IV to MED are bivariate statistically significant. But, just because the paths are statistically significant does not necessarily mean they are constant across levels of the moderator. That is, we may expect the path from IV to MED to be statistically significant at each level of the moderator, but the interaction is going to tell us whether the paths (i.e., slopes) change across levels of the moderator. Let's look at how we would test the interaction term for IV predicting MED, with Z as a hypothesized moderator. The following details how to test moderated regression with one continuous DV, one continuous IV, and one categorical moderator (we'll use a variable with 5 mutually exclusive categories for this example). How to Test Moderation in SPSS when the DV and IV are Continuous, but the Moderator is Categorical The question we asked above, that of whether the IV predicts the mediator differentially across levels of the moderator is actually a classic question of moderated regression. For this example then, we are simply treating the mediator as the DV, without considering all simultaneous equations implied in the mediational model. Since the moderator is categorical in nature, we need to produce dummy-coded variables that represent the levels of the moderator. However, recall that when dummy-coding, we will always produce J - 1 categories, where J is the number of levels of the moderator. Then, we'll cross each coded level of the moderator with the continuous independent variable X. Before we show an example, it may be of use to forecast what the output of our regression will eventually look like. Here are the terms we want to test, and the information provided by each term (see below). Recall that when dummy-coding, we need to choose a reference group, that's the group that is NOT represented in the coding. Here are the terms we can expect then from our regression output once we implement the analysis: X Z1 Z2 Z3 Z4 X*Z1 X*Z2 X*Z3 X*Z4 The above are the terms we want to test. What do they mean? Let's break them down to know what conclusions we'll be able to draw (and not draw) from the ensuing regression that we will run: - this is the continuous IV, so we'll be able to conclude whether X predicts Y while holding Z at 0, just as we would in an ordinary multiple regression. Note that Z contains 5 levels, but is represented fully by Z1 through Z4. The effect of X is actually a simple slope, because it evaluates the slope of Y on X when Z = 0. - this is the first of four Z variables, and represents a given level for the coded variable; the coefficient for Z1 represents a mean difference between the level coded as Z = 1 and the baseline category (which we'll identify as Z = 0). Again, choose your reference category wisely - it will be the category for which you would like to make comparisons against. - this is the second of the four Z variables, and again represents a given level for the coded variable; the coefficient for Z2 represents a mean difference between the level coded as Z = 2 and the baseline category. Notice again that just as for Z1, the obtained coefficient is providing us with a comparison, the comparison being between means. - this is the third of the four Z variables, and again represents a given level for the coded variable; the coefficient for Z3 represents a mean difference between the level coded as Z = 3 and the reference category. - this is the fourth of the four Z variables, and again represents a given level for the coded variable; the coefficient for Z4 represents a mean difference between the level coded as Z = 4 and the reference category. - the coefficient for this represents the difference in slopes between Y on X at Z = 1 and Y on X at Z = 0 (the reference category). We will examine this coefficient in some detail later once we obtain our results using fictitious data. We'll also plot the different slopes to visualize the effect. - the coefficient for this represents the difference in slopes between Y on X at Z = 2 and Y on X at Z = 0 (the reference category). Again, as was true for X*Z1, X*Z2 represents a difference in - the coefficient for this represents the difference in slopes between Y on X at Z = 3 and Y on X at Z = 0 (the reference category). - the coefficient for this represents the difference in slopes between Y on X at Z = 4 and Y on X at Z = 0 (the reference category). Example Using Fictitious Data Let's use fictitious data to illustrate the above moderation. Let's run things on 10 cases per group only. Here's how the data should look when entered into SPSS: Notice that the Z variable (i.e., the moderator) has 5-1 = 4 columns. Subjects 1 through 10 are in group 1 of the moderating variable. Subjects 11 through 20 are in group 2 of the moderating variable. Subjects 21 through 30 are in group 3 of the moderating variable. Subjects 31 through 40 are in group 4 of the moderating variable. Finally, subjects 41 through 50 are in group 5 of the moderating variable. What is group 5? It is the reference category. Your choice of a reference category should reflect a kind of "baseline" group against which you're interested in making mean pairwise comparisons. For instance, when we run the analysis, the regression coefficient for Z = 1 will reflect the mean comparison between those subjects in the reference group (Z = 0) to those subjects in the Z = 1 group. Likewise, we'll get another comparison between those subjects in the reference group compared to those subjects in the Z = 2 group. And so on. Producing the Product-Terms Next, we need to produce the relevant product terms. We ask SPSS to compute product terms of X with EACH coded Z variable as follows: We've now created all the relevant product terms of X with the moderator Z. All possible interaction terms are included. If you're wondering why we didn't cross Z1 with Z2, for instance, that would be like crossing part of a variable with itself, so it's not do-able. The 4 categories of the dummy variable represent the 5 groups, and all interaction terms produced represent all the possible slope memberships of Y on X. We're ready to now run the analysis. When you enter variables, it should look like this (be sure to enter all Z1 through Z4 to make sure the dummy-coded variable is properly represented): When we run the regression, we get the following for output: /CRITERIA=PIN(.05) POUT(.10) /METHOD=ENTER X Z1 Z2 Z3 Z4 X_Z1 X_Z2 X_Z3 X_Z4. We're first shown the variables entered/removed. We've entered all variables, so all looks good in the following: Next, we get a summary of the model: Notice that the model explains almost 41% of the variance in the dependent variable Y. We're not using that large of a sample size, so adjusted R-square is "punishing" us a bit, and bringing our explained variance down. The model is statistically significant at p < .01 (F on 9 and 40 degrees of freedom equals 3.042, p = .007). Next up, we're given the parameter estimates. The following table, and variations thereof, will consume our discussion for much of the remainder of these notes (we have an arrow pointing to ".812," and we'll explain this coefficient a bit later in the notes): Let's interpret each and every one of the parameter estimates to indicate exactly what they mean: X - as X increases by one unit, the expected change in Y is -.524 units when Z = 0 (as represented by the coded dummy variables). This interpretation is similar to that in ordinary least-squares regression. However, notice that we had to specify "when Z = 0." This is key. The effect of X is actually a simple slope because it represents Y on X when Z = 0. It looks like a "main effect" above, but it is actually a simple slope. If it were a traditional main effect, then the interpretation would be "as X increases by one unit, the expected change in Y is -.524 units across X." It is very important to note that when interaction terms are present, the meaning of main effect terms change, as in the current case where we have the simple slope for when Z = 0. Z1 - the mean difference between the group coded 0 and the group coded 1 is equal to -4.549. Although it represents a mean difference, we can actually interpret it "regression-style" to clarify its meaning. That is, as we go from Z = 0 to Z = 1, the expected change in the Y variable is a decrease of 4.549 units. In other words, the expected mean of the group coded 0 is 4.549 units more than the mean of the group coded 1. Let's write out the equations to see this a bit better. For an individual in group 0, and having X = 0, the predicted score is: Y = 8.190 -.524(X) -4.549(Z1) = 8.190 -.524(0) -4.549(0) = 8.190 Now, if that person were in group 1, but still having X = 0, we would have: Y = 8.190 -.524(X) - 4.549(Z1) = 8.190 -.524(0) - 4.549(1) = 8.190 -4.549 = 3.641 The numbers 8.190 and 3.641 are predicted values (they are also means) for group 0 and group 1 respectively, given X = 0. Notice that as we go from group 0 to group 1, the predicted value drops by a magnitude of 4.549 units. This is exactly what the coefficient for Z1 is telling us. We're tempted to verify this through a simpler analysis to make sure it's correct. Let's try to verify it. Let's calculate the mean of each group Z = 0 vs. Z = 1: USE ALL. COMPUTE filter_$=(Z1 = 1). VARIABLE LABEL filter_$ 'Z1 = 1 (FILTER)'. VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'. FORMAT filter_$ (f1.0). FILTER BY filter_$. EXECUTE . Notice that the mean for group 1 is equal to 5.2. What is the mean for group 0? use 41 thru 50. Why is the mean difference that of 5.2 - 5.1 = 0.1 and not -4.549 as the coefficient suggested? It is because the difference of 0.1 does not take into consideration the effect of partialling out X. Let's do a mini-analysis in which we do not partial out X from the difference and see if it matches up to 0.1. It should. We will only include the Z variable in our analysis (as to ignore the influence of X): /CRITERIA=PIN(.05) POUT(.10) /METHOD=ENTER Z1 Z2 Z3 Z4. The results from the above are the following: Notice the coefficient for Z1, it is equal to .100, the exact same difference between means as we found previously when we did not partial out X. Notice as well that the intercept is equal to 5.100, which is the mean of the group coded 0. The above interpretation is simple, because we don't have X to partial out. As we go from group 0 to group 1, the expected increase in Y is equal to .100 (i.e., 5.100 to 5.200). Let's return to the interpretation of coefficients, with X included. Recall the output: We've already interpreted the value for Z1 as the difference between the reference group and the group dummy coded 1, controlling for X. Specifically, we can say that as we move from the reference group to group 1, the expected change in Y is a decrease of 4.549 units, holding X constant. In other words, the difference in means between the reference group and the group coded 1 is equal to 4.549, with the group coded Z = 1 having the lower mean (because the sign of the coefficient is negative). We have to be sure to state this difference in the context of X being held constant (as we'll see later on, the coefficients change when X is not held constant). For Z2, the interpretation is analogous. As we go from Z0 to Z2, the expected decrease in Y is equal to -5.315. This again is the mean difference between the reference group and the group coded 2, partialling out X. Remember that you MUST interpret these mean differences under the condition that X has been partialled out, otherwise it is not an accurate interpretation of the given coefficient. As we go from Z = 0 to Z = 2, the expected change in Y is a decrease of 5.315. For Z3, the interpretation is that as we go from Z = 0 to Z = 3, the difference in means is equal to 9.271. That is, the expectation experiences a decrease in means from Z = 0 to Z = 3. Otherwise said, the mean difference between these two groups, controlling for X, is equal to 9.271. Finally, let's look at Z4 = -5.019. As we go from Z = 0 to Z = 4, the difference in means is equal to -5.019. Note carefully that though we're saying "from Z = 0 to Z = 4," we're not implying any kind of continuity between 0 and 4. We're simply using these as labels to describe the dichotomous situation represented by each interaction coefficient, or otherwise said, the comparison between Interpreting the Interaction Terms Let's have a closer look at the interaction term X*Z1 (i.e., the term with the red arrow pointing toward it). What does it represent? It represents a difference in slopes. The difference in slopes is that between Y on X at Z = 0 versus Y on X at Z = 1. Because the coefficient is positive (.812), it means that the slope of Y on X at Z = 1 is .812 greater than the slope of Y on X at Z = 0. If we look at the other interaction terms, we see a similar trend, that there is an increase in the slope of Y on X as we go from Y on X at Z = 0 to Y on X at Z = 2, Z = 3, but then the slope drops a bit at Z = 4. Notice too that the terms are statistically significant, suggesting that these differences in slopes are probably not best explained by sampling error (or losely, "chance"). It would appear there is an actual effect in the population from which these data were presumably sampled. Visualizing the Simple Slopes Let's look more closely at the coefficient for X*Z1. It is equal to .812. Again, literally, it means that the slope of Y on X increases (expectantly) by .812 units as we move from the reference group (Z = 0) to the group coded 1 (Z = 1). If we were to simply report our analyses in this way, it would be pretty hollow. We would like to produce a visual display so we can actually "see" the effect. In what follows we conduct two analyses: 1) the regression of Y on X for Z = 0, 2) the regression of Y on X for Z = 1. The difference in regression weights (raw b weights) between analyses should equal the coefficient value of .812 that we observed above, and would should get a powerful visual display of the slope difference. The following analyses should suggest the same result that we got from the above analysis that included interaction terms. Let's get started with the first analysis: Analysis 1: Y on X when Z = 0. To get this anlaysis, ask SPSS to select only those cases for which we have zeros for all dummy-coded variables, since this represents the reference group (cases 41 through 50). Recall the original coded data: The data we want for the first analysis are in cases 41 through 50 (because that's the group for which Z = 0, it's the reference group). We can ask SPSS to select these cases by the command (or just use window commands, it's much more convenient than always typing in syntax when not necessary - we show the syntax to show the procedural steps, since it's sometimes difficult or inconvenient to show steps through window snapshots): use 41 thru 50. Next, run the regression of Y on X. Because we've selected only those cases that represent the reference group, the ensuing regression will be Y on X for when Z = 0: Notice the value of -.524 for X. We will use this value in a moment. It represents the expected change in Y for a one-unit increase in X when Z = 0. Notice that the coefficient is not statistically significant. In other words, this "simple slope" is not statistically significant. Analysis 2: Y on X when Z = 1. Let's now run the regression of Y on X for when Z = 1. Again, select cases that represent group 1 for membership on the moderating variable Z. For our data, this is accomplished by: use 1 thru 10. When we then run the regression, we obtain: Notice the value of .289 for the coefficient for X. It represents the expected change in Y for a one-unit increase in X when Z = 1. Notice that the coefficient is not statistically significant. Again, just as was the case for Z = 0, this "simple slope" is not statistically significant. But nevertheless, what have we just calculated in these two separate regressions? When we subtract -.524 from .289, we obtain .289 - (-.524) = 0.813. What is this number of 0.813? It represents the difference between slopes of Y on X when Z = 0 vs. Y on X when Z = 1, and is identical (within slight rounding error) to the coefficient we found earlier in the full analysis, marked with the arrow in the following: Notice as well that the slope difference is statistically significant (p = .022) but neither simple slopes is statistically significant as we saw in the above output. This is just fine, and it simply means that neither slope is really doing much alone, but there still is a statistically significant difference between them. Notice as well what the coefficient is actually telling us. It's telling us that as we move from the reference group (Z = 0) to the group coded 1 (Z = 1), the slope INCREASES by .812. If this is true, then we should be able to visualize this in two separate plots to better understand the effect we've found. Let's produce a scatterplot for Y on X when Z = 0: use 41 thru 50. EXECUTE. Notice the direction of the relationship. It's negative. According to the regression coefficient of .812, when we plot for Y on X for Z = 1, we should see a .812 increase in the slope. Let's obtain the plot for Y on X when Z = 1 to visualize this effect: use 1 thru 10. Notice that the slope has changed (the red slopes are only approximate, they were inputed manually and not fitted exactly according to the regression equation). By how much? By .812 units. That is, as our coefficient told us, we're seeing an increase in the slope of Y on X for Z = 0 of -.524 to a slope of .289 when Z = 1. This difference is of .812 units. Hence, we've visualized what the regression coefficient X*Z1 was telling us in the original analysis. We could do this for all of the product terms to get a feel for the respective interaction terms. As a guideline, whenever you present simple slopes analyses, it's always a good idea to plot the simple slopes following the original analysis with relevant product terms. The visualization provides a powerful way to gain an appreciation of what's actually going on in your data, and undoubtedly your audience is going to want to see these slopes and plots to get a feel for your findings. How to Display the Effects of Z1, Z2, Z3, Z4 [this section is still under construction] Group n Mean of Y Mean of X 0 10 5.1000 5.9000 1 10 5.2000 5.4000 2 10 5.9000 5.2000 3 10 4.4000 6.3000 4 10 6.3000 6.6000 MM 5.3800 5.8800 Combined 50 5.3800 5.8800 Obtaining Predicted Values in SPSS Let's return to the table of coefficients: Let's look at the constant of 8.190 (not the arrowed number, but rather the constant at the top of the table). What does this represent? It is the expected Y for an observation when X = 0. Notice that across the 5 groups of the moderator, an increase in X is associated with an expected decrease in Y of -.524 units. Had we centered X, the interpretation would be that the expected value for Y for someone with an average level of X is 8.190. Will we talk about centering shortly. Now, let's write out the equation for the first observation in group 1. That individual was in group 1 and had X = 6. What is that observation's predicted value? Y = 8.190 -.524(6) - 4.549(1) + .812(6) = 8.190 - 3.144 - 4.549 + 4.872 = 5.369 Thus, the predicted value on Y for someone in group 1 who has a value of 6 on X, is 5.369. We can ask SPSS to produce a whole vector of predicted values for our entire data set (we show the first few predicted values in what follows). Notice that our predicted value of 5.369 that we calculated matches up with the predicted value for observation 1 (within rounding error): How well do our predicted values match up with the observed values? Let's correlate them: /VARIABLES=Y PRE_2 What does this value of .637 represent? It is the multiple R from our analysis, since multiple R is the bivariate correlation between observed and predicted values. Recall the model summary: Recall that the purpose of regression, no matter how simple or complex, is to test a model that does its best at reproducing the observed data. If the model reproduces the observed data perfectly, we would expect a multiple R of 1.0. Anything less, and the model isn't doing as good of a job. Writing Out the Model Equations In order to gain a better appreciation of what the estimated coefficients in the above regression mean exactly, let's write out a few of the equations. Here's the actual regression equation that we've estimated: Y = 8.190 + (-.524)(X) + (-4.549)(Z1) + (-5.315)(Z2) + (-9.271)(Z3) + (-5.019)(Z4) + (.812)(X*Z1) + (1.106)(X*Z2) + (1.394)(X*Z3) + (.998)(X*Z4) We can better appreciate what each coefficient is telling us if we consider some scenarios. For instance, suppose a given observation has X = 0, and is in group Z = 0. What would this mean for Z1? Because Z1 is not "activated," we would enter zero for it. Similarly for Z2, Z3 and Z4. And since X*Z1 through X*Z4 are indicators of an interaction with Z, these would all be zero as well. So, we would have: Y = 8.190 + (-.524)(0) + (-4.549)(0) + (-5.315)(0) + (-9.271)(0) + (-5.019)(0) + (.812)(0) + (1.106)(0) + (1.394)(0) + (.998)(0) = 8.190 So, for an observation with zero on X, and in the reference group (Z = 0), the predicted value on Y is equal to 8.190, which is the value of the intercept. Now, suppose an observation has an X score equal to 10, but is still in group Z = 0. The predicted value would be: Y = 8.190 + (-.524)(10) + (-4.549)(0) + (-5.315)(0) + (-9.271)(0) + (-5.019)(0) + (.812)(0) + (1.106)(0) + (1.394)(0) + (.998)(0) = 2.95 The predicted value for someone having a score of X = 10, and in the reference group (Z = 0) is equal to 2.95. Notice that the "10" on X brought the score down from the intercept value of 8.190. This is because the effect for X has a negative coefficient (controlling for Z). A unit increase in X equals an expected change in Y of -.524 units, and in the above we multiplied -.524 by 10 because X = 10. It's very reasonable then that our predicted Y dropped quite a bit. We can keep producing predicted values for various combinations in the equation. Let's do one more. Assume the observation has X = 10 again, but instead of being in the reference group, the observation is in group 3 (Z = 3). Then we would have the following, being sure to "activate" Z = 3 (or "indicate" the variable, which is why we call it an indicator): Y = 8.190 + (-.524)(10) + (-4.549)(0) + (-5.315)(0) + (-9.271)(1) + (-5.019)(0) + (.812)(0) + (1.106)(0) + (1.394)(0) + (.998)(0) = 8.190 -.524(10) -9.271(1) = 8.190 -5.24 - 9.271 = -6.321 Let's do one again, this time for an observation actually in our data. Let's take observation 11 in our data. It has a Y value of 8, an X value of 5, is in group Z = 2, and therefore has a product term X*Z2 = 5. It's equation would be the following: Y = 8.190 + (-.524)(5) + (-4.549)(0) + (-5.315)(1) + (-9.271)(0) + (-5.019)(0) + (.812)(0) + (1.106)(5) + (1.394)(0) + (.998)(0) = 8.190 -2.62 -5.315 + 5.53 = 5.785 Notice that our answer, within slight rounding error, is the predicted value produced by SPSS (count down to observation 11) What was the *actual* Y? It was 8, so we can say that our model didn't reproduce this data point as well as we would have wanted (but still did a decent job, depending on what the standard error of residuals turns out to be): Why Bother Calculating Predicted Values Manually? It may seem like a trivial exercise to calculate a few predicted values manually, but it isn't trivial at all. Rather, it's excellent practice at specifying the actual model equations, and ameliorating our understanding of what the coefficients mean in a relatively complex regression with interaction regressors. For instance, if you were asked what the model equation looked like for someone with X = 0 in the reference group (Z = 0), you'd have no trouble writing it out. Similarly, if you were asked what the model equation looked like for someone with X = 10 and in group 4 (Z = 4), you'd again have little difficulty in writing out the equation. Yes, software will compute predicted values for us, but it's always useful to try a few on your own to make sure you're clear on how these predicted values are being produced, especially when the regression model is relatively complex and involves interaction terms, etc. It also helps you familiarize yourself with the model Mean Centering the Continuous Predictor X To aid in the interpretation of parameter estimates, it's helpful to mean center the continuous predictor before running the analysis. The mean of variable X is equal to 5.88. To mean center, we command SPSS to subtract this value from each X data point for each individual: COMPUTE X_cent = X-5.88. The new values for X are then produced in SPSS: To verify that SPSS did it correctly, consider the first centered value of .12. It was computed by taking 6 (the X value for observation 1 in our data) minus 5.88, which equals .12. Consider what centering the predictor accomplishes. Recall that previously, the intercept in our regression represented the expected value of Y when X = 0. When we center, the intercept will represent the expected value of Y when X still equals 0, but zero now represents the mean of X, and not truly a zero score on X. To understand this better, consider the centering effect for an observation with a score of 5.88. When we center, we get 0 (5.88-5.88). So, when X_cent = 0, it's actually at the mean of X. Let's produce the relevant product terms using the newly centered X variable: COMPUTE X_cent_Z1 = X_cent * Z1. COMPUTE X_cent_Z2 = X_cent * Z2. COMPUTE X_cent_Z3 = X_cent * Z3. COMPUTE X_cent_Z4 = X_cent * Z4. The new interaction terms will be produced in SPSS when we take the new products: The product terms using the centered X variable have now been produced, and we can re-run our regression analysis to learn how to interpret the coefficients when X is centered: /CRITERIA=PIN(.05) POUT(.10) /METHOD=ENTER X_cent Z1 Z2 Z3 Z4 X_cent_Z1 X_cent_Z2 X_cent_Z3 X_cent_Z4 /SAVE PRED . We see that the model R of .637 is identical. This is no surprise, since the centering of X didn't really change anything in terms of predictive power of the model, it simply helps us interpret the coefficients a bit better. By centering, we conducted a linear transformation, and so that R remains constant was entirely expected. Similarly, the statistical significance of the model is identical to when X was not centered. Again, this is expected. Next up, let's take a look at the coefficients. This is where we notice changes: Here's where centering X will paid dividends in interpretation. Look at the value for the intercept. It is equal to 5.11. What is this value? It is the predicted value for the reference group for when X = 0, which because we've centered X, means that it is the predicted value for the reference group for when X equals its MEAN (and not actually zero as was true when X was not centered). What is the mean of X? It's original mean (before centering) was 5.88. So, the constant 5.11 means that the predicted value for Y for an individual in the reference group with an average amount of X (X = 0, which means X = 5.88 because it's been mean-centered) is 5.110. How do we know this is the predicted value for the reference group? Because if it's for the reference group, then that implies that all Z will equal 0, and we'll have: Y = 5.110 -.524(X) + .228(Z1) + 1.185(Z2) + (-1.076)(Z3) + .848(Z4) + .812(X*Z1) + 1.106(X*Z2) + 1.394(X*Z3) + .998(X*Z4) = 5.110 -.524(0) + .228(0) + 1.185(0) + (-1.076)(0) + .848(0) + .812(0) + 1.106(0) + 1.394(0) + .998(0) = 5.110 Notice that our predicted Y of 5.110 matches that of the intercept term. Because we centered X, when we input X = 0, we're actually evaluating at the mean of X, rather than when X actually equals 0 such as before when it was not centered. The term "-.524(0)" now says, "at the mean of X". Let's keep X = 0, but evaluate when Z = 1. The value for Z1 is .228, which means that as we go from group 0 to group 1, the expected increase in Y is .228 units. Realize that this represents a contrast between the reference group and the group coded 1. Let's evaluate the equation: Y = 5.110 -.524(X) + .228(Z1) + 1.185(Z2) + (-1.076)(Z3) + .848(Z4) + .812(X*Z1) + 1.106(X*Z2) + 1.394(X*Z3) + .998(X*Z4) = 5.110 -.524(0) + .228(1) + 1.185(0) + (-1.076)(0) + .848(0) + .812(0) + 1.106(0) + 1.394(0) + .998(0) = 5.110 + .228 = 5.338 Hence, we see that being in group 1 compared to group 0 (reference group) increases the predicted Y by .228, resulting in a predicted value of 5.338. To see the value in this kind of prediction, imagine you were to guess the Y score of a person standing behind a closed door. You know nothing about the person, except that you know they have average X, and are in group 1 of the moderator rather than group 0. You could reason as follows: "A good prediction for someone with average X in the reference group would be 5.110. But, if I know that person is in group 1 rather than group 0, I'm going to increase my estimate by .228 units, for a guess of 5.338. That's my best prediction for a person exhibiting these characteristics." An Analysis Without Partialling Out X Consider an analysis in which we do not partial out X, but rather simply analyze the effect of the Z-variable: What is the constant 5.100? It is the mean of the group coded 0 (the reference group). The Z1 coefficient of .100 indicates that as we go from Z = 0 to Z = 1, the increase in the mean is of the magnitude .100. So, the mean for group 1 is 5.200. The coefficient .800 indicates that as we go from group 0 to group 2, the expected mean increase is of the order .800, indicating that the mean for group 2 is 5.100 + .800 = 5.900. For Z = 3, the coefficient of -.700 indicates that the difference between means in group 0 to group 3 is of the magnitude .700, but this time, because the sign of the coefficient is negative, it represents a decrease in the mean rather than an increase. That is, the mean of group 3 is 5.100 - .700 = 4.400. Finally, for Z = 4, we have a coefficient of 1.200. This means that the expected difference in means between group 0 (reference group) and group 4 is of the order 1.200. Since the sign is positive, it indicates that the mean of group 4 is 1.200 units greater than the mean of 5.100. So, the mean of group 4 is 5.100 + 1.200 = 6.300. Notice that without partialling out the continuous predictor X, the means derived from the coefficient table match up to the actual means for the groups, as we can easily verify by obtaining descriptives on the group means across levels of Z. Notice the means in the following descriptives are 5.100, 5.200, 5.900, 4.400, 6.300 for Z groups 0, 1, 2, 3, 4 respectively. These group means match up perfectly with the means we figured out from the coefficient table:
{"url":"http://psychweb.psy.umt.edu/denis/datadecision/reg/moderation.html","timestamp":"2014-04-17T21:30:21Z","content_type":null,"content_length":"57647","record_id":"<urn:uuid:a415df16-805a-4fd7-a02f-bd175aaaa122>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the area of a square with sides of length 13 yards? United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire before American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of some units used there, so several differences exist between the two systems. The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were refined by the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of industry use metric units. The SI metric system, or International System of Units is preferred for many uses by NIST The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but some Imperial units are still used in the United Kingdom and Canada. The square yard is an imperial/US customary (non-metric) unit of area, formerly used in most of the English-speaking world but now generally replaced by the square metre outside of the U.S., Canada and the U.K. It is defined as the area of a square with sides of one yard (three feet, thirty-six inches, 0.9144 metres) in length. (Gaj in Hindi) There is no universally agreed symbol but the following are used: A two-dimensional equable shape (or perfect shape) is one whose area is numerically equal to its perimeter. For example, a right angled triangle with sides 5, 12 and 13 has area and perimeter both equal to 30 units. An area cannot be equal to a length except relative to a particular unit of measurement. For example, if shape has an area of 5 square yards and a perimeter of 5 yards, then it has an area of 45 square feet (4.2 m2) and a perimeter of 15 feet (since 3 feet = 1 yard and hence 9 square feet = 1 square yard). Moreover, contrary to what the name implies, changing the size while leaving the shape intact changes an "equable shape" into a non-equable shape. However its common use as GCSE coursework has led to its being an accepted concept. For any shape, there is a similar equable shape: if a shape S has perimeter p and area A, then scaling S by a factor of p/A leads to an equable shape. Alternatively, one may find equable shapes by setting up and solving an equation in which the area equals the perimeter. In the case of the square, for instance, this equation is Hospitality Recreation Related Websites:
{"url":"http://answerparty.com/question/answer/what-is-the-area-of-a-square-with-sides-of-length-13-yards","timestamp":"2014-04-18T04:07:46Z","content_type":null,"content_length":"28398","record_id":"<urn:uuid:91b4ec09-c63e-4fa6-912a-09505d6a4d14>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Learning, Outcomes Assessment and Accreditation Outcome Statements for Physics Majors Physics Standards Document – September 2001 1.0 Process Standards 1.1. Critical Thinking and Problem Solving 1.1.1 Skill inventory 1. Deduction 2. Inference 3. Reasoning 4. Formulating questions 5. Order of magnitude estimation 1.1.2 Performance expectation 1. Evaluate whether a calculated result or reported measurement is physically plausible by crude estimations of the quantity. 1.2 Data Analysis 1.2.1 Skills inventory 1. evaluate reliability of data 2. statistical analysis of data 3. analyze impact of results on society (economic, moral, and political). 1.2.2 Performance expectations 1. Able to propagate the uncertainty on a datum through a series of calculations in order to assess the uncertainty of a derived result. 2. Able to graphically represent data and indicate error bars appropriate to the uncertainty in the data. 3. Able to distinguish between and estimate random and statistical sources of error in a measurement. 4. Able to quantitatively calculate random error in a collection of replicate measurements by calculation of the standard deviation. 1.3 Accessing Information 1.3.1 Skills inventory 1. computer/library searching 2. using reference books 3. assessing current scientific data over the Internet 4. evaluating reliability of information 1.3.2 Performance expectations 1. Use databases and computer networks to access physical information. 2. Use scientific journal citations to locate articles in the physics library. 3. Understand the organization of scientific journals and journal articles and be able to efficiently extract information from individual articles. 2 Content Standards 2.1 Force and Motion in 3 Dimensions 1. The student is given a first, comprehensive introduction to the concept of units and comparison between Systeme International (SI) units, CGS units and Engineering units (British Imperial). The fundamental dimensions of length (L) mass (M) and time (T) are introduced and discussed. Topics are introduced from first principles. Laws are expressed in vector form. Emphasis is put on phenomenology and the experimental foundation of the theory. Everyday experiences of the laws of mechanics are emphasized. Students should understand where approximations, (such as the small angle approximation in oscillators) and idealizations (such as ignoring air resistance in projectile motion) significantly impact the outcome of the analysis. Description of motion. 1.1. The student can distinguish between displacement and distance traveled. The student can distinguish between average velocity (acceleration), and instantaneous velocity (acceleration and that the difference is in the limit as time tends to zero. 1.2. The student can distinguish between speed, (scalar), and velocity, (vector). 2. Differential- and integral relation among position, velocity and acceleration in 3 dimensions. 2.1. Given the position of a particle as a function of time, the student can derive expressions for velocity and acceleration as functions of time. 2.2. Given the acceleration as a function of time and initial conditions for the velocity and position, the student understands how to derive expressions for position as a function of time and velocity as a function of time. 3. Motion subject to a constant acceleration. 3.1. Given initial velocity and position of a projectile, the student understands how to calculate its position and velocity at all times, the shape of the trajectory, the highest point of the trajectory, the range of the projectile. 4. Transformation between frames moving with constant relative velocity. 4.1. Given knowledge of position and velocity of a particle relative to a given frame, the student can express the position and velocity relative to a second frame moving at constant velocity relative to the first frame. 5. Centripetal force and acceleration. 5.1. Students should understand that circular motion at constant speed is NOT uniform motion 5.2. The student understands how to calculate the acceleration of the moon from its orbital period (or its speed) and its distance from earth. 5.3. Expressed in terms of the initial position and velocity, the student can calculate, as a function of time, the centripetal force on a simple pendulum (in the small angle limit) performing SHM. 6. Newton’s three laws. 6.1. Students should understand that ma, or dp/dt, is not a force, rather it is a response to the forces acting on the net force acting on the object or system. 6.2. The student can design experiments to demonstrate each of Newton’s laws. 6.3. The student can explain the concepts of mass and weight and emphasize their difference. 6.4. The student can draw free body diagrams and use them to calculate the acceleration of a block on an incline subject to multiple constant forces (including friction). 6.5. Motion subject to a constant force (e.g. projectile motion). 7. The effect of static and kinetic friction forces on motion. 7.1. The student can perform a demonstration showing that the coefficient of static friction is greater than the coefficient of kinetic friction. 7.2. The student can design an experiment to determine the coefficient of friction (static or kinetic) for the contact between two surfaces made from known materials. 8. Work due to a variable force. 8.1. The student knows how to calculate the work done by a constant force and by a variable force (for example as a function of position) in 3D. 8.2. Students understand the distinction between conservative and non-conservative forces and work 9. Relationship between work and energy 9.1. The student understands the relationship between the work done by all the forces acting on a particle, and the kinetic energy of the particle. 9.2. Students understand the relationship between potential energy and the work done by a conservative force 10. Newton’s law and gravitation. 10.1. The student can demonstrate in detail why an inverse square law of gravity, together with Newton’s second law and Kepler’s third law accounts well for the motion of the Moon. 10.2. The student understands why Kepler’s second law is not an accident. 10.3. The student can derive Kepler’s third law for a circular orbit from Newton’s laws. 10.4. The student understands how to express the gravitational field from a distribution of point masses. 10.5. The student understands how to express the gravitational potential from a distribution of point masses. 11. Rotational Dynamics: Rotation of a rigid body about a fixed axis. 11.1. Students should understand the simple relationship between the variables used in linear dynamics and those used in rotational dynamics. They should understand that for a rotating body the rotational dynamical variables are independent of radial distance from the rotation axis, whereas the linear dynamical variables are radius dependent. 11.2. The student can distinguish between average angular velocity (acceleration), and instantaneous angular velocity (acceleration). 11.3. The student can distinguish between linear velocity and angular velocity. 11.4. The student can calculate the moment of inertia of a distribution of particles (discrete or continuous) of simple geometry. 11.5. A flywheel of known moment of inertia rotates about a fixed axis and subject to several impressions (forces and torques). The student understands how to calculate angular acceleration, velocity and position of the flywheel in terms of the given impressions. 11.6. The student can calculate as a function of time, the position of a physical pendulum oscillating at small amplitude. 11.7. A person stands on a ladder of known length, which leans on a smooth vertical wall. The student is able to express one of the following quantities in terms of (some of) the others: the mass of the ladder, the angle the ladder makes with horizontal, the mass of the person, the (minimum) coefficient of static friction present between the floor and the ladder, the friction force between the floor and the ladder, the height of the person above the floor. 12. Rotation of a rigid body about an axis of fixed direction through the center of mass. 12.1. The student understands how to calculate torques on a yo-yo, and the acceleration of its center of mass. 12.2. The student can experimentally determine the moment of inertia of simple rotationally symmetric systems (such as a basketball) by rolling them down an incline. 12.3. The Sun is eventually going to collapse and become a white dwarf star of radius similar to that of Earth. The student understands how to calculate the angular velocity of the white dwarf. 12.4. The student can account for why a planet moves faster close to the Sun than farther away. 13. Special relativity. The Lorentz transformation. 13.1. Students understand that simultaneity is relative to the observer. 13.2. The students can demonstrate that the Lorentz transformations account for time dilation and length contraction. 13.3. The student understands how to relate the velocity of a particle in different Lorentz frames. 13.4. The student understands how the Lorentz transformation accounts for the ability of muons, created at the top of the atmosphere, are able to penetrate to the surface of the Earth. 13.5. Students understand the relativistic relation between mass, energy and momentum, and its agreement with Newtonian physics at non-relativistic speeds. 2.2 Conservation Laws Conservation laws are central to understanding the behavior of most physical systems. The student understands how conserved observables determine the state of a system at a later time. Ability to apply the conservation laws is emphasized. 14. Conservation of energy. 14.1. The student is able to recognize the conditions under which the mechanical energy of a physical system is conserved. 14.2. The student can demonstrate how to use conservation of energy to set up relationships between physical quantities (such as mass, charge, speed, position, etc.). 14.3. Given the radius of a planetary orbit and the masses of the orbiting bodies, the student can calculate the energy of the planet. 14.4. Students can account for the work done by a thermodynamic engine in terms of the energy input and the waste energy expelled. 15. Conservation of linear momentum in 3 dimensions. 15.1. The student understands the different role played by the internal forces and the external forces acting on a system of particles or a rigid body, in determining the motion of the system. 15.2. The student understands the relationship satisfied between the internal forces and the external forces acting on a system of particles or a rigid body, in order that the total momentum of the system be conserved (internal forces cannot change the linear momentum of the system, the sum of the external forces is zero). 15.3. The student understands the relation between the motion of the center of mass of a physical system and the external forces action on it. 16. Conservation of angular momentum. 16.1. The student recognizes the relation between the net torque on a system and its angular momentum. 16.2. The students understand how Kepler’s second law is a consequence of angular momentum conservation. 17. Collisions in one and two dimensions. 17.1. Students understand the conditions under which the total momentum of a system of particles is conserved. 17.2. Students understand the kinematics of one- and two-dimensional collisions, and how to use momentum conservation to relate initial and final states of the system for the case where external forces are small. 17.3. The student can quantitatively and qualitatively distinguish elastic and inelastic collisions. 17.4. The student understands when both energy and momentum is conserved, and how to use this to relate initial and final states of a two dimensional collision of two particles. 17.5. The student can apply momentum conservation to solve 1D and 2D collision problems. 2.3 Potentials and the Potential Energy Function Many forces in Nature are approximately conservative. Systems subject to conservative forces are conveniently described in terms of a potential energy function. This function provides a powerful simplification in the understanding of energy considerations of the system. 18. Conservative and non-conservative forces. 18.1. Students understand the distinction between conservative and non-conservative forces. 18.2. Students understand the relationship between work done by a conservative force and the potential energy. 18.3. Students understand how to derive the force from a potential energy function, and vise versa. 19. Electrostatic potential. Gravitational potential. 19.1. Students know how to calculate the potential energy function for elastic forces, and for an inverse square force law. 20. The field and potential energy associated with an electric dipole. 20.1. Students understand how to calculate the potential due to an electric dipole. 2.4 Vibrations and Harmonic Oscillations 21. Simple Harmonic Motion (solution from substitution in the second order linear homogeneous differential equation resulting from a force analysis). 21.1. Students should understand the concept of a restoring force. 21.2. The student can relate acceleration, displacement (angular or linear) and frequency for an object performing SHM. 21.3. The student can analyze several examples of oscillators, such as a simple pendulum, physical pendulum, torsion pendulum, orbital motion, spring oscillator, and the conditions under which they execute Simple Harmonic Motion. 22. Damped, driven, oscillations (solution of the inhomogeneous D.E.). 22.1. The student understands that the amplitude of a driven damped oscillator varies with the driving frequency. 22.2. The student understands that for a lightly damped system, amplitude occurs close to the natural frequency of the oscillator. 23. Energy levels of the harmonic oscillator. 2.5 Waves 24. Mechanical waves, electromagnetic waves, matter waves. Students understand that electromagnetic wave are self propagating and mechanical waves require a propagation medium. Students understand the concept of coherence. 24.1. The student can distinguish between transmission of energy by a traveling wave, and by matter. 24.2. The student knows how to relate the velocity, angular frequency, and the wave number of a wave 24.3. The student can calculate the velocity of a wave on a string of a known substance, held in tension by a known weight. 24.4. The student can calculate the power transmission of a sinusoidal wave along a string. 24.5. The student recognizes the difference between longitudinal and transverse waves and knows that electromagnetic waves are transverse, while mechanical waves can be either. 25. Refraction, reflection, interference and diffraction of waves. 25.1. Students can calculate angles of reflection and refraction of a lightwave encounters the surface between two media. 25.2. Students understand that light may be elliptically polarized, partially polarized, or it may beunpolarized. 25.3. Students understand how a rainbow forms. 25.4. Students can explain how an optical fiber works. 25.5. Students understand image formation in spherical refractors or reflectors. 25.6. Students understand how to use a grating to analyze the wavelength composition of light 25.7. Students understand how to use X-ray diffraction to derive information about the structure of a crystal. 25.8. Students understand and can predict the spacing of interference fringes for the double slit experiment. 25.9. Students understand and can predict the spacing of interference fringes due to reflection from thin films, accounting for any phase changes at boundaries between media of differing refractive 26. Doppler effect. 26.1. Students understand that the frequency of a wave, perceived by an observer, changes if the observer is in relative motion with respect to the source (relativistic, non-relativistic). 2.6 Heat and Thermodynamics Students are introduced to concepts of heat and temperature. The universality of the temperature concept is stressed. Laws of heat transfer, heat conduction, heat capacity, latent heat, and calorimetry is discussed from heuristic principles. Kinetic gas theory is introduced from first principles. The first two laws of thermodynamics are discussed in detail. Emphasis is laid on the graphical representation of thermodynamical processes in PV-diagrams. Reversible engines and their efficiency are discussed. The Carnot engine is discussed in detail. The concept of entropy is introduced and discussed in the context of thermodynamic processes. 27. Heat transfer, heat conduction and absorption. 27.1. Students can calculate the heat that must be added to a given amount of a given substance in order to transform it from the solid phase to the gaseous phase. 27.2. Students can calculate the rate at which heat is lost through the walls of a house when the internal and external temperatures are constant. 27.3. Students understand the three ways of heat transfer and the physical conditions present for each of these to dominate. 28. First law of thermodynamics. 28.1. Students understand how the first law of thermodynamics extends the principle of energy conservation. 28.2. Students understand that heat added to a thermodynamic system will change the internal energy of the system, or result in work done by the system on the environment, or both. 28.3. Students understand how to calculate the work done by a thermodynamic system for several common processes including the isobaric, isochoric, adiabatic and isothermal process. 29. Entropy and second law of thermodynamics. 29.1. Students can account for the efficiency of a Carnot engine and a Stirling engine. Students understand how a refrigerator functions. 29.2. Students understand that if heat is added at constant temperature to a thermodynamic system, the entropy of the system will increase. 29.3. Students can distinguish the free expansion of a gas from the isothermal expansion, and account for the change in the entropy of the universe in each of these two processes. 30. Kinetic theory of gases. 30.1. Students understand how kinetic gas theory provides for an equivalent definition of temperature expressed in terms of molecular speeds. 30.2. Students are familiar with the ideal gas approximation, its range of validity, and how to use it to obtain accurate predictions for the specific heat capacities for mono- and diatomic gases. 2.7 Material Properties 31. Students know how common material properties such as coefficients of friction, mass density, elastic moduli, thermal expansion coefficients, specific heat, electric conductivity, etc., are 32. Elastic properties of matter (e.g. Young modulus, shear modulus). 32.1. Students understand that different materials respond to stresses in different ways. 33. Thermal properties of matter (e.g. thermal expansion, specific heat). 33.1. Given a block of a material, a ruler of a different material, and their coefficients of linear expansion, the student knows how to calculate the percent change in length of the block, measured by the ruler, as the temperature is changed by a given amount. 34. Electrical and magnetic properties of matter (e.g. conductance, dielectric properties). 34.1. Students have a qualitative as well as a quantitative understanding that the capacitance of a capacitor changes if the space between the plates is filled with a material. 34.2. Students know how to how to use Gauss law in a dielectric to calculate the electric field between the plates of a capacitor filled with a dielectric. 34.3. Students know different materials exhibit different magnetic properties, and how these properties affect the different behavior of materials in a magnetic field. 34.4. Students know how properties of magnetization is quantitatively determined by the coefficient of magnetic permeability. 34.5. Students understand how an atomic dipole moment can explain dielectric properties of matter, and how a magnetic dipole moment can explain magnetic properties of matter. 35. Properties of fluids (Pascal’s law, Archimedes’ law, Bernoulli’s law). 35.1. Students understand how to calculate the pressure at different levels in a fluid. 35.2. Students have a quantitative understanding how a submarine can lower or increase its depth in the water. 35.3. Students can explain the motion of a curve ball, and the lift on an airplane wing. 2.8 Electricity and Magnetism The student is given a first, comprehensive introduction to phenomena of Electricity and Magnetism.Topics are introduced from first principles. Laws are expressed in vector form. Emphasis is put on phenomenology and the experimental foundation of the theory. Maxwell’s equations are introduced in their integral form using vector calculus. The role played by the laws of electricity and magnetism in the design and function of devices used today is emphasized. 36. The electric field. Students should understand the vector nature of the electric field. They should also appreciate the facility of calculating the electric field using the (scalar) electric 36.1. Students recognize the similar nature of electric field vector, E and potential, and gravitational field vector, g, and potential. 36.2. The students can calculate the electric field/potential due to a symmetric line, surface, or volume charge distributions such as a charged disk, rod, circular line segment, or around a spherical or cylindrical charge distribution by integrating over the charge distribution. 36.3. Students understand how to use Gauss’ law to calculate the electric field for highly symmetric charge distributions, such charge on the plates of a parallel plate capacitor, or a spherical or cylindrical capacitor. 36.4. The students can calculate the electric field/potential around a system of point charges. 37. The magnetic field around a symmetric current distribution. 37.1. Students understand how to calculate the magnetic field along a symmetry axis, due to a symmetric current distribution. 37.2. Students know how to use the force due to a magnetic field on a current carrying wire to calculate the torque on a current loop. 37.3. Students understand how to use the Biot-Savart or Ampere’s law to calculate the magnetic field due to simple, symmetric current loops, including circular and linear segments. 38. The electric field and potential energy associated with an electric dipole. 38.1. Students understand how to calculate the field and potential due to an electric dipole. 39. The relation between electricity and magnetism. Faraday’s law. Students understand the concept of magnetic flux. 39.1. Students understand how to calculate the induced emf from a simple current loop moved through a uniform magnetic field. 39.2. Students can calculate the emf produced in a current loop by a variable magnetic field. 39.3. Students understand that changing the current through a coil induces an emf in the coil. 39.4. Students know that given two coils, the emf induced in either coil is proportional to the rate at which the current changes in the other. 40. DC- and AC circuits. RLC circuits (e.g. capacitive- and inductive reactance, resonance). Students should understand the analogy between driven damped mechanical oscillators and RLC circuits. 40.1. Students understand how to apply Kirchhoff’s laws to a multi-loop circuit to calculate the current or the emf in the circuit. 40.2. Students understand the different effects on the amplitude and phase of voltage and current that various circuit components (e.g. resistors, capacitors, inductors) have in AC or DC circuits with a sinusoidal emf. 40.3. Students understand how to calculate amplitude and phase of voltage and current through an RCL circuit. 40.4. Students are able to set up a differential equation fr a RC circuit to and use it to explain how charge accumulates on the capacitor. 40.5. Students can explain predict? the Voltage-time, current-time curves in an RL circuit after the emf has been connected. 41. Electromagnetic Waves. 41.1. Students can solve Maxwell’s equations for a plane wave to obtain expressions for the electric and magnetic fields in a propagating wave. 41.2. Students understand that electromagnetic waves carry energy, and know how to calculate the intensity and radiation pressure of the wave. 2.9 Quantum Mechanics Students are introduced to the dual description of matter as particles and waves. The Scrödinger equation and its solutions are discussed for the simplest systems. Barrier penetration and trapping is discussed, applications in technology are emphasized. Quantum numbers of atoms are defined, shell structure and the periodic system is discussed in this context. A quantum description of the solid state is provided, which accounts for the conduction properties of metals semi-conductors and insulators. Applications in electronics are emphasized. Nuclear structure and models: topics include classification of nuclides, nuclidic charts, binding energy and energy in nuclear reactions, models for radioactive decay, nuclear fission and fusion. Nuclear reactors. Radiation. A non-mathematical presentation of the classification scheme for elementary particles. Conservation laws in elementary particle physics. The early universe. 42. Photons and matter: Students should know the significance of blackbody radiation to the development of quantum mechanics 42.1. Students are able to use the blackbody spectrum to derive the surface temperature of the Sun. 42.2. Students understand how the Photoelectric effect or double slit experiments confirm that light can be interpreted as particles. 42.3. Students understand that matter can be thought of as a wave whose, and that the wave (wavefunction) is determined by a wave equation. The absolute square of the wavefunction has an interpretation as a probability density. 42.4. The student can construct and interpret the wave function for a free particle. 42.5. Students understand how to use the wavefunction and Schroedinger’s equation to account for the tunneling of a particle through a barrier. 42.6. The student understands that some physical variables such as position and momentum of a particle cannot be determined simultaneously without the introduction of an inherent uncertainty. 43. Idealized models (e.g. free particle, particle in a box, harmonic oscillator). 43.1. Students understand how to derive the energy levels and state wavefunctions for simple quantum systems such as the free particle, a particle in a box, and the harmonic oscillator. 44. Atomic structure. (e.g. Bohr model. Electron shell structure. Pauli principle). 44.1. The student is able to interpret the quantum numbers of the hydrogen atom. 44.2. Students can account for the radial and angular probability density distributions of the hydrogen atom in terms of the values of the principal and orbital quantum number, for states with principal quantum number equal to 1 or 2. 44.3. Students understand the implications of the Pauli exclusion principle on the filling of atomic shells. 44.4. Students can explain the shell structure of an atom in terms of the atomic quantum numbers. 45. Atomic orbital and spin angular momenta and electron spin. 45.1. Students recognize the relation between the magnetic dipole moment of an atom, and its angular momentum quantum number. 45.2. Students understand the experimental evidence for the spin of the electron. 45.3. Students can explain the behavior of an atom in an external magnetic field. 45.4. Students understand the basic atomic physics of how laser light is produced. 45.5. Students understand the arrangement of atoms in the periodic table. 46. Condensed matter. Semi-conductors and the transistor. 46.1. Students understand how to use the band-gap structure of solids to explain the difference between conductors, semiconductors, and insulators. 46.2. The student can account for the n-type and p-type semiconductors, and the function of the transistor. 47. Nuclear decay. Models of the nucleus. 47.1. Students can explain why the repulsive forces of the protons in the atomic nucleus do not blow itapart, and they can account for the stability of the nucleus. 47.2. The student can provide a statistical account of radioactive decay. 48. Nuclear energy: fission and fusion. 48.1. Students can account quantitatively for how energy is gained in fission and fusion processes. 49. The quark model 49.1. Students understand that particles are arranged in a classification scheme consisting of leptonsand hadrons according to their interaction. 49.2. Students understand that quarks bind, subject to certain constraints, to form baryons and mesons. 49.3. Students understand the implication of the conservation laws of baryon number, lepton number, and strangeness on particle interactions, particle decays, etc. 49.4. Students recognize interaction of the fundamental particles through three of the four forces of Nature. 49.5. Students are able to give a simple account of the thermal history of the universe, and its matter content.
{"url":"http://web.uri.edu/assessment/uri/physics/","timestamp":"2014-04-20T20:56:45Z","content_type":null,"content_length":"58328","record_id":"<urn:uuid:37ae9126-fca5-4a53-8f30-9bd6cb09e12e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Subsets of Non-Measurable sets March 14th 2010, 07:59 AM #1 Feb 2010 Subsets of Non-Measurable sets We know there exists a non-measurable subset in [0,1). Call it P. P is a non-measurable set constructed by identifying the interval [0,1) with the unit circle in R^2. If I can find it online, I'll post a link. Let A be a measurable subset of P. Show that A has (Lebesgue) measure 0. I must admit I'm stuck as to how to proceed Last edited by southprkfan1; March 14th 2010 at 08:57 AM. We know there exists a non-measurable subset in [0,1). Call it P. P is a non-measurable set constructed by identifying the interval [0,1) with the unit circle in R^2. If I can find it online, I'll post a link. Let A be a measurable subset of P. Show that A has (Lebesgue) measure 0. I must admit I'm stuck as to how to proceed The usual proof of the non-measurable set is to prove that it has outer measure 1 and inner measure 0. I like this proof, which contains the result you are looking for. March 14th 2010, 10:03 AM #2
{"url":"http://mathhelpforum.com/differential-geometry/133739-subsets-non-measurable-sets.html","timestamp":"2014-04-19T18:44:07Z","content_type":null,"content_length":"32424","record_id":"<urn:uuid:31b87092-4349-401e-95f8-5a5e00fdc0f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Core Standards Applying the Common Core Standards for High School Mathematics To meet many of the Common Core Standards for geometry you can use Mathematics: A Very Short Introduction and Symmetry: A Very Short Introduction. Geometry » Congruence Experiment with transformations in the plane • CCSS.Math.Content.HSG-CO.A.1 Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc. Use the Mathematics: VSI Chapter 6 (Geometry) which describes angles, line segments, arcs, circles, and more. Understand congruence in terms of rigid motions • CCSS.Math.Content.HSG-CO.B.6 Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent. Use the Symmetry: VSI Chapter 3 (Types of symmetry) which describes ridged motion. Prove geometric theorems • CCSS.Math.Content.HSG-CO.C.10 Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to 180°; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point. Use the Mathematics: VSI Chapter 6 (Geometry), Chapter 3 (Proofs), and Chapter 5 (Dimensions). These chapters include the Pythagorean Theorem and information on the geometry of triangle that will help you meet this standard. Geometry » Similarity, Right Triangles, & Trigonometry Define trigonometric ratios and solve problems involving right triangles • CCSS.Math.Content.HSG-SRT.C.8 Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems. Use the Mathematics: VSI Chapter 3 (Proofs) & Chapter 5 (Dimensions) which cover the Pythagorean Theorem. Geometry » Circles Understand and apply theorems about circles • CCSS.Math.Content.HSG-C.A.2 Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed, and circumscribed angles; inscribed angles on a diameter are right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle. Use the Mathematics: VSI Chapter 6 (Geometry) section on “Spherical geometry” and Chapter 3 (Proofs) sections on “Dividing a circle into regions” and “Regions of a circle”. • CCSS.Math.Content.HSG-C.A.3 Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle. Use the Mathematics: VSI Chapter 6 (Geometry) section on “Spherical geometry”. Find arc lengths and areas of sectors of circles • CCSS.Math.Content.HSG-C.B.5 Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector. Use the Mathematics: VSI Chapter 6 (Geometry) section on “Spherical geometry” and Chapter 3 (Proofs) sections on “Dividing a circle into regions” and “Regions of a circle”. Geometry » Expressing Geometric Properties with Equations Translate between the geometric description and the equation for a conic section Geometry » Modeling with Geometry Apply geometric concepts in modeling situations • CCSS.Math.Content.HSG-MG.A.1 Use geometric shapes, their measures, and their properties to describe objects (e.g., modeling a tree trunk or a human torso as a cylinder). Use the Mathematics: VSI Chapter 6 (Geometry) and Chapter 1 (Modeling) Statistics & Probability To meet Common Core Standards for Statistics & Probability use Statistics: A Very Short Introduction and Probability: A Very Short Introduction. Statistics & Probability » Interpreting Categorical & Quantitative Data Summarize, represent, and interpret data on a single count or measurement variable • CCSS.Math.Content.HSS-ID.A.2 Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different data sets. Use the Statistics: VSI Chapter 2 (Simple descriptions) and the Probability: VSI Chapter 4 (Chance experiments) • CCSS.Math.Content.HSS-ID.A.3 Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers). Use the Statistics: VSI Chapter 2 (Simple descriptions), Chapter 3 (Collecting good data), and Chapter 6 (Statistical models and methods). • CCSS.Math.Content.HSS-ID.A.4 Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such a procedure is not appropriate. Use calculators, spreadsheets, and tables to estimate areas under the normal curve. Use the Probability: VSI Chapter 4 (Chance experiments) and the Statistics: VSI Chapter 2 (Simple descriptions), Chapter 4 (Probability), and Chapter 5 (Estimation and inference). Statistics & Probability » Making Inferences & Justifying Conclusions Understand and evaluate random processes underlying statistical experiments • CCSS.Math.Content.HSS-IC.A.1 Understand statistics as a process for making inferences about population parameters based on a random sample from that population. Use the Probability: VSI Chapter 4 (Chance experiments) and the Statistics: VSI Chapter 2 (Simple descriptions), Chapter 4 (Probability), and Chapter 5 (Estimation and inference). • CCSS.Math.Content.HSS-IC.A.2 Decide if a specified model is consistent with results from a given data-generating process, e.g., using simulation. For example, a model says a spinning coin falls heads up with probability 0.5. Would a result of 5 tails in a row cause you to question the model? Use the Probability: VSI Chapter 1 (Fundamentals), Chapter 2 (The workings of probability), and Chapter 4 (Chance experiments) and the Statistics: VSI Chapter 4 (Probability) and Chapter 5 ( Estimation and inference). Make inferences and justify conclusions from sample surveys, experiments, and observational studies Statistics & Probability » Conditional Probability & the Rules of Probability Understand independence and conditional probability and use them to interpret data • CCSS.Math.Content.HSS-CP.A.3 Understand the conditional probability of A given B as P(A and B)/P(B), and interpret independence of A and B as saying that the conditional probability of A given B is the same as the probability of A, and the conditional probability of B given A is the same as the probability of B. Use the Probability: VSI Chapter 2 (The workings of probability) which covers the Addition law, Multiplication Law, Independence, and more. • CCSS.Math.Content.HSS-CP.A.5 Recognize and explain the concepts of conditional probability and independence in everyday language and everyday situations. For example, compare the chance of having lung cancer if you are a smoker with the chance of being a smoker if you have lung cancer. Use the Probability: VSI Probability VSI Chapter 2 (The workings of probability) to understand the concepts and Chapter 6 (Games people play) and Chapter 7 (Applications in science, medicine, and operations research) to provide everyday situational examples. Use the rules of probability to compute probabilities of compound events.
{"url":"http://www.veryshortintroductions.com/page/common-core-standards","timestamp":"2014-04-19T07:07:08Z","content_type":null,"content_length":"46207","record_id":"<urn:uuid:82e776c8-4317-40c1-abad-746fdde2a7c0>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
simplicial groupoid simplicial groupoid Higher category theory Basic concepts Basic theorems Universal constructions Extra properties and structure 1-categorical presentations It is probably best to distinguish between the following: • A simplicial groupoid is a simplicial object in Cat (that is, a functor from $\Delta^{op}$ to $Cat$), in which is all the categories involved are groupoids. • A simplicially enriched groupoid is a groupoid enriched over the category SimpSet of simplicial sets. (For a discussion of the terminology of simplicial groupoid and simplicial category, see the entry on the second of these.) Any simplicially enriched groupoid yields a simplicial groupoid in which the face and degeneracy operators are constant on objects and it is often in this latter form that they are met in homotopy (Of course, what is ‘best’ is not always done in the literature, so the reader is best advised to check the meaning being used when the term is met in an article or text.) • Simplicially enriched groupoids are related to simplicial sets via an adjunction found independently by Dwyer–Kan and Joyal–Tierney; see Dwyer-Kan loop groupoid. This adjunction gives an equivalence of homotopy categories so that simplicially enriched groupoids model all homotopy types. • A simplicially enriched groupoid having exactly one object is essentially the same as a simplicial group. Notationally however it is often important to distinguish a simplicial group form the corresponding single object simplicially enriched groupoid. • Many constructions on simplicial groups, such as that of its Moore complex carry over to simplicialy enriched groupoids without difficulty. • Philip Ehlers, Simplicial groupoids as models for homotopy type Mather thesis (1991) (pdf) Revised on August 9, 2011 03:01:08 by Urs Schreiber
{"url":"http://ncatlab.org/nlab/show/simplicial+groupoid","timestamp":"2014-04-19T02:29:16Z","content_type":null,"content_length":"28349","record_id":"<urn:uuid:95145db9-5bf2-4e39-ace8-b70f90471652>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Equal Chance A bag contains n discs, made up of red and blue colours. Two discs are removed from the bag. If the probability of selecting two discs of the same colour is 1/2, what can you say about the number of discs in the bag? Problem ID: 146 (Jan 2004) Difficulty: 3 Star
{"url":"http://mathschallenge.net/view/equal_chance","timestamp":"2014-04-17T12:30:50Z","content_type":null,"content_length":"4180","record_id":"<urn:uuid:1b7b1705-e1d4-404e-949e-86129191c8aa>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Use standard identities to express 1. The problem statement, all variables and given/known data Use standard identities to express sin(x+pi/3) in terms of sin x and cos x 2. Relevant equations 3. The attempt at a solution 0.5sinx + 0.8660cosx I'm just not sure if i need to simplify it even further and hopefully I'm on the right track. Thanks
{"url":"http://www.physicsforums.com/showthread.php?p=3850472","timestamp":"2014-04-20T23:34:30Z","content_type":null,"content_length":"27557","record_id":"<urn:uuid:28b8c7f9-591d-4c0e-a16f-625302873ce4>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
perlmeditation lin0 <p>Fellow Monks</p> <p>I wanted to share with you an article I found in the [http://www.jstatsoft.org/|Journal of Statistical Software]. The article is titled [http:// www.jstatsoft.org/v11/i01/paper|Using Perl for Statistics: Data Processing and Statistical Computing] and was written by [http://www.dur.ac.uk/dbs/faculty/staff/profile/?username=dbr0gb1|Giovanni Baiocchi]. In this 75- page article, the author provides a nice introduction to Perl and describes the use of Perl for Statistical Computing. There is also a section on [http://pdl.perl.org/|The Perl Data Language (PDL)]. Here is the abstract of the article:</p> <readmore> <blockquote>“In this paper we show how Perl, an expressive and extensible high-level programming language, with network and object-oriented programming support, can be used in processing data for statistics and statistical computing. The paper is organized in two parts. In Part I, we introduce the Perl programming language, with particular emphasis on the features that distinguish it from conventional languages. Then, using practical examples, we demonstrate how Perl’s distinguishing features make it particularly well suited to perform labor intensive and sophisticated tasks ranging from the preparation of data to the writing of statistical reports. In Part II we show how Perl can be extended to perform statistical computations using modules and by “embedding” specialized statistical applications. We provide example on how Perl can be used to do simple statistical analyses, perform complex statistical computations involving matrix algebra and numerical optimization, and make statistical computations more easily reproducible. We also investigate the numerical and statistical reliability of various Perl statistical modules. Important computing issues such as ease of use, speed of calculation, and efficient memory usage, are also considered.” </blockquote> </readmore> <p>The [http:// www.jstatsoft.org/|Journal of Statistical Software] is listed in the [http://www.doaj.org/doaj?func=home|Directory of Open Access Journals]. For information on Open Access, see [http:// en.wikipedia.org/wiki/Open_access|the Open Access entry in the Wikipedia]</p> <p>Cheers,</p> [lin0] <readmore> <hr> <p><b>Updates:</b></p> <p><i>Update 1:</i></p> <p>Fixed typo (s/Perl Programming Language/Perl Data Language/), as pointed out by [Jenda].</p> <hr>
{"url":"http://www.perlmonks.org/index.pl?displaytype=xml;node_id=585987","timestamp":"2014-04-17T17:18:27Z","content_type":null,"content_length":"3021","record_id":"<urn:uuid:00c197e8-b787-48dd-985a-d02c7c8fe05a>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
Resolution: standard / high Figure 3. Correlates of anhedonia.A: Correlation coefficients for all pairwise correlations between questionnaire measures. All are highly significant (p<.01), except for the correlation between anhedonic depression and anxious anxiety, denoted by a red dot. B: Hierarchical weighted regression analysis across all datasets, involving all 255 participants with a full set of BDI, BDA and MASQ scores. The plots shows the linear coefficients between anhedonic depression (AD) score and the reward sensitivity and learning rate parameters ρ and ϵ. Each bars shows one linear coefficient; the red error bars indicate ± 1 standard error; and the green error bars indicate the 99.4% confidence interval (corresponding to a Bonferroni corrected level p=.05/8). AD is significantly and negatively correlated with the reward sensitivity ρ, but not significantly correlated with the learning rate ϵ. C: Scatter plot of anhedonic depression against reward sensitivity. Size of dots scale with weight (inference precision). D: Scatter plot of reward sensitivity vs. learning rate. E: Significance of correlations across parameter estimates from 70 surrogate datasets. There is a consistent and stably significant correlation between AD and reward sensitivity ρ, but not between AD and learning rate ϵ. Huys et al. Biology of Mood & Anxiety Disorders 2013 3:12 doi:10.1186/2045-5380-3-12 Download authors' original image
{"url":"http://www.biolmoodanxietydisord.com/content/3/1/12/figure/F3?highres=y","timestamp":"2014-04-20T10:47:39Z","content_type":null,"content_length":"13068","record_id":"<urn:uuid:6b236890-5b64-401a-92ad-df1ffc30f352>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
My conversations with gullible machines... I wrote up a small program to find prime-numbers between x and y ; implemented the Sieve of Eratosthenes. The SOE is basically where you take all numbers (say 1-100) into an array. Cross out 1. Loop from 2-Sqrt(N) i.e. 2-10. Now cross out every multiple of 2 in the array. Next cross out every multiple of 3.. and so on. Finally all the numbers that aren't crossed out are prime. However I did a very simplistic implementation (test-driven) to boot. All the tests did pass.. however running it to get all primes between 10K and 100K took 34 secs to execute. And I was on the new n improved ruby 1.9 with a better VM. I switched to the stable ruby 1.8.6 and it took ~60 secs. So now I turned back to my primary Weapon C#. The same steps implemented in C# took 722 msecs. So now there were 2 possibilities 1. Ruby is sloooow (the usual gripe) 2. Something in my algorithm is messed up. So I posted my source onto StackOverflow - (my current fave time sink :) to get some eyeballs to look at the problem. So here's my naive implementation class PrimeGenerator def self.get_primes_between( x, y) sieve_array = Array.new(y) {|index| (index == 0 ? 0 : index+1) position_when_we_can_stop_checking = Math.sqrt(y).to_i sieve_array[(factor).. (y-1)].each{|number| sieve_array[number-1] = 0 if isMultipleOf(number, factor) ( (element != 0) && ( (x..y).include? element) ) def self.isMultipleOf(x, y) return (x % y) == 0 # Benchmarking code require 'benchmark' Benchmark.bm(30) do |r| r.report("Gishu") { a = PrimeGenerator.get_primes_between( 1_000, 10_000) } A few people got back with what might be the problem. One of the first and most voted issue was the slicing the array to get a new subset array. That's got to be expensive inside a loop for an array of this size. Obviously I was using for loops in C# (since C# doesn't have Ruby's each) for index in factor..y-1 do sieve_array[index] = 0 if isMultipleOf(index+1, factor) That shaved off 8 secs - 24%. But still 26 secs. Another optimization was to reduce iterations - instead of checking each number with IsMultiple(number, _loopVar), set the 'step' for the loop to equal _loopVar (so from 2 you go 4,6,8,10...) And finally Mike Woodhouse dispatched a homer. He posted a tiny snippet that did the same thing in under 1 sec. No that's not a typo. def sieve_to(n) s = (0..n).to_a s.each do |p| next unless p break if p * p > n (p*p).step(n, p) { |m| s[m] = nil } # Usage: # puts sieve_to(11).select{|x| x>=3}.inspect user system total real Gishu 27.219000 0.000000 27.219000 ( 27.375000) Mike 0.141000 0.000000 0.141000 ( 0.140625) I then had to know how he did it.. 1. Smart#1: He skips if array[_loopvar] is already 0. whereas if array[6] is crossed out due to 3, I did a scanning run to set all multiples of 6 to 0 which were already 0.(now down to 6 secs) 2. Smart#2: Use the Numeric#step trick to jump to the next multiple vs traversing entire list with an IsMultiple check (now down to 0.25 secs) 3. Smart#3: He begins the inner loop with Square of p. e.g if _loopvar is 7, I used to check from 8->end. Mike checks from 49->end. This one was a little tricky to figure out.. the reason is 7x2, 7x3, 7x4, 7x5, 7x6 have already been crossed out when _loopVar was 2,3,4,5,6 respectively. (now down to 0.21 secs) 4. Smart#4: Smart use of Ruby's nil for crossed out vs my choice of 0. That followed by Array#compact reduces the number of elements for the final select operation to get primes between x and y (now down to 0.18 secs.) So after all that it's Ruby 0.15 secs and C# (with same optimizations) 0.028 secs, much more easy to take in. An interesting thought to end this post is that even with the naive first implementation C# performed reasonably well... under a sec. I'd have moved on to the next task. Ruby on the other hand slowed to a crawl forcing me to take another look for optimizations. In a indirect sort of way, C#'s superior performance could be hiding bad/substandard solutions and letting programmers get away with
{"url":"http://madcoderspeak.blogspot.com/2009/04/i-wrote-up-small-program-to-find-prime.html","timestamp":"2014-04-19T19:48:15Z","content_type":null,"content_length":"108968","record_id":"<urn:uuid:ffd8ce8b-60b8-4604-8431-84f7ac68feb8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
normal distribution question September 8th 2009, 04:11 AM #1 Sep 2009 normal distribution question I'm trying to find an answer to the following problem. Any help will be greatly appreciated. I have a normally-distributed variable X with a mean of 1000 and a some variance (let's assume 1). I choose a range of 970-990 (for the example's sake). For any X above 990, I receive 20 dollars. For any X below 970, I receive 0 dollars. For X between 970 and 990, I receive: X - 970 My question is: what is the function for calculating the mean profit of this example? I'm trying to find an answer to the following problem. Any help will be greatly appreciated. I have a normally-distributed variable X with a mean of 1000 and a some variance (let's assume 1). I choose a range of 970-990 (for the example's sake). For any X above 990, I receive 20 dollars. For any X below 970, I receive 0 dollars. For X between 970 and 990, I receive: X - 970 My question is: what is the function for calculating the mean profit of this example? Expected profit = 20 Pr(X > 990) - 970 Pr(970 < X < 990). Got it. You use the standard normal distribution, and then transform it to the given distribution. What would you do if the SD isn't 1? There's one thing I still have a problem with. The Pr(970 < x < 990) is the chance of X falling anywhere within that range. But the amount of money received is depending on where withing that range X falls: If X = 985 I receive 15 dollars, while if X = 972 I receive 2 dollars. I don't see how the function you gave relates to that fact. Thanks again, There's one thing I still have a problem with. The Pr(970 < x < 990) is the chance of X falling anywhere within that range. But the amount of money received is depending on where withing that range X falls: If X = 985 I receive 15 dollars, while if X = 972 I receive 2 dollars. I don't see how the function you gave relates to that fact. Thanks again, Sorry, I misread your question. I think the term should be $\int_{x=970}^{990}(x - 970) \Pr(970 \leq X \leq x) \, dx$ where the appropriate expression for $\Pr(970 \leq X \leq x)$ needs to be substituted. The integral will need to be calculated numerically. September 8th 2009, 04:36 AM #2 September 8th 2009, 04:57 AM #3 Sep 2009 September 8th 2009, 05:03 AM #4 September 8th 2009, 05:25 AM #5 Sep 2009 September 8th 2009, 05:29 AM #6 September 9th 2009, 12:11 AM #7 Sep 2009 September 10th 2009, 03:22 AM #8
{"url":"http://mathhelpforum.com/advanced-statistics/101118-normal-distribution-question.html","timestamp":"2014-04-18T06:59:40Z","content_type":null,"content_length":"57738","record_id":"<urn:uuid:52b21af5-1bd4-4a4b-b0e0-8f81c9e664d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Orangevale Math Tutor ...Success is imminent - take the plunge. What are the chances of your passing your stats and probability course? If you're in doubt, my 36 years of helping hundreds of students be successful will definitely improve your odds. 15 Subjects: including algebra 1, algebra 2, calculus, geometry Math does not have to be frustrating! With a little time and help, I believe that every student can succeed in mathematics. I am a certified math teacher with a bachelors in mathematics from Point Loma Nazarene University and a Masters of Arts in teaching from the University of North Carolina at Chapel Hill. 6 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...With particular focus on the subject of science: I am a highly experienced scientist with extensive training in both comprehension and instruction of science. Computers have always been of interest to me since childhood. My first home computer as a child was a Commodore 64. 22 Subjects: including algebra 1, algebra 2, biology, calculus ...I have a passion for educating the next generation (Home schooled students especially, but I can work with any curriculum). I have several years experience tutoring and am currently teaching many math and science classes for academies in the area. I have a B.A. in Math and a B.S. in Physical Science from Humboldt State University. I am now a current vendor for many of the local 11 Subjects: including trigonometry, physical science, linear algebra, electrical engineering ...I had experience tutoring all levels of math and science (physics, chemistry, biology, anatomy) at the after-school program of a high school in Sacramento for over a year and private tutoring for 2 years. To me, nothing is more rewarding than hearing students enjoy the subject more and show grea... 12 Subjects: including calculus, algebra 1, algebra 2, chemistry
{"url":"http://www.purplemath.com/Orangevale_Math_tutors.php","timestamp":"2014-04-19T23:24:05Z","content_type":null,"content_length":"23901","record_id":"<urn:uuid:43b445f7-51ed-454e-bf4d-ec5781e5265c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Distance of a Point to a Segment The distance of a point to a segment is the length of the shortest line joining the point to a point on the segment. This Demonstration depicts this shortest line as dotted. The measurements are normalized so that the point starts at a distance of 1 to the segment. To compute the distance from point p to segment ab (all as complex numbers) compute first z=(p-a)/(b-a). If 0 ≤ Re[z] ≤ 1 then the distance is equal to Abs[Im[z](b-a)]. If not, it is equal to the smallest of the distances from p to a or to b.
{"url":"http://demonstrations.wolfram.com/DistanceOfAPointToASegment/","timestamp":"2014-04-20T18:26:29Z","content_type":null,"content_length":"42304","record_id":"<urn:uuid:5ad299c4-c18e-4cfc-af3a-f92a6595c00c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodacre SAT Math Tutor Find a Woodacre SAT Math Tutor ...I look forward to talking with parents about their concerns for their students. We CAN make a difference!The study of psychology at all levels, from high school through undergraduate, Master's and Doctoral levels, including statistics, experimental design and methodology. As a doctoral student ... 20 Subjects: including SAT math, calculus, geometry, statistics ...Thank you for your consideration,KarenI taught my 3 sons how to play chess (now they beat me!). I assisted at the chess club at their elementary school. I can teach all of the basic moves and some strategies. I may not be the best chess player in the world, but I am a patient teacher. 26 Subjects: including SAT math, Spanish, geometry, elementary math ...I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have significant experience tutoring students in lower division college mathematics courses such as calculus, multivariable calculus, linear algebra and d... 25 Subjects: including SAT math, calculus, physics, statistics ...Besides studying World History on the side at university and then doing my own study and reading ever since, I have experienced first hand the "living history" that one can pick up by traveling, exploring and talking with and living with local people. In the course of my work, I have traveled t... 18 Subjects: including SAT math, calculus, statistics, geometry ...If anyone is interested in studying to prepare for the upcoming school year, I am your man. I enjoy working with students, showing them how math can be seen in the real world and giving them examples they can relate to. My goal as a tutor is to help students acquire the skills that will help them excel at math. 10 Subjects: including SAT math, geometry, statistics, algebra 1
{"url":"http://www.purplemath.com/woodacre_sat_math_tutors.php","timestamp":"2014-04-20T13:44:11Z","content_type":null,"content_length":"23866","record_id":"<urn:uuid:74556441-8fc2-4674-999a-7f19b345430d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
The familiar trigonometric functions can be geometrically derived from a circle. But what if, instead of the circle, we used a regular polygon? In this animation, we see what the “polygonal sine” looks like for the square and the hexagon. The polygon is such that the inscribed circle has radius 1. We’ll keep using the angle from the x-axis as the function’s input, instead of the distance along the shape’s boundary. (These are only the same value in the case of a unit circle!) This is why the square does not trace a straight diagonal line, as you might expect, but a segment of the tangent function. In other words, the speed of the dot around the polygon is not constant anymore, but the angle the dot makes changes at a constant rate. Since these polygons are not perfectly symmetrical like the circle, the function will depend on the orientation of the polygon. More on this subject and derivations of the functions can be found in this other post Now you can also listen to what these waves sound like. This technique is general for any polar curve. Here’s a heart’s sine function, for instance
{"url":"http://lovecaitlin.tumblr.com/","timestamp":"2014-04-19T22:36:26Z","content_type":null,"content_length":"84270","record_id":"<urn:uuid:43005118-6552-4c8a-96b5-8766bcd5b2e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
middle term / factor question.. I'm kinda banging my head for this one. Is this one has middle term? 3x^2+6x-3 ? Help me plz. tx Well, divide by 3 to start: x^2 + 2x - 1 Now explain your question! If it was x^2 + 2x + 1, then that would factor to (x + 1)^2 As Wilmer said, you factor out the "3" in each term: $3(x^2+ 2x- 1)$. Other than that, it cannot be factored with integer coefficients. The fact is that [b]most[b] polynomials cannot be factored with integer coefficients. (I keep saying "with integer coefficients" because we could say that $x^2+ 2x- 1= x^2+ 2x+ 1- 2= (x+1)^2- 2$. And now we can use the simple rule $a^2- b^2= (a- b)(a+ b)$ so that $x^2+ 2x- 1= (x+1-\sqrt{2})(x+1+\sqrt{2})$. Of course, to do that, I had to, effectively, solve the equation $x^2+ 2x- x= 0$ by 'completing the square' and, since a main application of factoring is to solve equations, we usually mean "with integer coeffcients". But, once again, most polynomial equations can't be factored that way.) thanks, Hei HallsofIvy How do you write maths characters like that? 3x^2+6x-3 first you have to convert it into quadratic form of equation i.e a x^2 + b x + c =0 then find D = b^2 - 4ac if D>0 then factor of equation is possible otherwise use X1 = [ -b + sqrt(D) ] / 2a X2 = [ -b - sqrt(D) ] / 2a ..... try it also helps you Quadratic Equation Last edited by Neeraj; July 26th 2012 at 04:17 AM. See the LaTeX Help subforum. Remember to use [tex] ... [/tex] tags. See also the LaTeX Wikibook or search the web. Thanks emakarov I came across this formula, I heard this one is used when there is no easy way to factor. What is the story with this formula and how does it works? $\frac{-b \pm\sqrt{b^2 - 4ac}}{2a} $ I'm a programmer and I am liking La Text Edit: My bad, I fixed the sign before 4ac. Last edited by ameerulislam; July 26th 2012 at 07:28 AM. I have this equation to factor $x^2 - 4x +5$ $\frac{-b \pm\sqrt{b^2 - 4ac}}{2a}$ within square root it comes -16.. So does that means I can't factor that equation? yeah only complex number will fit there. I think I can ignore it as I'm just doing simple calculus (only real numbers)..
{"url":"http://mathhelpforum.com/algebra/201344-middle-term-factor-question.html","timestamp":"2014-04-21T00:21:19Z","content_type":null,"content_length":"78316","record_id":"<urn:uuid:b3dd750c-fbf5-4ef6-ba2f-0051e0bf3e83>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Test Your Wits on These Mathematical Puzzles (Mar, 1932) The Four Color Theorem was not proven until 1976 and required the use of a computer. I’m pretty sure the thing about arabic numerals representing the number of angles in their characters is total B.S. Test Your Wits on These Mathematical Puzzles by WILLIAM J. HARRIS There’s nothing like a puzzle to test one’s mental alertness, and those presented here by Mr. Harris are certainly corkers. He also gives you some simple tricks which, though they only take a few minutes to learn, will convince your friends that you are a mathematical wizard of the first water. (P. S.— Answers are in the back of the book!) Lewis Carroll: Mathematician (Apr, 1956) Lewis Carroll: Mathematician Many people who have read “Alice’s Adventures in Wonderland” and “Through the Looking-Glass” are aware that the author was a mathematician. Exactly what was his work in mathematics? by Warren Weaver Lewis Carroll—wasn’t he a first-class mathematician too?” This is a typical remark when the name of the author of Alice in Wonderland comes up. That Carroll’s real name was Charles Lutwidge Dodgson and that his main lifelong interest was mathematics is fairly common knowledge. In fact, among his literary admirers there has long been current a completely false but unstoppable story that Queen Victoria read Alice, liked it, asked for another book by the same author and was sent Dodgson’s very special and dry little book on algebraic determinants. Lewis Carroll was so great a literary genius that we are naturally curious to know the caliber of his work in mathematics. There is a common tendency to consider mathematics so strange, subtle, rigorous, difficult and deep a subject that if a person is a mathematician he is of course a “great mathematician”—there being, so to speak, no small giants. This is very complimentary, but unfortunately not necessarily true. Carroll produced a considerable volume of writing on many mathematical subjects, from which we may judge the quality of his contributions. What sort of a mathematician, in fact, was he? FOR THE MATHEMATICIAN who’s ahead of his time (Feb, 1956) FOR THE MATHEMATICIAN who’s ahead of his time IBM is looking for a special kind of mathematician, and will pay especially well for his abilities. This man is a pioneer, an educator—with a major or graduate degree in Mathematics, Physics, or Engineering with Applied Mathematics equivalent.
{"url":"http://blog.modernmechanix.com/tag/math/","timestamp":"2014-04-19T22:29:22Z","content_type":null,"content_length":"70368","record_id":"<urn:uuid:82722eac-15bc-4d80-84e6-f8ca6aa0a8a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the slope of the line between (3, –4) and (–2, 1)? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5005098de4b0fb99113475d3","timestamp":"2014-04-19T07:09:55Z","content_type":null,"content_length":"77177","record_id":"<urn:uuid:209dfdfe-1bc1-4bf2-ab75-8727049ea9ee>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
polygonal number polygonal number A polygonal number, or figurate number, is any value of the function for integers $n\geq 0$ and $d\geq 3$. A “generalized polygonal number” is any value of $P_{d}(n)$ for some integer $d\geq 3$ and any $n\in\mathbb{Z}$. For fixed $d$, $P_{d}(n)$ is called a $d$-gonal or $d$-polygonal number. For $d=3,4,5,\ldots$, we speak of a triangular number, a square number or a square, a pentagonal number, and so on. An equivalent definition of $P_{d}$, by induction on $n$, is: $P_{d}(n)=P_{d}(n-1)+(d-2)(n-1)+1\qquad\text{ for all }n\geq 1$ $P_{d}(n-1)=P_{d}(n)+(d-2)(1-n)-1\qquad\text{ for all }n<0\;.$ From these equations, we can deduce that all generalized polygonal numbers are nonnegative integers. The first two formulas show that $P_{d}(n)$points can be arranged in a set of $n$ nested $d$-gons, as in this diagram of $P_{3}(5)=15$ and $P_{5}(5)=35$. Polygonal numbers were studied somewhat by the ancients, as far back as the Pythagoreans, but nowadays their interest is mostly historical, in connection with this famous result: Theorem: For any $d\geq 3$, any integer $n\geq 0$ is the sum of some $d$$d$-gonal numbers. In other words, any nonnegative integer is a sum of three triangular numbers, four squares, five pentagonal numbers, and so on. Fermat made this remarkable statement in a letter to Mersenne. Regrettably, he never revealed the argument or proof that he had in mind. More than a century passed before Lagrange proved the easiest case: Lagrange’s four-square theorem. The case $d=3$ was demonstrated by Gauss around 1797, and the general case by Cauchy in 1813. Mathematics Subject Classification no label found no label found
{"url":"http://planetmath.org/PolygonalNumber","timestamp":"2014-04-16T13:16:56Z","content_type":null,"content_length":"69034","record_id":"<urn:uuid:3fdd9184-0adc-49c4-8eb8-4d2febb492ff>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Warriors Scouting Report: Tracking Potentially Assisted Spot-Up Jumpers using Synergy I posted the following on Golden State of Mind yesterday. The idea of "potential assists" is interesting to me. One of the weaknesses of box score stats is that assists are recorded, but not passes which would have been assists, if the ball had gone in the basket. Obviously, because assists are only awarded when a basket is scored, they are inherently dependent on the shooter. I could be the best point guard in the world, but if I'm surrounded by bad shooters, I might look worse, because I would get fewer assists. Similarly, I might be a really good passer, but don't get that many opportunities to set up my teammates, because I'm playing with Steve Nash or Chris Paul. My teammates score very efficiently when I get them the ball, but my assist rate still appears low. In theory, if "potential assists" were recorded, we would have some more information about passing, which is obviously an important part of the game. There are no websites that I know of that record potential assists. So I decided to start tracking them on my own using Synergy. For this first scouting report, I'm focusing on spot-up jumpers, because these are the most straightforward plays to assess potential assists, and well, because spot-up plays are very important to winning. Basically, almost every jumpshot that is categorized as a spot-up play comes as a result of a potential assist. To be sure, not all of them do. Sometimes, a player catches the ball and then drives to the hoop or dribbles a couple of times and then takes a shot. I didn't track these. I also didn't track plays that resulted in fouls or turnovers, when I realized after some preliminary observations that these usually don't occur on pure jumpshots (i.e. those that would be potentially assisted). For reasons of sample size, I also limited my tracking to the six Warriors with the greatest number of shots, including Curry, Ellis, Wright, Lee, Williams, and Radmanovic. For each play, I recorded the game, quarter, a shot id, shooter and passer (by jersey number), type of shot (2 or 3), and whether the shot went in (obviously). Here's a few rows of data, so you get the idea: GameID ShotID Q Shooter Make Type Passer LALGSW040611 1 1 1 0 3 8 LALGSW040611 2 1 8 0 3 1 LALGSW040611 3 1 30 0 3 20 LALGSW040611 4 1 30 0 3 8 LALGSW040611 5 1 8 0 2 10 LALGSW040611 6 1 1 0 3 10 LALGSW040611 7 1 8 0 3 30 LALGSW040611 8 2 8 1 3 23 LALGSW040611 9 2 1 0 3 8 LALGSW040611 10 3 1 0 3 10 LALGSW040611 11 3 8 1 3 1 LALGSW040611 12 3 8 0 3 30 LALGSW040611 13 3 1 1 3 30 In total, I was able to track 1,093 shots. One-thousand and ninety-three shots. Yes, that's quite a number. Now we get to the stats. First up, I want to show you the shooting efficiency of each of the six tracked players on potentially assisted spot-up plays: SHOOTER SHOTS POINTS PPS Curry 169 234 1.38 Williams 163 215 1.32 Ellis 152 175 1.15 Wright 393 450 1.15 Radmanovic 98 112 1.14 Lee 118 98 0.83 Here, the number of shots includes 2-pt and 3-pt shots that were potentially assisted. PPS is simply the number of points scored divided by the number of shots. (I decided to use PPS as opposed to TS% or eFG%, because it allows easier comparison to Synergy stats, and makes some of the upcoming derived stats easier to calculate.) Not surprisingly, Curry and Williams were the most efficient (by quite a lot). Lee, because he takes so many two point shots and virtually no 3-pt shots, was the least efficient spot-up shooter. So far, so good. Now, let's look at something that you haven't seen before, which we'll call "passing efficiency": PASSER SHOTS POINTS PPS Wright 127 174 1.37 Ellis 271 352 1.30 Lee 185 234 1.26 Udoh 26 30 1.15 Biedrins 35 39 1.11 Law 38 42 1.11 Williams 71 76 1.07 Lin 28 29 1.04 Curry 226 222 0.98 Radmanovic 43 39 0.91 I know this is where the shin is going to hit the fat. Dorell Wright was the most efficient passer, as the PPS off his passes was 1.37. Ellis was just behind at 1.30, followed by Lee (1.26). The big, perhaps shocking, surprise here is that Curry comes in close to the bottom with a PPS of only 0.98. So, what's the deal? Is Curry really such a bad passer? Remember, folks. I'm a so-called Curry fanboy, so it's not like this is the outcome I was looking for or expecting. Time for a little more parsing. Here's a table that gives the efficiency of each passer-shooter tandem: PASSER Curry RANK Ellis RANK Lee RANK Radmanovic RANK Williams RANK Wright RANK RATIO Wright 1.69 1 1.00 8 0.57 6 1.10 7 1.89 2 NA 1.12 Ellis 1.42 4 NA 0.77 4 1.23 5 1.46 3 1.26 3 1.09 Lee 1.69 2 1.15 6 NA 2.14 1 1.06 6 1.12 5 1.05 Udoh 1.50 3 2.25 1 0.00 7 1.20 6 2.00 1 0.86 8 1.03 Biedrins 0.60 6 1.80 2 0.00 8 1.50 2 1.00 7 1.40 2 0.97 Law 0.00 8 1.50 5 0.80 3 0.90 8 0.86 8 1.56 1 0.97 Williams 0.73 5 1.08 7 0.75 5 1.25 4 NA 1.22 4 0.93 Curry NA 0.87 9 1.21 2 0.78 9 1.15 5 0.92 7 0.89 Lin NA 1.50 4 2.00 1 1.29 3 0.67 9 1.00 6 0.87 Radmanovic 0.38 7 1.80 3 0.00 9 NA 1.31 4 0.60 9 0.74 Just to be clear how to read the data, for example, Curry's PPS when potentially assisted by Wright was 1.69 (the upper left corner of the table). Curry was most efficient when receiving passes from Wright, thus, ranking Wright first (the column RANK to the right of each shooter). Here you can also see that Wright was more efficient when potentially assisted by Ellis (1.26 PPS) compared to Lee (1.12) or Curry (0.92). The careful reader may have noticed the "RATIO" column at the right side of the table. To explain this new term, I need to show you the pass (or shot) distribution: PASSER Curry Ellis Lee Radmanovic Williams Wright SHOTS XPPS PPS RATIO Wright 42 34 14 10 27 0 127 1.23 1.37 1.12 Ellis 59 0 26 22 41 123 271 1.19 1.30 1.09 Lee 35 34 0 7 16 93 185 1.21 1.26 1.05 Udoh 2 4 5 5 3 7 26 1.12 1.15 1.03 Biedrins 5 5 5 2 3 15 35 1.15 1.11 0.97 Law 1 6 5 10 7 9 38 1.14 1.11 0.97 Williams 11 13 8 12 0 27 71 1.15 1.07 0.93 Curry 0 38 48 18 34 88 226 1.11 0.98 0.89 Lin 0 2 1 7 9 9 28 1.19 1.04 0.87 Radmanovic 8 5 4 0 16 10 43 1.23 0.91 0.74 Curry was potentially assisted by Wright 42 times, Ellis by Wright 34 times, and so on. If we take the PPS from the first table, we can then calculate an "expected PPS", which I call XPPS. Here's an example calculation using Wright: 1.23 = 42*1.38 (Curry) + 34*1.15 (Ellis) + 14*0.83 (Lee) + 10*1.14 (Radmanovic) + 27*1.32 (Williams) The actual PPS on shots potentially assisted by Wright was 1.37. The ratio of (actual) PPS to XPPS therefore represents a measure of "normalized" passing efficiency, that takes into account the particular distribution of passes by each player. In theory, this is a more fair way to compare players. For example, Wright obviously benefits from being able to pass to Curry (who is very efficient), but Curry can't pass it to himself (oh, one wishes). You can see this by looking at the XPPS. Even given that Curry has a low XPPS, his RATIO shows that, for whatever reason, the actual PPS off of Curry passes was lower than might be expected. Here's where we get a little more sophisticated. We want to know if any of these data are statistically significant. In other words, are these numbers real or could they result from chance alone? First, I ran a linear regression with Points as the dependent variable and Shooter as a single predictor (in other words, ignoring the passer): lm(formula = Points ~ as.factor(Shooter), data = subset(GSW2011)) Min 1Q Median 3Q Max -1.385 -1.145 -1.143 1.681 1.857 Estimate Std. Error t value Pr(>|t|) (Curry) 1.3846 0.1091 12.692 < 2e-16 *** as.factor(Shooter)Ellis -0.2333 0.1585 -1.472 0.14142 as.factor(Shooter)Lee -0.5541 0.1701 -3.257 0.00116 ** as.factor(Shooter)Radmanovic -0.2418 0.1801 -1.343 0.17968 as.factor(Shooter)Williams -0.0656 0.1557 -0.421 0.67360 as.factor(Shooter)Wright -0.2396 0.1305 -1.836 0.06656 . Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.418 on 1087 degrees of freedom Multiple R-squared: 0.01146, Adjusted R-squared: 0.006915 F-statistic: 2.521 on 5 and 1087 DF, p-value: 0.028 Here, Curry is treated as the baseline (1.3846 PPS) with the other players being the comparisons (or contrasts). It turns out that only Lee was found to be statistically different from Curry (p <0.01). The negative coefficient means that Lee's PPS was found to be 0.83 = 1.3846 - 0.5541 (Curry - Lee). Note that 0.83 is the PPS value given in the first table. Wright's PPS was just above the level usually considered statistically significant, although some people would call it a "trend". As Warriors fans, we probably can all agree that Curry is a better spot-up shooter than Ellis, but technically speaking, these data don't "prove" that is the case. Maybe 1.5 or 2 years of data would provide a big enough sample size to make stronger claims. Let's look at passing now. I'm doing the same regression, except this time using Passer as the predictor: lm(formula = Points ~ as.factor(Passer), data = subset(GSW2011)) Min 1Q Median 3Q Max -1.3721 -1.2649 -0.9956 1.7232 2.0930 Estimate Std. Error t value Pr(>|t|) (Curry) 0.99558 0.09447 10.538 <2e-16 *** as.factor(Passer)Ellis 0.28118 0.12794 2.198 0.0282 * as.factor(Passer)Lee 0.26929 0.14081 1.912 0.0561 . as.factor(Passer)Other 0.11752 0.14468 0.812 0.4168 as.factor(Passer)Radmanovic -0.08860 0.23629 -0.375 0.7078 as.factor(Passer)Williams 0.07485 0.19322 0.387 0.6986 as.factor(Passer)Wright 0.37652 0.15672 2.402 0.0165 * Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.42 on 1086 degrees of freedom Multiple R-squared: 0.009539, Adjusted R-squared: 0.004066 F-statistic: 1.743 on 6 and 1086 DF, p-value: 0.1078 Again, Curry is the baseline comparison. Notice the PPS is much lower this time (0.996). I should note here that I've lumped all other players not listed into a group called "Other". Ellis and Wright are the two players here who were found to be statistically different from Curry, each with a positive coefficient (which should be added to Curry's). Lee comes very close to significance, so we'll call that a trend. Ok, one more regression. Now, we're going to include both Shooter and Passer as factors in the analysis: lm(formula = Points ~ as.factor(Passer) + as.factor(Shooter), data = subset(GSW2011)) Min 1Q Median 3Q Max -1.487 -1.221 -1.027 1.688 2.168 Estimate Std. Error t value Pr(>|t|) (Curry) 1.203563 0.156291 7.701 3.04e-14 *** as.factor(Passer)Ellis 0.210800 0.132681 1.589 0.1124 as.factor(Passer)Lee 0.187243 0.145679 1.285 0.1990 as.factor(Passer)Other 0.069063 0.146181 0.472 0.6367 as.factor(Passer)Radmanovic -0.194948 0.239388 -0.814 0.4156 as.factor(Passer)Williams 0.046394 0.195319 0.238 0.8123 as.factor(Passer)Wright 0.278321 0.164856 1.688 0.0916 . as.factor(Shooter)Ellis -0.169417 0.163773 -1.034 0.3012 as.factor(Shooter)Lee -0.459595 0.177621 -2.588 0.0098 ** as.factor(Shooter)Radmanovic -0.175921 0.185026 -0.951 0.3419 as.factor(Shooter)Williams 0.004799 0.159190 0.030 0.9760 as.factor(Shooter)Wright -0.176175 0.136564 -1.290 0.1973 Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 1.418 on 1081 degrees of freedom Multiple R-squared: 0.01773, Adjusted R-squared: 0.007735 F-statistic: 1.774 on 11 and 1081 DF, p-value: 0.05399 The only statistically significant result here is that Lee has a negative effect as a shooter. Wright's positive effect as a passer appears to be a trend. With a p-value of 0.11, Ellis comes relatively close to being labeled a trend. In both these cases, a larger sample size might yield statistically significant results. When we take into account the recipient of potential assists, the (marginal) effects of the passer do not appear to be statistically significant. Now, that doesn't mean all passers are equivalent, or even that all Warriors players are equivalent. After watching a lot of video over the past week, I got the distinct impression that Ellis creates a lot of open shots with his ability to drive, draw defenders, and kick out. Conversely, while I believe Curry does have that ability, he doesn't use it quite as often. Also, it seems to me that Curry's passing can be improved, in terms of tightening up the accuracy. In particular, I think this is an issue between Curry and Wright. Wright appears to be most efficient when receiving the ball directly in front of his body, as opposed to his left or right. I think the connection from Curry to Wright would be more efficient, if Curry could more consistently center his passes on Dorell. This is also an issue when Dorell shoots after the ball is swung to him on the perimeter, and he has to turn his body to catch it and then turn back towards the basket to shoot. Dorell is not nearly as quick in getting his shot off as Reggie or Curry, so I believe his efficiency is more sensitive to the pass quality. Originally, I was going to post some scouting videos showing these things, but I don't want to be accused of cherry picking. Also, I didn't quantify any of this, so I could be wrong. Maybe in the future I'll undertake a more careful and quantitative analysis in this regard. I would suggest, however, that next season (whenever it plays out), you look for yourself. Of course, there's an even bigger issue here to discuss. The fact of the matter is that our PG is our best shooter, yet most of us want the ball in his hands more so that he can setup his teammates. Should these data make us reconsider whether the Curry/Ellis backcourt should actually be Ellis/Curry? Think about it. These data suggest that the main reasons for Curry's low passing efficiency is simply due to the fact that he's passing to teammates who are worse shooters than he is. Especially Lee. I think that's something that has to be looked at. To be sure, Lee has to be part of the offense. It would be nice if he could develop a three-point shot, but if it hasn't happened by now, that's probably just wishful thinking on my part. Of course, there are other types of plays that are potentially assisted. Maybe Curry is much more efficient at setting up those plays? That's certainly something I'd like to investigate further. Part of me thinks it might really be worth experimenting with Monta at PG full time, with Mark Jackson as a mentor. Of course, another solution is simply to surround Curry with better shooters (Reggie, maybe Klay). At any rate, I'm glad I undertook this little project. It brought up some interesting issues and raises some questions for further research. 9 thoughts on “Warriors Scouting Report: Tracking Potentially Assisted Spot-Up Jumpers using Synergy” 1. Excellent work! 1. Thanks, Daniel. Oddly, your comment went to my spam box. Glad I checked. 2. It shows a couple of text lines in the R-code box. Looks really weird 1. Thanks, Jerry. I fixed it. 3. Hey Evan - Neat results, but I don't think a linear regression is completely appropriate here. A couple of ideas - you could record the four different outcomes (missed two, made two, missed three, made three) as your dependent variable instead of points scored and run a multinomial regression using scorer and passer as predictors. R has it in the mlogit library. It would tell you if the probability of those four things depends on who the passer and/or shooter were. Or you could run a GLM where your observations are makes out of total shots for each shooter-passer pair; you would probably also include if the shot was a 2 or 3 as a predictor. In R it would look something like glm(makes/shots ~ shooter+passer+shotvalue, weight=shots, family=binomial). That would tell you the likelihood of a shot being made depending on the shooter, passer, and if it's a 2 or 3 (via the logit link). 1. Thanks, Alex. I actually started out doing logistic regression (using glm in R), but the sample size for 2's is really small, except for Lee's. I tried various combinations of predictors, including Type, but in the end, I think what I have here was the most interesting (meaning there were other less interesting results). Also, what we - or at least, I - care about is the overall efficiency, not simply the 2-pt or 3-pt efficiency. The Lee/Curry 2-pt efficiency may be solid, but that appears to hurt the overall efficiency, at least, for spot-up attempts. Maybe I'll take another look, though. I'll update the post if I find something. 2. Oh, before I forget, isn't what I've done essentially equivalent to two-way ANOVA? 1. It is, but ANOVA probably isn't suited for points scored if it can only be 0, 2, or 3; that's very coarse whereas ANOVA assumes a normal distribution. If you had enough data that you could use PPS as your DV you could probably run the linear regression/ANOVA on that. I'm mostly concerned because you shouldn't get a non-significant regression (your last one has a p>.05) if there are significant effects within (Lee as a shooter is fairly low). The errors on your coefficients are likely wrong. The multinomial or glm regressions should account for the small cells for the most part; you'll just have bigger errors on the two point shots (or there are fancier things you can do). Then if you're interested in efficiency you could just combine the model estimates to get an EV. Figure out the probability for making a two pointer given a certain shooter and passer, multiply by two, then add to the probability of making a three given a certain shooter and passer times three. 1. Thanks, that's a good point about the distribution. Looks like I need to read up on multinomial regression and how to do it in R.
{"url":"http://www.d3coder.com/thecity/2011/07/29/warriors-scouting-report-tracking-potentially-assisted-spot-up-jumpers-using-synergy/","timestamp":"2014-04-19T22:06:26Z","content_type":null,"content_length":"77993","record_id":"<urn:uuid:0a7f87c5-5138-4e36-9c12-ba85cfead066>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
James Gregory and the Pappus-Guldin Theorem - A Ratio Between the Trunk and the Solid of Revolution Gregory treats separately the ratio between the trunk and the solid of revolution and the ratio between the trunk and the cylinder. The key to understanding the first ratio is Cavalieri's Principle: "If two plane (or solid) figures have equal altitudes, and if sections made by lines (or planes) parallel to the bases and at equal distances from them are always in the same ratio, then the plane (or solid) figures also are in this ratio". (See [10, pp. 315-321], cited in [11, p. 516].) Once Gregory establishes a fixed ratio between corresponding slices of the trunk and the solid of revolution, this principle will imply that the solids themselves have the same ratio. He begins with the trunk, slicing it by an arbitrary plane OPV perpendicular to the axis of rotation. Let V denote the point of intersection of OPV with the axis of rotation and let O and P denote the intersections of OPV with l and k, respectively. Then OP = \hbox{height of the cylinder} \qquad \hbox{and} \qquad PV = \hbox{radius of rotation} Note that the values of will not change no matter what the choice of the perpendicular plane , but the intersection of with the trunk will form a trapezoid vary in size as moves along the axis of rotation. By similar triangles, {OP \over PV} = {GF \over FV} \qquad \hbox{and} \qquad {OP \over PV} = {HE \over EV} Multiply the first equality by 1/2π. Then some elementary arithmetic shows that \eqalign{ {OP \over 2 \pi PV} = {GF \over 2 \pi FV} & \Rightarrow {OP \over 2 \pi PV} = {GF \over 2 \pi FV}\cdot {{1 \over 2} FV \over {1 \over 2} FV } \cr & \Rightarrow {OP \over 2 \pi PV} = {{1 \ over 2} GF\cdot FV \over \pi FV^2} \cr & \Rightarrow {OP \over 2 \pi PV} = {area(\Delta GFV) \over area(circle(FV))} \cr } where ΔGFV denotes the triangle GFV and circle(FV) denotes the circle on radius FV. The second equality shows similarly that {OP \over 2 \pi PV} = {area(\Delta HEV) \over area(circle(EV))} Consequently, by applying Euclid V.19 to these two ratios, {OP \over 2 \pi PV} = {area(GHEF) \over area(annulus(FV - EV))} where area(GHEF) denotes the area of the trapezoid GHEF and area(annulus(FV - EV)) denotes the area of the annulus obtained by revolving the segment EF around the axis of rotation. Note that the numerator on the right is a slice of the trunk and the denominator is the corresponding slice of the solid of revolution. Since OP and PV do not change, it follows by Cavalieri's principle that the ratio of the volume of the trunk to the volume of the solid of revolution is equal to the ratio of OP to 2πPV. To be more specific, if AB is any planar figure let rev(AB) denote the volume of the solid of revolution obtained by revolving AB around an axis of revolution and trunk(AB) denote the volume of a trunk of a right cylinder over AB. Then {trunk(AB) \over rev(AB)} = {OP \over 2 \pi PV} ) is the height of the cylinder over and 2π ) is the circumference of the circle with radius equal to the radius of rotation for , then we can write { trunk(AB) \over rev(AB)} = { height(AB) \over circum(AB)} This is the sought after ratio between the solid of revolution and a trunk constructed from the same 2-dimensional figure. This formula also yields a way to describe the ratio between the volumes of two solids of revolution--something quite important for someone brought up to appreciate Euclidean proportion theory. Suppose, for instance, that AB and EF are two planar figures extended into 3-dimensions to form cylindrical figures which by assumption have the same height. Using the notation from above, the previous result implies that {trunk(AB) \over rev(AB)} = {height(AB) \over circum(AB)}\qquad \hbox{and}\qquad {trunk(EF) \over rev(EF)} = {height(EF) \over circum(EF)} ) = ) by assumption, we eliminate the common value from both equations to arrive at {rev(AB) \over rev(EF) } = {trunk(AB) \over trunk(EF)} \cdot {circum(AB) \over circum(EF)} This shows that the ratio between solids of revolution can be understood completely in terms of trunks and radii of rotation.
{"url":"http://www.maa.org/publications/periodicals/convergence/james-gregory-and-the-pappus-guldin-theorem-a-ratio-between-the-trunk-and-the-solid-of-revolution","timestamp":"2014-04-20T09:09:21Z","content_type":null,"content_length":"106299","record_id":"<urn:uuid:03acca93-0380-404e-9c29-ebdbef419588>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahoo Groups GEOSTATS: 1) stationary mean, 2) correl. dimension Expand Messages View Source Dear all out there, I have got 2 (maybe very basic) questions, the first one about the requirement of stationary mean in semivariance analysis, the 2. about the correlation dimension of fractal analysis. before performing a semivariance analysis, any secular trend must be removed from the data in order to meet the requirement of stationarity of the mean. Question: how do you decide at which scale fluctuations are actually trends ? Stationarity of the empirical data means regional mean = ยต(x) = const. over x, but over which range of x the regional mean has to be taken ? As an example, I add (see zipped attachment <examp1.doc>) the spatial distribution of count rates over a monazite (a radioactive mineral) containing beach of Brazil. The dots represent the sampling locations. The picture has been produced using 'naive' kriging with Surfer Software, i.e. using the default settings for the variogram and assuming isotropy, just in order to grossly visualize the distribution. I would say that there is an obvious trend, represented by the maximum between easting 300 and 450 m; but is the maximum at ca. 160 m also part of the trend ? Doesn't the big maximum in fact consist of 3 maxima at ca. 340, 370 and 410 m, respectively, which should be modelled by the trend surface ? (Apart from the problem, how to model such a trend structure.) It seems to me that the correlation dimension is quite a useful tool to assess the topologic structure of the spatial distribution of a variable; or could be if used properly. I use this kind of fractal dimension because it is (as I think) the easiest to calculate: D :<=> AM(n(r)) ~ r^D, where the left hand side denotes the number of points within distance r from a fixed point x, averaged over all points x. D is then easily calculated by log regression. Now the question: there is always an 'edge effect' to D produced by the fact that the sampling area is inevitably limited in space. An infinite complete regular quadratic sampling grid, e.g., has D = 2, but the same grid with finite extension has D < 2, because the points at the border have naturally less neighbours than points within the grid and therefore contribute to AM(n(r)) by lowering D and thus inferring a fractal patchiness of the structure which is clearly an artefact. For this reason, D depends heavily on the overall size (extent) of the grid (number of sampling points), regardless of its structure, which makes this quantity somewhat questionable, I think. Does somebody know how to deal with this problem ? Thank you very much & regards, PB Peter Bossew Georg Sigl-Gasse 13/11 A-1090 Vienna, Austria ph. +43-1-3177627 Your message has been successfully submitted and would be delivered to recipients shortly.
{"url":"https://groups.yahoo.com/neo/groups/ai-geostats/conversations/topics/1610","timestamp":"2014-04-19T07:09:44Z","content_type":null,"content_length":"41601","record_id":"<urn:uuid:b3db40c6-1cda-4822-88bb-61f233ecb471>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Intermediate Algebra Posted by brenda on Tuesday, May 22, 2012 at 1:27pm. Essay: show all work. Find the quotient: Related Questions intermediate algebra - Essay. Show all work. find the quotient: 49a^2-b^2... Intermediate Algebra - Show all work. Find the quotient:49a^2-b^2/7a-b algebra - Essay. Show all work. Find the quotient. 49a^2-b^2 --------- 7a-b intermediate algebra - Essay. Show all work. find the quotient: y^4+3y-1... Intermediate Algebra - Show all work. Find the quotient:6x^3-x^2-7x-9/2x+3 algebra - Essay. Show all work. Find the quotient y^4+3y-1 -------- y^2-3 intermediate algebra - Find the quotient...Show work.....y^4+y+1/y^2-9 algebra - Essay. Show all work. Find the following quotient: 36x^2-49y^2... Intermediate Algebra - Essay Question.Show all work.Find the difference.(3x^2-6x... math - Essay: Show all work. Find the quotient: y^4 + 3y -5 / y^2 + 7
{"url":"http://www.jiskha.com/display.cgi?id=1337707657","timestamp":"2014-04-20T12:19:04Z","content_type":null,"content_length":"8266","record_id":"<urn:uuid:ba49975e-477d-444c-8c22-823bbf060f20>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Worst-case analysis of a generalized heapsort algorithm - Comput. Math. Appl , 1998 "... In this paper we have observed and shown that ternary systems are more promising than the more traditional binary systems used in computers. In particular, ternary number system, heaps on ternary trees, and quicksort with 3 partitions do indicate some theoretical advantages over the more established ..." Cited by 1 (0 self) Add to MetaCart In this paper we have observed and shown that ternary systems are more promising than the more traditional binary systems used in computers. In particular, ternary number system, heaps on ternary trees, and quicksort with 3 partitions do indicate some theoretical advantages over the more established binary systems. The magic Napierian e plays the crucial role to establish the results. The experimental data, supporting the analysis, have also been presented. Keywords: Analysis of algorithms; Performance evaluation; Quicksort; Heaps; Divide and conquer technique 1 Introduction With the invention of computers, 2-parametric algebra, number system and graphs among other systems started to flourish with accelerated speed. Boolean algebra got its important applications in computer technology, binary number system has occupied the core of computer arithmetic, and binary trees have become inseparable in Revised version of ref. no. CAM 2974. y Corresponding Author. mathematical "... Abstract. In this paper we present a new data structure for implementing heapsort algorithm for pairs of which can be simultaneously stored and processed in a single register. Since time complexity of Carlsson type variants of heapsort has already achieved a leading coefficient of 1, concretely nlg ..." Cited by 1 (0 self) Add to MetaCart Abstract. In this paper we present a new data structure for implementing heapsort algorithm for pairs of which can be simultaneously stored and processed in a single register. Since time complexity of Carlsson type variants of heapsort has already achieved a leading coefficient of 1, concretely nlg n + nlg lg n, and lower bound theory asserts that no comparison based in-place sorting algorithm can sort n data in less than ⌈lg(n!) ⌉ ≈ n lg n − 1.44n comparisons on the average, any improvement in the number of comparisons can only be achieved in lower terms. Our new data structure results in improvement in the linear term of the time complexity function irrespective of the variant of the heapsort algorithm used. This improvement is important in the context that some of the variants of heapsort algorithm, for example weak heapsort although not in-place, are near optimal and is away from the theoretical bound on number of comparisons by only 1.54n. , 2002 "... In this report a new data strucutre named M-heaps is proposed. This data structure is a modi cation of the well known binary heap data structure. The new structure supports insertion in constant time and deletion in O(log n) time. Finally a generalization of the data structure to d ary M-heaps is p ..." Add to MetaCart In this report a new data strucutre named M-heaps is proposed. This data structure is a modi cation of the well known binary heap data structure. The new structure supports insertion in constant time and deletion in O(log n) time. Finally a generalization of the data structure to d ary M-heaps is presented. This structure has similar time-bounds for insertion and deletion. , 2001 "... An elementary approach is given to studying the recurrence relations associated with generalized heaps (or d-heaps), cost of optimal merge, and generalized divide-and-conquer minimization problems. We derive exact formulae for the solutions of all such recurrences and give some applications. In pa ..." Add to MetaCart An elementary approach is given to studying the recurrence relations associated with generalized heaps (or d-heaps), cost of optimal merge, and generalized divide-and-conquer minimization problems. We derive exact formulae for the solutions of all such recurrences and give some applications. In particular, we present a precise probabilistic analysis of Floyd's algorithm for constructing d-heaps when the input is randomly given. A variant of d-heap having some interesting combinatorial properties is also introduced.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1259756","timestamp":"2014-04-25T06:24:07Z","content_type":null,"content_length":"20278","record_id":"<urn:uuid:587a8e43-27dc-4d27-8b7f-07075820b47d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
A circle packing conjecture up vote 13 down vote favorite Consider $n$ circles with variable radii $r_1,\ldots, r_n$ that pack inside a fixed circle of unit radius. In other words, all $n$ variable-radius circles are contained in the unit radius circle and their interiors have empty intersections. The tangency graph of a packing comprises $n+1$ vertices, one for each circle, and edges between vertices if the corresponding circles are tangent. Conjecture: in a packing that maximizes $r_1+\cdots +r_n$, the corresponding tangency graph is planar and triangulated. This conjecture looks like it might be related to the Koebe-Andreev-Thurston circle packing theorem. The latter states that for every planar triangulated graph there is a corresponding circle packing of the kind described and that this packing is unique up to conformal transformations. While it may turn out that the KAT theorem can provide some insights on proving the conjecture, I believe that something else is going on. For instance, the radius-sum objective function is not conformally invariant. I have good numerical evidence in support of this conjecture. The optimum configurations I've found up to $n=20$ all have triangulated graphs. I'm posting this on MO because I also have something that looks like it may be "close" to a proof. Perhaps someone can close the gap or convince me that the gap is actually a bottomless chasm -- either would be helpful! Here is my proof strategy: 1. Use convexity to show that an optimal configuration maximizes the number of edges in the tangency graph. 2. Use Euler's theorem to show that a tangency graph that maximizes the number of edges is triangulated. This is a constrained optimization problem in $\mathbb{R}^{3n}$. Consider the constraint that applies to circles 1 and 2: $(x_1-x_2)^2+(y_1-y_2)^2 \ge (r_1+r_2)^2$. This type of constraint is called "reverse convex" (the feasible region is the complement of an open convex set). Feasible regions in reverse convex problems (intersection of open set complements) can be quite complex -- they may not even be connected. On the other hand, they have a very nice property when we are maximizing a convex function: an optimum can always be found at a "vertex" of the feasible region. In a reverse convex problem in $\mathbb{R}^{N}$, a vertex is a point of the feasible region where at least $N$ of the constraints are equalities. We can think of reverse convex problems as generalizing linear programming in a way that inherits all the nice local properties. The existence of a global optimizer requires that the feasible region is non-empty and compact. This is not an issue for the circle packing problem since we can let the radii range over all the real numbers and add reverse convex constraints $r_1\ge 0,\ldots,r_n\ge 0$. The alert reader will already have realized that not all of the constraints in the circle packing problem are reverse convex! The constraints that apply to the fixed unit circle have the wrong sense of the inequality, e.g. $x_1^2 + y_1^2 \le (1-r_1)^2$. One can try to fix this problem by replacing the fixed unit circle with a regular $M$-gon and taking the limit (in some sense) of large $M$. This has two nice consequences. First, the optimization is now truly reverse convex (half-plane constraint for every side of the polygon) and so there is an optimizer where exactly $3n$ constraints are active (at their equality value). To see the second nice feature we have to do some counting. The tangency graph has one new feature when the fixed circle is replaced by a regular $M$-gon: it is no longer simple because it may have doubled edges between the variable-radius circles and the polygon (whenever a circle is tangent to adjacent polygon edges). Let the number of circles with double tangencies be $D$. If $E$ and $F$ are the number of edges and faces of the graph, and $\tilde {E}$ and $\tilde{F}$ are these quantities when the doubled edges are merged into single edges, then $E=\tilde{E}+D$ and $F=\tilde{F}+D$. Since our graph has $n+1$ vertices, and reverse convex programming tells us there is an optimum with $E=3n$ tangencies, Euler's theorem gives $n+1-3n+F=2$, or $F=2n+1$. We therefore have the following formulas for the "reduced graph" after merging doubled edges: $\tilde{E}=3n-D$, $\tilde{F}=2n+1-D$. The reduced graph is simple and planar and satisfies $2\tilde{E}\ge 3\tilde{F}$ where equality implies that the graph is triangulated. Using our formulas this inequality becomes $D\ge 3$. The result of this analysis is that optimum configurations in the $M$-gon have at least 3 circles with double tangency, and that the reduced graph is triangulated when this minimum holds. The number 3 is interesting. I believe it corresponds to the fact that the conformal transformations are fixed by specifying 3 points on the boundary of the region (the $M$-gon) where the circles are mapped. Sacrificing the symmetry of the fixed circle paid off because it allowed the optimization problem to have discrete solutions (whose existence follow from reverse convex programming). There are two gaps in the proof. How do we take results for the $M$-gon and by some limiting process prove a theorem about the circle? Second, how do we prove $D=3$? Optimal configurations with $D>3$ become more unlikely as $M$ becomes large because in that case more than the minimum number of active constraints arise from double tangencies. After all, the pair of constraints at a double tangency become degenerate at $M=\infty$. I believe the conjecture is true for the class of objective functions $r_1^p+\cdots + r_n^p$ where $1\le p < 2$. The case $p\ge 2$ is uninteresting because the optima degenerate into a single circle that completely fills the fixed circle, the rest having zero radius. mg.metric-geometry oc.optimization-control global-optimization packing 1 You are probably already familiar with Appollonian circle packings. Their tangencies form triangulations, and much is known about their bends (inverse radii), largely through the work of Graham, Lagarias, Mallows, Wilks, and Yan. Perhaps the bend equations may be of some help in establishing your conjecture. – Joseph O'Rourke Sep 9 '10 at 0:09 add comment 1 Answer active oldest votes You have an elaborate set of ideas, and I haven't thought through all of what you outlined, but here's a suggestion: Oded Schramm generalized the circle packing theory to include arbitrary convex shapes, and showed they work in much the same way. (The famous case of packing squares is one instance included in this generalization). His theory even allows the shape up vote 14 to be a function of position and size, but that generality seems unnecessary here. The suggestion: consider a set of regular N-gons packed inside an N-gon. At corners, they will touch at down vote more than one point, but it is a connected set, so there is a well-defined adjacency graph with out doubled edges, and no room for extra disks to try to hide in the corners. The reverse accepted convex constraints become piecewise linear. I think the limiting process should be straightforward to analyze. add comment Not the answer you're looking for? Browse other questions tagged mg.metric-geometry oc.optimization-control global-optimization packing or ask your own question.
{"url":"http://mathoverflow.net/questions/38086/a-circle-packing-conjecture","timestamp":"2014-04-16T22:02:36Z","content_type":null,"content_length":"56825","record_id":"<urn:uuid:2ec1ef6e-a63d-442c-b320-cc11f3f30fb6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
ultiple : What Problem Statement 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest number that is evenly divisible by all of the numbers from 1 to 20? This is essentially a requirement to compute the least common multiple for the values 1 through 20. We first need to find the prime factors for each number. For some of the numbers, some of the prime factors occur more than once. eg. for 12, the prime factors are 2 and 3, of which while 3 occurs once, 2 occurs twice. Thus for each number we create a hashmap of the prime factors and the number of occurrences. To do so we define a inc_count(dict_,key) which increments the occurence count of the key in the dictionary. This dictionary for each number is computed once and is referred to as new_factors. We need to ensure that we eventually create yet another dictionary which keeps track of the maximum count for each factor across all the numbers. We define yet another dictionary factors which is used to keep track of the maximum occurences of a given factor across all the new_factor instances. We finally fold the factors dictionary by compute a product of all the factors with each factor being used as many times as it occurs in the factors dictionary. That gives us the least common multiple, which is the solution to the problem. 1 from itertools import chain 3 # function to take the first value of a generator and ignore the rest 4 def first(gen): 5 try: 6 return gen.next() 7 except StopIteration : 8 return None 10 # generator to return all the prime factors of a given number 11 def prime_factors(n): 12 while n > 1 : 13 ff = first(val for val in chain( 14 xrange(2,int(n**0.5+1.0)),[n]) if n % val == 0) 15 yield ff 16 n = n / ff 18 # increment the occurrences value of a key in a dictionary 19 def inc_count(dict_,key): 20 dict_[key] = dict_[key] + 1 21 return dict_ 23 # keep track of the maximum occurrences of a key in a dictionary 24 def set_max_count(dict_,(key,val)): 25 if dict_[key] < val : 26 dict_[key] = val 27 return dict_ 29 # Actual solution 30 # Initialise a dictionary with all keys with occurrences set to zero 31 factors = dict((n,0)for n in range(2,21)) 33 # For each number for whom we are computing the least common multiple 34 for num in range(2,21) : 35 # Compute the prime factor occurences dictionary for the number 36 new_factors = reduce( 37 inc_count, 38 prime_factors(num), 39 dict((n,0)for n in range(2,21))) 40 # Update the tracking dictionary to keep track 41 # of the maximum occurrences of a key (factor) 42 factors = reduce(set_max_count, new_factors.items(),factors) 44 # Generate a product by multiplying all the factors 45 number = reduce( 46 lambda num,(key,val) : num * (key ** val), 47 factors.items(), 48 1) 49 print number
{"url":"http://codeblog.dhananjaynene.com/2010/01/least-common-multiple-what-is-the-smallest-number-that-is-evenly-divisible-by-all-of-the-numbers-from-1-to-20/","timestamp":"2014-04-19T09:24:27Z","content_type":null,"content_length":"18455","record_id":"<urn:uuid:c3b359e7-fb1e-490e-9b14-60d815f095cd>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Extending REG-EXP to 2-Dimension. rockwell@nova.umd.edu (Raul Deluth Miller) Sat, 3 Dec 1994 18:38:20 GMT From comp.compilers | List of all articles for this month | Newsgroups: comp.compilers From: rockwell@nova.umd.edu (Raul Deluth Miller) Keywords: lex Organization: University of Maryland University College References: 94-11-137 94-12-021 Date: Sat, 3 Dec 1994 18:38:20 GMT Jan-Peter de Ruiter: : [ Request for regexp systems that work on "boxes" of text ] : As far as I know it has not been solved in any way. The problem is : that you need to extend the notion of linearity (characters : following other characters) in 2 dimensions. Or, more generally, you would have to take a step back and enumerate some notions of structure, and the information required for such : This could perhaps be done by using a 'circular' approach, : for instance like this: : CCCCC : CBBBC : CBABC : CBBBC : CCCCC This is a fun example. Here's some interpretations: (A) you're looking for an exact match. -- easily implementable using a finite automata, but boring. : So in the expression "ABC", A, B and C are all regexps that : describe properties of a 'circle' of text. These expressions : themselves should be modified to be able to describe circular : structures, and the relations between these circular expressions : should be formalized in some way or other. So vague as to be useless. More specifically, describing an arbitrary "circle" is not a problem for a finite automata -- like parenthesis matching, it involves counting. (C) Perhaps there's some sort of general "finite automata that can make turns". Here, a circle might be a closed sequence involving four right turns. (D) Perhaps there's some sort of concept of "a restricted indefinite automata that spawns finite automata." Here, you'd want to invent some sort of concept of a rendevous of (e.g. 2) finite automata to achieve anything meaningful. I suspect this would be turing equivalent for the general case (where it can be thrown at arbitrarily large regions of text). (E) You could introduce the concept of a finite automata without backtracking which is first used to transform the region of text in one direction followed by a similar finite automata without backtracking which is used to transform the region of text in some other direction. Call this the "ledger" model. (F) And then there's the whole field of cellular automata. (e.g. John Horton Conway's game of "Life"). Perhaps define a class of cellular automata which reduce the outer borders with each generation? Raul D. Miller Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/94-12-033","timestamp":"2014-04-18T10:42:18Z","content_type":null,"content_length":"7404","record_id":"<urn:uuid:9ab7d097-587f-41fe-9421-2607c5a51e4e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Complier Average Causal Efect (CACE) estimation Jichuan Wang posted on Monday, January 21, 2002 - 12:58 pm Two questions: 1). For only two treatment conditions (i.e., TX vs control): is it possible to have three levels of compliance (e.g., compliance, partial compliance, and non-compliance) so that the training variables could look like the following: c1 c2 c3 1 1 1 (in control group) 1 0 0 (compliance) 0 1 0 (partial compliance) 0 0 1 (non-compliance) Accordingly, the latent class variable would be specified with three classes in the mixture intervention model. 2). Is it possible to run a CACE model with three treatment conditions (e.g., tx1, tx2, and control) and two levels of compliance (i.e., compliance vs non-compliance)? It looks like we could include two TX dummy variables in the model, but how to define the training variables and interpret the latent class variable? Thank you very much for your help! Booil Jo posted on Wednesday, January 23, 2002 - 2:27 pm 1) It is possible to arrange the training variables as you did to reflect three levels of compliance. However, this model is not identified, even though you impose the exclusion restriction on noncompliers. In this case, you have to build the identifiability relying on more structural assumptions than the exclusion restriction. 2)This is another underdeveloped area in CACE modeling. I will compare tx1 vs. control, and tx2 vs. control using regular 2-class CACE models. However, It is not clear how to interpret the results when you compare two active treatments using CACE estimation, as you already pointed out (unless double-blining is possible). JeremyMiles posted on Wednesday, March 24, 2004 - 3:59 am I wondered if anyone reading knew if this was an appropriate problem for CACE modelling. We have a problem related to, but not exactly the same as, non-compliance - differential non-recruitment. We carried out a cluster randomised study of a new form of therapy versus standard care. The clusters were clinics, and there were 13 clinics allocated to standard, and 13 allocated to the new form of care. The problem that we encountered was that the clinics allocated to the new form of care - intervention - thought this was very exciting, and recruited a large number of patients (~800), whereas the usual care - control - didn't try so hard, and recruited only ~400. The control group differs - probably in initial severity, and maybe on other characteristics. We have a wide range Is this an example of somewhere we could use a CACE model? We have two classes in the intervention arm, and one class in the control arm. It's sort of related to ITT issues, but only sort of. Linda K. Muthen posted on Wednesday, March 24, 2004 - 6:25 am As I understand it, once you have lost your randomization, the method would no longer apply. Anonymous posted on Wednesday, November 10, 2004 - 6:42 am Dear Bengt and Linda I have some questions regarding CACE models in MPlus 3.11. First, I wonder whether my data are suited to this model. I have a study where respondents to a survey are randomly allocated to either treatment or control. In the treatment condition they are shown a film about genomic science, the control receive no information. I want to look at the effect of watching the film on subsequent attitude questions. However, approximately 20% of the treatment group report not having understood the film. I would like to treat this group as non-compliers and estimate the complier-average causal effect of viewing the film. Would this seem appropriate to you? Second, I am not sure how to interpret the MPlus output when fitting the CACE model in example 7.24. Can you point me somewhere that I might be able to find some pointers on what I should be looking for? In particular, which parameter denotes the causal effect? Is it the regression of Y on x2 in latent class 2? I have sent my output separately to MPlus Product support. Thank you, bmuthen posted on Sunday, November 14, 2004 - 12:04 pm Yes on the question in the first paragraph. Example 7.24 uses the x2 variable to represent the treatment-control dummy in line with ex 7.23. So, yes, the regression coefficients for y on x2 is the causal effect. Anonymous posted on Monday, November 15, 2004 - 6:25 am Dear Bengt thanks for your reply. I have a few further questions about this: 1. the output suggests to me that latent class 2 in my analysis are the non-compliers (this is because of the relative size of the classes. However, the regression of Y on X2 is fixed to zero in this class. Should class 1 be the non-compliers in the setup for this model? 2. what is the role of the x1 variable? Should I be including variables here that are predictive of complier/non-complier status? Can I include more than one variable for X1? 3. How do I deal with differential nonresponse in the CACE model? Can I simply specify a weight variable in the usual way? thanks again, bmuthen posted on Monday, November 15, 2004 - 7:17 am 1. There should be no doubt from the data about which class is the non-complier class - it is the class with no compliance for the treatment group (so an observable matter). The standard CACE model assumes no treatment effect for the non-compliance class since they do not receive treatment (see, however, modifications made in Booil Jo papers on our web site). 2. x1 is an example of a variable that strengthens the analysis much like covariates with ancova in randomized studies. You can have many such variables pointing to either the latent class variable or the outcome or both. See the Little & Yau article in Psych Methods on our web site. 3. By differential nonresponse, do you mean different across the 2 classes? If so, this is a topic studied by Frangakis and Rubin within the area of non-ignorable missingness - it can be handled by Mplus using the latent classes to predict missingness. But maybe I am misunderstanding your question. Michael Beets posted on Tuesday, October 10, 2006 - 5:34 pm We have a multiyear evaluation trial of a school-based program using student reports of outcomes, students nested within classrooms. In our initial CACE analysis we dichotomized the implementation measure to create complier and non-complier categories. Results were good, but our child-report covariates didn't predict compliance well. We have teacher-report covariates that might predict compliance better. Can Mplus do a CACE model that includes teacher-report covariates to predict compliance? Because students are nested within classroom, all students who had the same teacher have the same values for the teacher-report covariates. Is a multilevel CACE analysis needed here to correctly use the teacher-report covariates? Bengt O. Muthen posted on Tuesday, October 10, 2006 - 5:59 pm Is compliance a between level (classroom) variable with students being the within level? If so, the CACE latent class variable is a between variable. This feature is not in the current Mplus Version 4.1, but will be in the next version 4.2 which is due out in a couple of weeks. If compliance varies across students within classrooms, this is related to work by Booil Jo and you may want to contact her about it. Bengt O. Muthen posted on Wednesday, October 11, 2006 - 10:33 am If you send me an email, I have a paper and some communication with Booil Jo to share with you on this. Michael Beets posted on Tuesday, November 07, 2006 - 7:29 pm I am currently evaluating a multiyear trial of an elementary school primary prevention program. In the program, we have varying levels of exposure to the program (an unintended side effect of varying implementation of teachers). I am interested in the CACE approach to model the effects of the program. Additionally, I’ve run propensity score analyses, with favorable results. My question centers on locating a reference that may provide a substantive discussion on the key similarities and differences of CACE and propensity score methods. Would you be able to point me a direction? Thank you. Elizabeth Stuart posted on Thursday, November 09, 2006 - 12:05 pm One of the main differences between the CACE and propensity score methods are the underlying assumptions, so you will want to think about which is more reasonable for your setting. In particular, the CACE models use the fact that the original treatment assignment was randomized (I'm not sure if it was in your example), and then make exclusion restrictions and the monotonicity assumption to estimate impacts. So they rely on having some "instrument" (the thing that was randomized) that affects the exposure level that someone gets. Propensity score methods don't assume anything was randomized, but instead rely on an assumption of unconfounded treatment assignment: they assume that there are no hidden biases between the exposed and unexposed groups. In your case, this would imply that there are no unobsrved differences between the high exposure and low exposure groups--that all differences are captured by your observed covariates. So they assume that only observed variables affect the exposure level that someone gets. I hope this helps. In another post I will list some references that you could look at. Liz Stuart Elizabeth Stuart posted on Thursday, November 09, 2006 - 12:07 pm This is a followup to my previous post, with references for propensity score and CACE assumptions and comparisons. For the CACE assumptions I like the original Angrist, Imbens, and Rubin paper: Angrist, J.D., Imbens, G.W., and Rubin, D.B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91, 444-455. For propensity score analyses I like this paper by Rubin: Rubin, D.B. (2001). Using propensity scores to help design observational studies: Application to the tobacco litigation. Health Services and Outcomes Research Methodology, 2, 169-188. And there are two references I know of that compare instrumental variables (CACE) models and propensity score models: Posner, M.A., Ash, A.S., Freund, K.M., Moskowitz, M.A., and Shwartz, M. (2001). "Comparing standard regression, propensity score matching, and instrumental variables methods for determining the influence of mammography on stage of diagnosis." Health Services and Outcomes Research Methodology, 2, 279-290. Landrum, M.B. and Ayanian, J.Z. (2001). "Causal effect of amulatory specialty care on mortality following myocardial infarction: A comparison of propensity score and instrumental variable analyses." Health Services and Outcomes Research Methodology, 2, 221-245. Liz Stuart Michael Beets posted on Thursday, November 09, 2006 - 1:32 pm I really appreciate your assistance on this topic. I will delve into these references and if any new question should arise I will be sure to ask. Have a beautiful day. Scott Grey posted on Wednesday, February 21, 2007 - 10:37 am I'm not sure what this error output means. Can you help? alc30_11 ON alc30_7; class#1 ON latino black am_ind asian oth_race male family age alc30_7 risk2 risk3 risk4; age WITH latino black am_ind asian oth_race male family alc30_7; alc30_11 ON treatms; alc30_11 ON treatms@0; FILE IS results; RECORDLENGTH = 1000; *** FATAL ERROR Thuy Nguyen posted on Wednesday, February 21, 2007 - 11:17 am Please send your input, output and data to support@statmodel.com. Not enough information is available here to determine the cause of the error message. Charles B. Fleming posted on Tuesday, February 05, 2008 - 10:45 am I have data from an experimental study in which I have both "noncompliers" in the experimental condition and "always-takes" in the control condition. I have run some CACE models using Booil's syntax for the two class CACE model (i.e., compliers and noncompliers) but extending the logic to a three class model (i.e., compliers, noncompliers, and always-takers). The results seem to make sense and match up reasonably closely with the estimate of the treatment effect I get with an instrumental variable approach or running an ANCOVA model that controls for covariates related to compliance. The standard errors for the treatment effect are somewhat larger using the CACE model compared to the ANCOVA model, which seems right. (I have not figured out how to get the standard errors or include covariates using the instrumental variable approach). I am wondering whether I am in uncharted waters with the three class CACE model. I looked through Booil's publications and have not found a similar example. Linda K. Muthen posted on Tuesday, February 05, 2008 - 2:02 pm Following is a response from Booil Jo: It is possible to do 3-class CACE modeling in Mplus. The model will be identified by imposing the exclusion restriction both on never-takers and always-takers. Monotonicity is also necessary. Under this condition, Mplus should provide estimates close to those from the IV approach. However, since we are dealing with a parametric model, substantial deviation from normality may lead to erroneous solutions. Including covariates is not only possible, but also prevent this event from happening. If one or both of the exclusion restrictions are relaxed, identification of the model will depend more on covariate information and normality, and therefore more caution is needed. Cross-validating the results by using both parametric and semi or non-parametric approaches might be a good idea in this case. I have not tried many examples of 3-class CACE modeling, and have not seen many published examples using parametric approaches (and none using Mplus). However, there are many published examples of multi-class CACE modeling using the Bayesian approach. Anonymous posted on Wednesday, September 16, 2009 - 3:04 pm I have a few questions regarding a CACE model I am attempting to run. 1) What is the difference between these two error messages? *** WARNING Data set contains cases with missing on all variables. These cases were not included in the analysis. Number of cases with missing on all variables: 2 *** WARNING Data set contains cases with missing on x-variables.These cases were not included in the analysis. Number of cases with missing on x-variables: 84 2)How can I maximize the variables used without effecting the Entropy? Linda K. Muthen posted on Wednesday, September 16, 2009 - 3:24 pm The first message is about missing on all analysis variables. The second message is about observations with missing on one or more observed exogenous variables. I don't understand your second question. Anonymous posted on Wednesday, September 16, 2009 - 4:05 pm Thanks for the answer and I apologize for being vague in my second question. Regarding the number of cases with missing observations on the exogenous variables, I have found on other posts using other types of analyses that mentioning the variances works to include cases with missing data on the covariates. Is this also true for CACE models (perhaps by saying x2; again in the %overall% model)? If so, will this change the entropy of the model (the model continues to run well without these cases)? I hope this is more clear, and thank you in advance. Linda K. Muthen posted on Thursday, September 17, 2009 - 8:35 am If you change the sample by bringing in the observations with missing on x's, you change the sample and the entropy will most likely change also. Anonymous posted on Monday, January 04, 2010 - 2:04 pm How large of a sample size is needed to conduct a CACE model? Linda K. Muthen posted on Monday, January 04, 2010 - 5:24 pm Following is an answer from Booil Jo: Assuming that parametric estimation methods (e.g., 2-class mixture in Mplus) are used, around 100 or more subjects in each compliance class will result in good CACE estimates and standard errors. Around 50 subjects in each compliance class will still yield reasonably good estimates. However, if parametric assumptions and/or model identifying assumptions (e.g., exclusion restriction) are not met, the quality of estimates will deteriorate. Covariates may play important roles here. By including good covariates (i.e., predictors of compliance type) in the estimation model, one may obtain reasonably good CACE estimates with smaller samples. These covariates also tend to reduce sensitivity of CACE estimates to deviations from underlying model assumptions. Sam Vuchinich posted on Monday, May 03, 2010 - 2:50 pm I have data from a randomized trial and am interested in the noncompliance Example 7.24 in the Mplus manual. In particular, can this analysis be done with missing data on the outcome variable? I do have baseline covariates that predict missingness, the outcome variable and compliance, and am planning to use these to identify the CACE. I have read the recent Jo, Ginexi & Ialongo (in press) paper posted at your web site which nicely describes missing outcomes and CACE in Mplus. But they identify the CACE by imposing restrictions on the relationship between outcome missingness and noncompliance. That would be useful as sensitivity analyses, but for the primary analysis I want to take advantage of the informative covariates, which were carefully chosen. Is it possible to use example Mplus 7.24 with multiple imputation? If so, which Mplus multiple imputation method would be best? Bengt O. Muthen posted on Tuesday, May 04, 2010 - 10:21 am It is an interesting question what is best here. You could do imputation where you specify that tx is binary. But note that regular imputation does not acknowledge that you have a mixture of compliers and non-compliers. So the imputations would seem biased to some degree. Regular ML mixture modeling would seem more straightforward, with later follow-up using the latent ignorability NMAR approach. Note that ML under MAR estimates [y | x] from those with complete data on y. The subjects with data on only x would contribute only to the marginal [y]. You have the parameters you want already in [y|x]. I am saying that because there doesn't seem to be a reason to bring the x into the model in ML in this case. If your covariates that predict missingness are not part of your model, you could take the missing data correlates approach of UG ex 11.1. Sam Vuchinich posted on Tuesday, May 04, 2010 - 11:21 am Thank you for the suggestion to use the AUXILIARY command. I am interested in applying the GMM described in the 2003 Jo & Muthen chapter in the Reise & Duan book, and see that the AUXILIARY command provides a convenient way to deal with missing outcomes, although covariates that predict compliance and the outcome are still needed in the model proper to identify it. Is it possible to use the AUXILIARY command in a three-level model in Mplus, as in a longitudinal cluster-randomized trial (e.g., level 1 = time points, level 2 = person, level 3 = cluster)? Linda K. Muthen posted on Wednesday, May 05, 2010 - 8:46 am The AUXILIARY (m)option is not available for multilevel modeling. Sam Vuchinich posted on Wednesday, May 05, 2010 - 2:39 pm Is there any chance that the AUXILIARY option for multilevel modeling will be included in future versions of Mplus? Bengt O. Muthen posted on Thursday, May 06, 2010 - 8:13 am We will put it on our list for future developments. In the meanwhile note that this type of modeling can be set up by the user. The principle for the single-level approach is described in the web movie Missing Data Correlates using ML which you find at That principle can be generalized to twolevel models. Sam Vuchinich posted on Thursday, May 06, 2010 - 4:05 pm Thanks very much for this suggestion, I will check it out. Sam Vuchinich posted on Monday, May 10, 2010 - 5:50 pm The "Missing Data Correlates using ML" video was clear. I see how to add in the WITH statements rather than using the AUXILIARY command in that kind of model. I am interested in using this method in the multilevel growth mixture model format of UG Example 10.9. That would represent a couple of extensions from the video example. First, consider the case of UG Example 10.9 with some missing data in the outcomes, y1-y4. If there were three Time-1 covariates that were not measures of y (as they are in the video), that are associated with missingness in y1-y4, could they be used in the same manner as the z1-z5 in the video as auxiliary variables? That is, would this be done by adding to the %WITHIN% section of the Example 10.9 model the 3 correlations among the 3 auxiliary variables, and the 12 correlations between the 3 auxiliaries and the 4 outcome observations? If this is incorrect could give me a hint on how specify the auxiliary effects in MODEL = TWOLEVEL MIXTURE? Second, would the same approach extend to data that was missing from whole clusters of individuals (such as schools) that were missing from some waves of the study? That is, assuming there are 3 cluster-level covariates that are associated with whether whole clusters (data on students in a school) are missing from some waves of data. I can see how this might be done in the %BETWEEN% section of Example 10.9, analogous to the %Within% section as described above. Bengt O. Muthen posted on Tuesday, May 11, 2010 - 9:57 am These are good questions and they are research questions that have to be studied, including via simulations. I'm afraid I don't have the specific answers since I haven't done this myself yet. Sam Vuchinich posted on Tuesday, May 11, 2010 - 6:22 pm Though the AUXILIARY approach to dealing with missingness in the outcomes (y1-y4) of multilevel Example 10.9 is premature, would it not still be possible to provide evidence for MAR if there were within-level covariates that were predictive of the outcome, predictive of missingness, and of some substantive interest in the model. Such covariates could influence the intercept or slope. Substantive interest could come from a need to adjust model estimates for something like gender. Granted, I am talking about very informative covariates. But if you found a couple good ones, with moderately strong associations, and no interactions, wouldn't that address missingness in multilevel models such as Example 10.9? Bengt O. Muthen posted on Wednesday, May 12, 2010 - 6:15 pm If you have covariates that are predictive of the outcome and missingness as well, I would not hesitate to include them in the model and thereby make MAR more plausible. The missing correlates situation is different because the correlates don't have a role in the model as predictors of the outcome, only missingness. Hence, they shouldn't be included as covariates because the influence of the substantive covariates becomes distorted. I would first ignore the multilevel angle and see if missing data correlates have an effect or not. Anonymous posted on Monday, June 28, 2010 - 5:25 pm In attempting to run a CACE model I receive the following warning: However, on the website i noticed that the same error is shown in an GMM example: Can this warning be ignored? If not, how should one proceed in ameliorating the error? Linda K. Muthen posted on Monday, June 28, 2010 - 7:46 pm No, this message should not be ignored. For the variable w9xr, you should look for a negative residual variance or a correlation greater than one. If you can't see the problem, please send your full output and license number to support@statmodel.com. ywang posted on Tuesday, October 12, 2010 - 11:54 am Dear professors, I have two questions about CACE: 1. If y (outcome variable) is a categorical variable, can I do a CACE model with example 7.23 by only including a command of "categorical=y"? What else do I need to add to the input? 2. Is it possible to examine the interaction between treatment (not complier) and a covariate in the CACE model? If so, is it correct to simply generate an interaction term between treatment and the covariate, and include it as another covariate in the model? Thanks in advance! Linda K. Muthen posted on Wednesday, October 13, 2010 - 12:17 pm 1. Yes. The other difference is that instead of a mean and variance for the dependent variable, you would have a threshold. 2. Yes. ywang posted on Tuesday, October 19, 2010 - 11:10 am For CACE model, is there any paper or example of input file that combines latent growth modeling and CACE? Bengt O. Muthen posted on Tuesday, October 19, 2010 - 12:47 pm Yes, look under Papers, Non-compliance and you will find a Jo-Muthen paper related to that. Marie-Helene Veronneau posted on Tuesday, April 12, 2011 - 1:41 pm I have a question on how to interpret part of the output of a CACE model. Is the following section giving me the means of my predictors within class 1 (i.e., non-engagers in my model)? . ESTIMATED MODEL AND RESIDUALS (OBSERVED - ESTIMATED) FOR CLASS 1 . Model Estimated Means And is the following section giving me the means of my predictors within class 2 (i.e., engagers in my model)? . ESTIMATED MODEL AND RESIDUALS (OBSERVED - ESTIMATED) FOR CLASS 2 . Model Estimated Means I would assume that these are the values I would get if I saved the estimated engager status for each participant and requested means for each group separately, but the subtitle saying "RESIDUAL OUTPUT" is confusing to me. Thank you for your help. Bengt O. Muthen posted on Tuesday, April 12, 2011 - 6:11 pm Yes, those are the model-estimated means you want. sunyoung yoon posted on Wednesday, January 25, 2012 - 11:55 am Hi I'm working on a cluster principal stratification model. I wonder if I created group dummies properly or not according to the error message. Treatment was assigned in school level, and level of compliance is school level as well. Outcome level is student level. Since compliance variable in control group was not observed, I coded "0" in both c1 and c2. And treatment group has compliance (1) and non-compliance (0) in c1 and c2 respectively. c1 c2 0 0 control (assumed to be zero treatment effect) 1 0 treatment (compliance) 0 1 treatment (non-compliance) But then, I got an error message like the below. I greatly appreciate your help. *** ERROR There is at least one observation in the data set where all training variables are zero. Please check your data and format statement. Linda K. Muthen posted on Wednesday, January 25, 2012 - 12:07 pm See the Topic 5 course handout starting at slide 44. Instead of 0 0 you should have 1 1. This means they can be in either class. sunyoung yoon posted on Wednesday, January 25, 2012 - 2:56 pm Thank you so much! I just wonder if I'm doing it correctly still for dealing with cluster design. Could you look at the below syntax please? CLUSTER = school; BETWEEN = treat; CLASSES = c(2); TRAINING = c1-c2;ANALYSIS: TYPE IS TWOLEVEL random MIXTURE; score ; score ; score on treat; score on treat @0; score on treat; OUTPUT: tech1 tech2; Bengt O. Muthen posted on Wednesday, January 25, 2012 - 8:36 pm You should ask yourself if - the treatment is on the cluster level (the answer seems to be yes) - the latent compliance classes are on the person or cluster level (the way you have done it, the compliance is on the person level) You also want the mean of "score" to vary across the 2 compliance classes. Then you check if your training data agrees with class 1 being the non-compliers. sunyoung yoon posted on Wednesday, March 07, 2012 - 1:27 am I have two questions. Treatment and complier occur in school level, outcome (score) is in student level. But latent class size looks like complier variables were created in student level in the output. Also, I have an error message like below. Do you see any problems in the syntax so I can fix or is there any example of multi level CACE model? I appreciate your help so much. NAMES ARE school treat score c1 c2; USEVARIABLES ARE treat score school c1 c2; CLUSTER = school; BETWEEN = treat; CLASSES = c(2); TRAINING = c1-c2; TYPE IS TWOLEVEL random MIXTURE; score on treat; scoreon treat ; OUTPUT: tech1 tech2; Error message: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE CONDITION NUMBER IS 0.808D-11. PROBLEM INVOLVING PARAMETER 6. Latent Classes 1 571 0.21354 2 2103 0.78646 Linda K. Muthen posted on Wednesday, March 07, 2012 - 8:56 am If you want the classes to be between classes, add BETWEEN=c; to the VARIABLE command. You need to fix score on treat to zero in the non-complier class. See Example 7.23. sunyoung yoon posted on Thursday, March 08, 2012 - 11:07 pm Thank you so much. I added "BETWEEN=c;" But still I had students latent classes numbers. My compliance occurs in the school level meaning that latent classes size should be presented the number of And since my "non-complier class" is low quality of dosage, it's not the zero-treatment effect. In this case, still I have to fix to zero? If I run after fixing to zero, then coefficient of class 1 is 0 too. NAMES ARE id school treat score c1 c2; USEVARIABLES ARE treat score school c1 c2; MISSING IS .; CLUSTER = school; BETWEEN = treat c; CLASSES = c(2); TRAINING = c1-c2; TYPE IS TWOLEVEL random MIXTURE; score on treat; score on treat; OUTPUT: tech1 tech2; Linda K. Muthen posted on Friday, March 09, 2012 - 3:44 pm No, you don't have to fix it at zero. Check that your training data are between-level variables, that is, that all members of a cluster have the same value. If you continue to have problems, send your files and license number to QianLi Xue posted on Wednesday, August 29, 2012 - 6:47 am The User's Guide gives out two ways to implement the CACE model (i.e., Ex7.23 and Ex7.24). I noticed that the two approaches treat missing outcome variable (i.e., Y in the examples) differently. Ex7.23 will delete "cases with missing on all variables except x-variables." So it will delete all cases with Y missing. However in Ex7.24, only cases with both Y and u missing will be deleted, which means only cases in the control (not treatment) group with missing Y will be deleted. If this is correct. Which model do you recommend to use when there is missing data in Y? Thanks in advance for your help! Linda K. Muthen posted on Wednesday, August 29, 2012 - 8:04 am What you say does not seem correct. Can you please send the two outputs and your license number to support@statmodel.com so I can see what you mean. Elina Dale posted on Monday, May 06, 2013 - 8:23 pm Dear Dr. Muthen, I am trying to estimate CACE as I have RCT with non-compliance (55% of those in treatment group did not get it). My outcome variable is a cont latent variable measured through cat factor indicators. My treatment variable is trx (1/0) and I have compliance indicator p4p (1/0), which shows whether ind i actually received the trx. I ran my model and I got the following warning message: Final stage loglikelihood values at local maxima, seeds, and initial stage start numbers: -12933.575 533738 11 -12933.575 407168 44 Unperturbed starting value run did not converge. 2 perturbed starting value run(s) did not converge. I reran then with STARTS = 100 ; which is twice the number of starts before where I had 50. However, I got the same warning message, i.e. RERUN WITH AT LEAST TWICE THE RANDOM STARTS TO CHECK THAT THE I am not sure what perturbed and unperturbed staring values and what I should do as the next step to get the best loglikelihood and replicate it. Thank you! Linda K. Muthen posted on Tuesday, May 07, 2013 - 11:52 am We give the message to rerun every time you run. We have no way of knowing if this is your first or second run. If you have doubled the original starts and replicated the best loglikelihood several times, you should be fine. Unperturbed starting values are the default starting values which are used for the perturbed starting values. Elina Dale posted on Friday, May 10, 2013 - 7:38 pm Dear Dr. Muthen, Thank you for all your help! It seems my model ran now and I got the CACE estimates that seem to make sense. I am struggling now presenting and interpreting results, specifically assigning units. My outcome is a continuous latent variable measured through a set of observed indicators on a Likert scale. I have an assigned treatment variable (TRX=0/1) and I estimated CACE. My F1 on TRX is -2.673. This is my CACE estimate. What are the units of measurement here? For each of my factors 1st loading is fixed to 1. Will my results be easier to interpret if I fixed factor variances to 1? I want to present it to policy-makers and would like to find a good way of interpreting the results. Also, if you have good publications you could refer me to that is similar to my case, I'd greatly appreciate it. Thank you!!! Linda K. Muthen posted on Saturday, May 11, 2013 - 8:56 am I would interpret the standardized STD coefficient. Then the metric of f1 is mean zero and variance one. Elina Dale posted on Saturday, May 11, 2013 - 8:24 pm Dear Dr. Muthen, I tried to get the STD coeff but it seems that to get them one has to specify ALGORITHM=INTEGRATION. With default setting, I got a message that there were 50625 integ points & I had to reduce their number or use MC Integration. So, I used MC with the following commands: TYPE = COMPLEX MIXTURE ; The Model took a few hours to run but eventually it terminated normally and I got the usual output without any warning messages. BUT, the estimates that I got using these specifications differ significantly from the estimates that I got previously, before I specified MC Integration, using these commands: TYPE = COMPLEX MIXTURE ; !3rd RERUN WITH MORE STARTS TO MAKE SURE STARTS = 400 20 ; STITERATIONS = 20 ; My estimates in all 3 runs w/out integration were all the same and my loglikelihood was replicated. So, I trusted those estimates. Now it seems with MC I get different ones. Which one of the two should I trust? I've read on the Board that default numerical integ algorithm is better / more stable than MC. I couldn't do default b/c I had 4 dimensions & high number of integ points. But I still need to obtain STD results as per your earlier suggestion. Thank you! Linda K. Muthen posted on Sunday, May 12, 2013 - 10:27 am Send the output without STD and the output with STD and your license number to support@statmodel.com. Elina Dale posted on Thursday, August 15, 2013 - 1:44 am Dear Dr. Muthen, I am trying to estimate CACE as I have RCT with non-compliance. Here is my specification of the original model A: f1 BY item1 item2 item3 f2 BY item4 item5 item6 f3 BY item7 item8 item9 f4 BY item10 item11 item12 f1 ON trx ; f2 ON trx ; f3 ON trx ; f4 ON trx ; c ON X1 X2 X3 ; f1 ON trx; f2 ON.... etc I ran my model A 3 times and my best likelihood was replicated each time. All the factor loadings and beta coefficients stayed the same. Following this, I've added 3 more predictors, in addition to trx, and ran model B. I didn't alter the measurement part, i.e. factor indicators stayed the same. All other parts of how I specified the model also stayed unaltered. The part that changed in Model B input: f1 ON trx type1 type2 type3; f2 ON trx type1 type2 type3; f3 ON trx type1 type2 type3; f4 ON trx type1 type2 type3; Model B results show that Factor Loadings have changed(!) as well as beta coefficient of trx. I expected the latter, but I thought factor loadings should not have been altered. I rerun the model with more starts & iterations but got same results. Should factor loadings remain unaltered between these two models? If not, what can I do to find the reason for this error? Many thanks! Linda K. Muthen posted on Thursday, August 15, 2013 - 8:43 am The factor loadings can change. There may be a need for direct effects between the type and item variables due to measurement non-invariance. Elina Dale posted on Tuesday, November 12, 2013 - 6:03 pm Dear Dr. Muthen, I am writing to you to just confirm again that CACE estimation method can be used with observed treatment and latent outcome variable b/c the articles on CACE that I found all seem to use observed outcomes (such as PIRC Study). My Y or rather Y's are 4 factors measured through 20 items. My treatment is financial incentives, but I have high % of noncompliers, so am using CACE. If I can use MPlus to estimate CACE with a latent outcome, is the specification below correct? f1 BY item1 item2 item3 f2 BY item4 item5 item6 f3 BY item7 item8 item9 f4 BY item10 item11 item12 f1 ON trx ; f2 ON trx ; f3 ON trx ; f4 ON trx ; c ON X1 X2 X3 ; f1 ON trx; f2 ON.... etc Thank you! Bengt O. Muthen posted on Wednesday, November 13, 2013 - 8:58 am Yes, this is doable and your setup is fine. But you should be aware that there are several possibilities for how many measurement parameters for the DVs that should be invariant across the two classes (factor loadings, indicator intercepts, residual variances), and you could study the sensitivity of the results to that. Elina Dale posted on Wednesday, November 13, 2013 - 8:51 pm Yes, thank you, Dr. Muthen! I fixed the factor loadings & indicator intercepts to equal across two classes. I thought it would be reasonable to assume that, at least as a starting point. 1. I am wondering if there is good paper (like Booil Jo, 2002 on CACE) on how to check model assumptions etc with a CACE model where mediating variable is a LATENT variable consisting of 4 correlated X--> M --> Y where M is a latent variable that consists of 4 factors. 2. I am getting very strange estimates. For example, in my CACE model with just X & M where M was my outcome, I consistently got negative coefficients for trx. Now that I added Y and my M is acting as a mediating variable, coefficients of X on M are positive on 2 of the factors. The positive coefficients go also against exploratory data analysis results. I am wondering if the results are trustworthy. I have 805 subjects, I have a trx variable, a mediating variable (4 factors, 20 items), and an observed continuous outcome. I wonder if the sample is too small for such a complicated model or if there is something else I am doing wrong. Thank you!!! Bengt O. Muthen posted on Thursday, November 14, 2013 - 8:13 am 1. Not that I know. You may want to contact Booil Jo at Stanford. 2.I would break down the modeling into small parts to understand what is happening. For instance, first do X-->M without bringing in CACE. And look at each M factor separately (first making sure that the M factor analysis model fits well). The sample size should be sufficient. Elina Dale posted on Thursday, November 14, 2013 - 10:47 am Thank you, Dr. Muthen! I did do (2). I first fit a CFA and checked a model fit. Then I fit a model with just my treatment and my mediator as a my outcome (X-->M) without CACE. Then, I fit CACE for Then, since X and my final Y are observed variables, I fit regular regression to check X and Y association. Now, I am trying to fit the whole model X-->M-->Y using CACE. In this "big" model coefficients of X on M are getting reversed (what was neg before becoming positive, which doesn't make sense). Plus, I am getting a message that the model may not be identified. There is something wrong but unlike regular regressions, I do not know of diagnostic tools that we could use after fitting the model to check our assumptions etc. Could you please, help? Thank you! Bengt O. Muthen posted on Friday, November 15, 2013 - 4:38 pm Send the output to Support, including TECH1, TECH4, and TECH8. Elina Dale posted on Friday, November 15, 2013 - 7:20 pm Thank you!!! I am rerunning it now since I didn't specify TECH4 output. Will send it as soon as it finishes. Thank you! Elina Dale posted on Sunday, January 26, 2014 - 11:15 pm Dear Dr. Muthen, I was listening to your presentation on categorical factor indicators. There you say that when we use MLR as an estimator, MPlus uses logistic regression, so the coefficients are interpreted as OR. To estimate CACE, MPlus uses mixture modeling with MLR estimator. If everything is set up as in Ex 7.24, except the outcome is a latent variable measured on ordinal scale, do coefficients need to be In May 2013, I wrote that my outcome was a continuous latent variable measured through a set of observed indicators on a Likert/Ordinal scale. I would like to just confirm again that the coefficient for the treatment variable that I get in the output does not need to be exponentiated and I haven't misunderstood your response in May. Thank you! Bengt O. Muthen posted on Monday, January 27, 2014 - 9:36 am No exponentiation needed because your DV is a (latent) continuous variable. Elina Dale posted on Monday, January 27, 2014 - 11:15 am Thank you! This is very helpful! But does the point remain that we are fitting logistic regression with MLR estimator in mixture analysis? I know we have logistic regression for predicting compliance status and MPlus even gives OR at the end. But I wonder the part of the model where we predict outcome based on compliance. Since it uses MLR and factor indicators are categorical, is it a linear or logistic regression? I promise this is my last question. Thank you! Bengt O. Muthen posted on Monday, January 27, 2014 - 2:59 pm Just go with what the DV is. The compliance status is binary. The factor is continuous - it doesn't matter that the factor indicators are categorical since they are DVs only for the factor predicting the indicators, not in the prediction of the factor. Elina Dale posted on Monday, January 27, 2014 - 3:20 pm Thank you! This makes sense now. Greatly appreciate it! Back to top
{"url":"http://www.statmodel.com/discussion/messages/13/158.html?1384308235","timestamp":"2014-04-19T02:24:46Z","content_type":null,"content_length":"137159","record_id":"<urn:uuid:b605d5ce-ee6f-4c19-89e1-de2a2cd72f92>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Hidden Markov Models This page is under construction. The following presentation is adapted from [Rabiner & Juang, 1986] and [Charniak, 1993]. Notational conventions T = length of the sequence of observations (training set) N = number of states (we either know or guess this number) M = number of possible observations (from the training set) Omega_X = {q_1,...q_N} (finite set of possible states) Omega_O = {v_1,...,v_M} (finite set of possible observations) X_t random variable denoting the state at time t (state variable) O_t random variable denoting the observation at time t (output variable) sigma = o_1,...,o_T (sequence of actual observations) Distributional parameters A = {a_ij} s.t. a_ij = Pr(X_t+1 = q_j |X_t = q_i) (transition probabilities) B = {b_i} s.t. b_i(k) = Pr(O_t = v_k | X_t = q_i t) (observation probabilities) pi = {pi_i} s.t. pi_i = Pr(X_0 = q_i) (initial state distribution) A hidden Markov model (HMM) is a five-tuple (Omega_X,Omega_O,A,B,pi). Let lambda = {A,B,pi} denote the parameters for a given HMM with fixed Omega_X and Omega_O. 1. Find Pr(sigma|lambda): the probability of the observations given the model. 2. Find the most likely state trajectory given the model and observations. 3. Adjust lambda = {A,B,pi} to maximize Pr(sigma|lambda). A discrete-time, discrete-space dynamical system governed by a Markov chain emits a sequence of observable outputs: one output (observation) for each state in a trajectory of such states. From the observable sequence of outputs, infer the most likely dynamical system. The result is a model for the underlying process. Alternatively, given a sequence of outputs, infer the most likely sequence of states. We might also use the model to predict the next observation or more generally a continuation of the sequence of observations. Hidden Markov models are used in speech recognition. Suppose that we have a set W of words and a separate training set for each word. Build an HMM for each word using the associated training set. Let lambda_w denote the HMM parameters associated with the word w. When presented with a sequence of observations sigma, choose the word with the most likely model, i.e., w* = arg max_{w in W} Pr(sigma|lambda_w) Forward-Backward Algorithm Define the alpha values as follows, alpha_t(i) = Pr(O_1=o_1,...,O_t=o_t, X_t = q_i | lambda) Note that alpha_T(i) = Pr(O_1=o_1,...,O_T=o_T, X_T = q_i | lambda) = Pr(sigma, X_T = q_i | lambda) The alpha values enable us to solve Problem 1 since, marginalizing, we obtain Pr(sigma|lambda) = sum_i=1^N Pr(o_1,...,o_T, X_T = q_i | lambda) = sum_i=1^N alpha_T(i) Define the beta values as follows, beta_t(i) = Pr(O_t+1=o_t+1,...,O_T=o_T | X_t = q_i, lambda) We will need the beta values later in the Baum-Welch algorithm. Algorithmic Details 1. Compute the forward (alpha) values: a. alpha_1(i) = pi_i b_i(o_1) b. alpha_t+1(j) = [sum_i=1^N alpha_t(i) a_ij] b_j(o_t+1) 2. Computing the backward (beta) values: a. beta_T(i) = 1 b. beta_t(i) = sum_j=1^N a_ij b_j(o_t+1) beta_t+1(j) Viterbi Algorithm Compute the most likely trajectory starting with the empty output sequence; use this result to compute the most likely trajectory with an output sequence of length one; recurse until you have the most likely trajectory for the entire sequence of outputs. Algorithmic Details 1. Initialization: For 1 <= i <= N, a. delta_1(i) = pi b_i(o_1) b. Phi_1(i) = 0 2. Recursion: For 2 <= t <= T, 1 <= j <= N, a. delta_t(j) = max_i [delta_t-1(i)a_ij]b_j(o_t) b. Phi_t(j) = argmax_i [delta_t-1(i)a_ij] 3. Termination: a. p* = max_i [delta_T(i)] b. i*_T = argmax_i [delta_T(i)] 4. Reconstruction: For t = t-1,t-2,...,1, i*_t = Phi_t+1(i*_t+1) The resulting trajectory, i*_1,...,i*_T, solves Problem 2. Baum-Welch Algorithm To solve Problem 3 we need a method of adjusting the lambda parameters to maximize the likelihood of the training set. Suppose that the outputs (observations) are in a 1-1 correspondence with the states so that N = M, varphi(q_i) = v_i and b_i(j) = 1 for j = i and 0 for j != i. Now the Markov process is not hidden at all and the HMM is just a Markov chain. To estimate the lambda parameters for this Markov chain it is enough just to calculate the appropriate frequencies from the observed sequence of outputs. These frequencies constitute sufficient statistics for the underlying distributions. In the more general case, we can't observe the states directly so we can't calculate the required frequencies. In the hidden case, we use expectation maximization (EM) as described in [Dempster et al., 1977]. Instead of calculating the required frequencies directly from the observed outputs, we iteratively estimated the parameters. We start by choosing arbitrary values for the parameters (just make sure that the values satisfy the requirements for probability distributions). We then compute the expected frequencies given the model and the observations. The expected frequencies are obtained by weighting the observed transitions by the probabilities specified in the current model. The expected frequencies so obtained are then substituted for the old parameters and we iterate until there is no improvement. On each iteration we improve the probability of O being observed from the model until some limiting probability is reached. This iterative procedure is guaranteed to converge on a local maximum of the cross entropy (Kullback-Leibler) performance measure. The probability of a trajectory being in state q_i at time t and making the transition to q_j at t+1 given the observation sequence and model. xi_t(i,j) = Pr(X_t = q_i, X_t+1 = q_j | sigma, lambda) We compute these probabilities using the forward backward variables. alpha_t(i) a_ij(o_t+1) beta_t+1(j) xi_t(i,j) = ------------------------------------- Pr(O | lambda) The probability of being in q_i at t given the observation sequence and model. gamma_t(i) = Pr(X_t = q_i | sigma, lambda) Which we obtain by marginalization. gamma_t(i) = sum_j xi_t(i,j) Note that sum_t=1^T gamma_t(i) = expected number of transitions from q_i sum_t=1^T xi_t(i,j) = expected number of transitions from q_i to q_j Algorithmic Details 1. Choose the initial parameters, lambda = {A, B, pi}, arbitrarily. 2. Reestimate the parameters. a. bar{pi}_i = gamma_t(i) sum_t=1^T-1 xi_t(i,j) b. bar{a}_ij = ------------------------ sum_t=1^T-1 gamma_t(i) sum_t=1^T-1 gamma_t(j) 1_{o_t = k} c. bar{b}_j(k) = ------------------------------------ sum_t=1^T-1 gamma_t(j) where 1_{o_t = k} = 1 if o_t = k and 0 otherwise. 3. Let bar{A} = {bar{a}_ij}, bar{B} = {bar{b}_i(k)}, and bar{pi} = {{bar{pi}_i}. 4. Set bar{lambda} to be {bar{A}, bar{B}, bar{pi}}. 5. If lambda = bar{lambda} then quit, else set lambda to be bar{lambda} and return to Step 2. Bayesian Network Algorithms The Bayesian network representation is shown in Figure 1. X_0 X_1 X_2 X_3 X_T-1 X_T o ----> o ----> o ----> o ... o ----> o | | | | | | | | | | | | v v v v v v o o o o ... o o O_0 O_1 O_2 O_3 O_T-1 O_T Fig. 1: Bayesian network representation for an HMM In the description of the Baum-Welch algorithm provided above, the computation of the expected sufficient statistics depends on computing the following term for all i and j in Omega_X. xi_t(i,j) = Pr( X_t=q_i, X_t+1=q_j | sigma, lambda) These computations in turn rely on computing the forward and backward variables (the alpha's and beta's). alpha_t(i) a_ij(o_t+1) beta_t+1(j) xi_t(i,j) = ------------------------------------- Pr(sigma | lambda) Generally, the forward and backward variables are computed using the forward-backward procedure which uses dynamic programming to compute the variables in time polynomial in |Omega_X|, |Omega_O|, and T. In the following paragraphs, we show how the xi's can be computed using standard Bayesian network inference algorithms in the same big-Oh complexity. One advantage of this approach is that it extends easily to the case in which the hidden part of the model is factored into some number of state variables. In the network shown in Figure 1 the O_t's are known. In particular, we have that O_i=o_i. If we assign the O_t's accordingly, use the probabilities indicated by lambda, and apply a standard Bayesian network inference algorithm, we obtain for every X_t the posterior distribution Pr(X_t|sigma,lambda). This isn't exactly what we need since X_t and X_t+1 are clearly not independent. If they were independent, then we could obtain Pr(X_t,X_t+1|sigma,lambda) from the product of Pr(X_t|sigma,lambda) and Pr(X_t+1|sigma,lambda). There are a number of remedies but one approach which is graphically intuitive involves adding a new state variable (X_t,X_t+1) which is the obvious deterministic function X_t and X_t+1. This addition results in the network shown in Figure 2. X_0,X_1 X_1,X_2 X_2,X_3 ... X_T-1,X_T o o o o o / ^ / ^ / ^ ^ / ^ / | / | / | | / | / | / | / | | / | o ----> o ----> o ----> o ... o ----> o | | | | | | | | | | | | v v v v v v o o o o ... o o O_0 O_1 O_2 O_3 O_T-1 O_T Fig. 2: Bayesian network with joint variables, (X_t,X_t+1) If we update the network in Figure 2 with O_t=o_t then we obtain Pr((X_t,X_t+1)|sigma,lambda) directly. It should be clear that we can eliminate the X_t (except for X_0) to obtain the singly connected network shown in Figure 3 which can be updated in time polynomial in |Omega_X|, |Omega_O|, and T. X_0 X_0,X_1 X_1,X_2 X_2,X_3 ... X_T-1,X_T o ----> o ----> o ----> o o ----> o | | | | | | | | | | | | v v v v v v o o o o ... o o O_0 O_1 O_2 O_3 O_T-1 O_T Fig. 3: Bayesian network with X_t's eliminated The extension to HMMs with factored state spaces (e.g., see Figure 4) is graphically straightforward. The computational picture is more complicated and depends on the specifics of the update algorithm. It is important to point out, however, that there is a wide range of update algorithms, both approximate and exact, to choose from. X_1 o ----> o ----> o ----> o ... o ----> o \ \ \ \ \ \ \ \ \ \ \ \ X_2 o ----> o ----> o ----> o ... o ----> o | | | | | | | | | | | | v v v v v v O o o o o ... o o Fig. 4: Bayesian network representation for an HMM with factored state space Omega_X = Omega_X_1 times Omega_X_2. The state variable is two dimensional X_t = X_{1,t},X_{2,t}. See [Rabiner & Juang, 1986] and [Rabiner, 1989] for a general introduction and applications in speech. See [Charniak, 1993] for applications in natural language processing including part of speech tagging. Charniak [1993] provides lots of examples that provide useful insight. See [Rabiner, 1989] and [Fraser & Dimitriadis, 1994] for details regarding numerical issues that arise in implementing the above algorithm. Rabiner and Juang [1986] also discuss variant algorithms for continuous observation spaces using multivariate Gaussian models. Back to Tutorial
{"url":"http://cs.brown.edu/research/ai/dynamics/tutorial/Documents/HiddenMarkovModels.html","timestamp":"2014-04-16T16:06:42Z","content_type":null,"content_length":"14732","record_id":"<urn:uuid:64eb0f75-0fd5-433e-ab0b-3870ee2f3d20>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Fairview Village Algebra Tutor Find a Fairview Village Algebra Tutor ...I use multi-sensory techniques for teaching reading, writing, and math, and also use student interests to begin work in an area of difficulty. In May 2010, I completed my Master's degree in both special education and early childhood education. My training in Orton-Gillingham helps me teach students that are struggling with phonics in both their reading and writing. 20 Subjects: including algebra 1, reading, geometry, dyslexia ...When I was in college, I was the editor of a biweekly magazine for two years: because I went to an engineering school, many of my writers didn't have some of the basics down. So, before every issue, I would work one-on-one with each of my writers to introduce them to new writing techniques and work to rewrite their articles to prepare for print. I'm generally a math and writing nerd. 25 Subjects: including algebra 2, algebra 1, chemistry, writing ...My name is Katie, and I am a 2014 high school graduate. I attended Merion Mercy Academy and graduated top of my class. During my senior year I attended Saint Joseph's University as a part-time student studying physics and economics. 42 Subjects: including algebra 2, algebra 1, reading, calculus ...I took AP Calculus in HS but have not dabbled in mathematics or the sciences much since HS and undergrad. Additionally, I do private baseball/softball instruction as well as fitness training (not a certified strength and fitness coordinator). I look forward to the opportunity to improve your ch... 27 Subjects: including algebra 1, reading, writing, grammar ...I have a BS in Chemistry from University of Bucharst, Romania, as well as MS in Chemistry from Long Island University, where I have succesfully studied Inorganic Chemistry, Organic Chemistry, Physical Chemistry, Analytical Chemistry. I have been a certified teacher in Romania, where I have been ... 7 Subjects: including algebra 1, chemistry, geometry, organic chemistry
{"url":"http://www.purplemath.com/fairview_village_algebra_tutors.php","timestamp":"2014-04-20T19:31:24Z","content_type":null,"content_length":"24339","record_id":"<urn:uuid:ca2a55eb-4df6-414c-9d8a-bf21eefdde7a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving IVP August 27th 2012, 04:22 PM #1 Junior Member Dec 2010 Solving IVP The birth rate of B(t) of a population P(t) decreases exponentially with time, so that B(t)=B0e^at where a,B>0. Therefore the population dynamics are governed by the differential equation: dP/dt=B(t)P where P(0)=P0 Solve the above initial value problem to find an expression for P(t) in terms of P0,a,B. Use this expression to deduce the behavior of P(t) as t->infinity. **I know people would like to advice me on the steps to take, but I would understand alot better if I was shown what the steps were, so I can understand the steps on my own** Re: Solving IVP The birth rate of B(t) of a population P(t) decreases exponentially with time, so that B(t)=B0e^at where a,B>0. Therefore the population dynamics are governed by the differential equation: dP/dt=B(t)P where P(0)=P0 Solve the above initial value problem to find an expression for P(t) in terms of P0,a,B. Use this expression to deduce the behavior of P(t) as t->infinity. **I know people would like to advice me on the steps to take, but I would understand alot better if I was shown what the steps were, so I can understand the steps on my own** The equation is separable, so divide both sides by P, then integrate. Re: Solving IVP what is the resulting expression after integration?? Re: Solving IVP Why don't you tell me? August 27th 2012, 06:35 PM #2 August 28th 2012, 05:55 AM #3 Junior Member Dec 2010 August 28th 2012, 05:59 AM #4
{"url":"http://mathhelpforum.com/differential-equations/202620-solving-ivp.html","timestamp":"2014-04-18T13:42:25Z","content_type":null,"content_length":"39517","record_id":"<urn:uuid:c5c4fd94-4bf6-450c-9a00-335b8f550ea6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Holiday Hills, IL Geometry Tutor Find a Holiday Hills, IL Geometry Tutor ...I am currently refurbishing my Spanish skills, but am confident in my ability to teach elementary and intermediate Spanish. I can also teach general translation techniques for any length of text and travel crash-courses. While abroad, I participated as a conversational English tutor through KOSMIC, Keio University's international language group. 27 Subjects: including geometry, English, reading, Spanish Hi! My name is Carolyn, and I taught math and physics in Wisconsin for ten years. Currently, I am certified to teach math in both Wisconsin and Illinois, and I do substitute teaching at about ten different schools in both Illinois and Wisconsin. 22 Subjects: including geometry, physics, statistics, accounting ...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only a few sessions. 26 Subjects: including geometry, Spanish, chemistry, special needs ...For 2 years I taught 7th grade geometry. I taught in Carpentersville for District 300 for 2 years and am now in my third year in Round Lake. I have also worked at Huntington Learning Center for 1 year as a math tutor. 18 Subjects: including geometry, reading, writing, algebra 1 ...My highest ACT Math score was 35 and the lowest was 30. I am changing my major to secondary education in mathematics because I love helping others. I have many friends who have troubles getting through a math class successfully. 6 Subjects: including geometry, algebra 1, algebra 2, precalculus Related Holiday Hills, IL Tutors Holiday Hills, IL Accounting Tutors Holiday Hills, IL ACT Tutors Holiday Hills, IL Algebra Tutors Holiday Hills, IL Algebra 2 Tutors Holiday Hills, IL Calculus Tutors Holiday Hills, IL Geometry Tutors Holiday Hills, IL Math Tutors Holiday Hills, IL Prealgebra Tutors Holiday Hills, IL Precalculus Tutors Holiday Hills, IL SAT Tutors Holiday Hills, IL SAT Math Tutors Holiday Hills, IL Science Tutors Holiday Hills, IL Statistics Tutors Holiday Hills, IL Trigonometry Tutors Nearby Cities With geometry Tutor Deer Park, IL geometry Tutors Fox River Grove geometry Tutors Ingleside, IL geometry Tutors Island Lake geometry Tutors Lakemoor, IL geometry Tutors Mccullom Lake, IL geometry Tutors Mchenry, IL geometry Tutors North Barrington, IL geometry Tutors Oakwood Hills, IL geometry Tutors Prairie Grove, IL geometry Tutors Ringwood, IL geometry Tutors Round Lake Heights, IL geometry Tutors Third Lake, IL geometry Tutors Volo, IL geometry Tutors Wauconda, IL geometry Tutors
{"url":"http://www.purplemath.com/Holiday_Hills_IL_Geometry_tutors.php","timestamp":"2014-04-19T06:55:03Z","content_type":null,"content_length":"24239","record_id":"<urn:uuid:65ccd8ff-95b2-49d6-83bf-91f800ddb687>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
West Orange Science Tutor Find a West Orange Science Tutor ...Recruiters at the busy Fort Dix were impressed with the record speed the test was taken with. With the training I have done since, I can easily and quickly upgrade your score by a significant percentage, by engaging you in my specialized course, boosting your critical thinking over short periods... 5 Subjects: including philosophy, ASVAB, algebra 1, chess ...I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between. I also have experience teaching to the SAT and ACT, and my scores on these exams reflect my understanding and ability to convey not only the test subjects, but test strategies as well. I scored 800 on the Math section of the SAT, and a 34 for my overall ACT score. 22 Subjects: including physics, chemistry, biology, anatomy ...I have extensive experience tutoring students ranging from 6th grade to graduate level, and a strong foundation in math, biology, chemistry, and physics at the undergrad level. I understand that every student is different and that strategies must be adjusted accordingly. I engage my students in... 24 Subjects: including physics, algebra 2, biology, chemistry ...Throughout college, I worked as both an English tutor and a chemistry tutor for college students, and gained experience with a wide variety of learning types. As a recent student myself, I understand many of the issues that even good students face when studying science and English. I also have experience tutoring SAT prep, and offer tutoring in all 3 SAT areas. 17 Subjects: including chemistry, reading, writing, English ...My knowledge and teaching style have helped many students enhanced their academic performance. As a NJ certified chemistry teacher, I will help you succeed in chemistry tests throughout your school years. I also teach Chinese. 11 Subjects: including organic chemistry, chemistry, calculus, linear algebra
{"url":"http://www.purplemath.com/west_orange_science_tutors.php","timestamp":"2014-04-21T10:52:40Z","content_type":null,"content_length":"24081","record_id":"<urn:uuid:e862dac5-52f8-4f56-932b-bb147af3bdda>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Issaquah Geometry Tutor Find a Issaquah Geometry Tutor ...It is one of my favorites. If you need help getting it to do what you need, let me know. I can help! 46 Subjects: including geometry, English, reading, algebra 1 ...I eventually majored in mathematics at Rice University, where I was an academic fellow, tutoring students in college-level mathematics, among other subjects. When tutoring math, my goal is not just to help the student with whatever specific assignment he or she is working on at the time, but to ... 35 Subjects: including geometry, English, reading, writing ...I hope to provide the same benefit to any student who is looking to excel.I am competent in algebra concepts and problem-solving techniques and would be happy to help your student gain a greater understanding of them. I took a Conceptual Physics class and an AP Physics class in high school, and ... 21 Subjects: including geometry, reading, algebra 1, English ...Learning terms and their definitions is a critical step in understanding biology. Not only are many classes centered around definitions, but by knowing what things are called and why they are so named, a student often immediately understands the overall concept. Beyond terminology, I focus on how key processes build upon one another to facilitate life. 22 Subjects: including geometry, chemistry, reading, English ...I have been a musician from a very young age, having played clarinet in bands and orchestras from 4th grade through adulthood. I have had several years of classical training in piano with two of those at University of Puget Sound and Oregon State. I am an excellent sight reader and have been paid as an accompanist and have training in music theory. 43 Subjects: including geometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/issaquah_wa_geometry_tutors.php","timestamp":"2014-04-21T04:38:10Z","content_type":null,"content_length":"23740","record_id":"<urn:uuid:e78e2142-c497-4948-93d5-5a67173fd2be>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
The mysteries of Venn diagrams Above, a Venn diagram I made visually depicting every combination of Dwarf from the fairy tale. Not included: Snow White. The other day I was tying to make some multi-set Venn diagrams with polar symmetry, which it turns out is harder than you'd think and has ties with prime number theory. It's an obscure but important area of combinatorics. Everybody knows a 3-set one (n=3), and I made a 5-set one with ellipses (pictured to the right). After that, I was stumped. That led me to a great web section maintained by Professors Frank Ruskey and Mark Weston. The Dwarf diagram at the top is based on that work, with Sleepy a little more opaque so you can see the shape. Ruskey and Weston display lots of lovely diagrams, including the elusive n=7 minimum vertex Venn diagram and the remarkable n=11 Venn diagram. Andrea James is a writer, director, producer and activist based in Los Angeles. Her work often focuses on consumer activism, the free culture movement, exogenous mysticism, humor, and LGBT rights. 14 Responses to “The mysteries of Venn diagrams” 1. skeletoncityrepeater says: This Venn diagram describes who has slept with whom, including any combination thereof. Snow White’s name has been removed from the center, in another Disney lawsuit. 2. Anonymous says: I was taught its never a complete Venn diagram until you show the universal set… 3. maxoid says: the n=11 diagram is a beautiful piece of work, truly. one of those things where, at first glance, you know it’s not random, but it takes a lot of careful study to figure out exactly what it means. and from such a simple question! 4. Jonathan Badger says: The biostatistican A.W.F Edwards wrote a fascinating (yes, really) book on multi-set Venn diagrams called “Cogwheels of the Mind” 5. bkad says: You are cool, Andrea. 6. Anonymous says: You forgot Trippy 7. adamkrasowski says: 9. valdis says: When their numbers had been reduced from 50 to 8, the other dwarves began to suspect Hungry… 10. zikman says: this blows my mind in ways I didn’t even know it could be blown 11. ill lich says: I always preferred Zen diagrams, which consist of a single circle and nothing written inside. 12. Phikus says: …but when you mix all colors on the dwarven wheel, you are supposed to end up with (Snow) White… 13. DJBudSonic says: I’m not sure I get it. Isn’t the Venn diagram used to illustrate overlapping sets? As the Seven Dwarfs don’t have common parts why would they be diagrammed in this way? Many moons ago I did a set of diagrams for an annual report that I was told turned out to be Venn diagrams (I had never heard of it before). They were illustrating the overlap of various engine product markets, ie: a “TYPE 2″ engine served the Marine and Truck Market, but not the Generator Market, while a “Type 3″ served the Generator and Truck Market, etc. As I recall most were two sets creating a third (the overlap). I guess I need to learn more, ’cause they do look cool. Thanks for the article Boing Boing – I’m gonna go dig up my Tufte books and get to the bottom of this… □ Andrea James says: You can have a set of 1. This Venn diagram illustrates all possible combinations of 7 sets, each of which happens to contain 1 item (a Dwarf) in this case. Let’s say this is a chart of the Dwarfs whom Snow White likes. If Snow White likes all the Dwarfs except Grumpy, she could show where that combination exists on this diagram. If she likes Sleepy, Doc, and Bashful but no others, she can point to where that combination exists on this diagram. Same for every other possible combination of Dwarfs. Or it could be a chart of who has the day off. Every combination of Dwarfs available to work is represented on the diagram.
{"url":"http://boingboing.net/2010/01/04/the-mysteries-of-ven.html","timestamp":"2014-04-21T14:55:42Z","content_type":null,"content_length":"58656","record_id":"<urn:uuid:2e702c19-8d0e-4896-9c96-833a1a534d70>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: color definitions Eric Gauthier (eric@h61-24.centre.edu) Sat, 23 Sep 1995 15:11:45 -0400 (EDT) the first thing to say is that the html hex colors are listed in <RGB>. The trick to it is that the numbers listed are hexadecimal. The red,green and blue components are listed on a scale of intensity ranging from zero to 255. A quick lesson on HEX: 00 = the number zero. 09 = the number 9 0a = the number 10. ff = the number 255. Basically instead of having 9 digits you have 16 digits. Anyways, I'm sure you can find a book on hex numbers. So, the first two digits are the value for the red, the next are for the green and the last set is for the blue. The <CMY> color scheme is almost the same as <RGB>. In this numbering scheme: C = 255 - R M = 255 - G Y = 255 - B R = 255 - C G = 255 - M B = 255 - Y The <HSV> (also called the <HSB>) model is really different from these two. A good standard text to look in is: Computer Graphics by Foley, van Dam, Feiner, and Hughes (by Addison-Wesley Pub. Co). The second edition chapter 13.3.4. The <HSV> model is complicated. If you'd like to know more detail about it and cannot find the book mail me at: eric@gauthier.centre.edu Eric Gauthier
{"url":"http://1997.webhistory.org/www.lists/www-html.1995q3/0636.html","timestamp":"2014-04-17T21:30:27Z","content_type":null,"content_length":"2825","record_id":"<urn:uuid:c36482e7-631f-46f1-b99f-119e656e1e30>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
FW: RE [Haskell-cafe] Monad Description For Imperative Programmer Dan Weston westondan at imageworks.com Wed Aug 1 19:05:03 EDT 2007 I knew someone was going to catch me wandering into the deep end of the Having read large parts of your blog, I would never presume to tell you anything about Haskell or category theory, but what the hell... > I mostly sympathise with your rant, but I think you need to be clearer > about what exactly is concatenated. In general you can't concatenate > Monads. What you *can* concatenate are Kleisli arrows (ie. things of > type Monad m => a -> m b). You can also apply Kleisli arrows to > Monads, and that's what >>= does. > I feel that talking about Monads without Kleisli arrows is like > talking about category theory without arrows, or at least sets without > functions. In each case, without the latter, the former is more or > less useless. OK, I'll be clearer. I did actually mean Kleisli arrows, though I disagree about your statement about concatenating monad instances and claims of "useless": Prelude> print "Hello" >> return 3 Granted, >> is not as "general" as >>=, so combining one monad instance with another is not as "general" as with a Kleisli arrow: concatenating degenerates to simple sequencing. Sequencing print statements is more rather than less useless to many people, but I see your point. Actually, you have made my point! :) The forgetful action of Kleisli arrows acting on a monad (or conversely the free algebra of a monad as a subspace of Kleisli arrows) is to my understanding intimately connected with the specialness of the IO monad. It is the continuous nature of Haskell monads that gives non-IO monads value. So I guess my rant really was about Kleisli arrows not all being forgetful functors, used only for their sequencing effect. It just sounded too hard to pull that argument off without reinforcing the myth that you need to know category theory to have a rant about Haskell > Also, I'm having a terminological difficulty that maybe someone can help with: > 'Monad' is a type class. Actually I thought it was a type class constructor. The monad Monad m => m a is continuous in its instance type a, which is important in establishing the relationship between >>= and >>. The Haskell type 'IO ()' is a monad instance that is also isomorphic to the discrete trivial monad, but that is not a Haskell Monad capital-M. I used the term "instance" because the type IO () is an instance of the typeclass IO, not for any more profound reason. Forgive the display of wanton ignorance above. After all, isn't that what ranting is all about? Dan Weston Dan Piponi wrote: > On 8/1/07, Dan Weston <westondan at imageworks.com> wrote: >> The moral of the story is that monads are less than meets the eye. You >> can create them and concatenate them > I mostly sympathise with your rant, but I think you need to be clearer > about what exactly is concatenated. In general you can't concatenate > Monads. What you *can* concatenate are Kleisli arrows (ie. things of > type Monad m => a -> m b). You can also apply Kleisli arrows to > Monads, and that's what >>= does. > I feel that talking about Monads without Kleisli arrows is like > talking about category theory without arrows, or at least sets without > functions. In each case, without the latter, the former is more or > less useless. > Also, I'm having a terminological difficulty that maybe someone can help with: > 'Monad' is a type class. > So what's 'IO'? Is the correct terminology 'instance' as in 'IO is an > instance of Monad'. I consider 'IO' to be 'a monad' as that fits with > mathematical terminology. But what about an actual object of type 'IO > Int', say? Some people have been loosely calling such an object 'a > monad'. That doesn't seem quite right. Maybe it's 'an instance of IO > Int', though that's stretching the word 'instance' to meaning two > different things. And if an object of type IO Int is in instance of IO > Int, is it reasonable to also call it an 'instance of IO', or even 'an > instance of Monad'? I'm sure there are proper words for all these > things if someone fills me in. > -- > Dan > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-August/029857.html","timestamp":"2014-04-16T04:50:29Z","content_type":null,"content_length":"7827","record_id":"<urn:uuid:4eca66eb-6519-4617-b49f-2cdf0097de97>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Stacks Project Blog Non zero-divisors Following the example of xkcd I somtimes try to figure out what is the correct terminology by searching different spellings and observing the number of hits. I did this for variants on the phrase in the title but I didn’t find the results convincing. (Google thinks of “-” and ” ” both as whitespace.) “non zero divisor” 8,630 results “non zero-divisor” 7,490 results “non zerodivisor” 9,190 results “nonzero divisor” 1,900 results “nonzerodivisor” 9,560 results “non zero divisor” 5K results “non zero-divisor” 4K results “non zerodivisor” 5K results “nonzero divisor” 2K results “nonzerodivisor” 61 results “non zero divisor” 75,300 results “non zero-divisor” 75,300 results “non zerodivisor” no results found “nonzero divisor” 6,490 results “nonzerodivisor” 4,120 results Yuhao Huang emailed to say he prefers “non zero-divisor”. I guess that is better and I’ll probably make a global change in the stacks project later today. Any objections or suggestions? Update (3PM): I’ve decided to go with Jason’s suggestion, see here for changes. 7 thoughts on “Non zero-divisors” 1. nonzerodivisor 2. I get 53,900 results on Google for “non-zerodivisor”, my preferred choice. Google recommends “Did you mean ‘non-zero divisor?’, which is just wrong. □ I get only 3,470 results for “non-zerodivisor”. Are you sure you did your search correctlY? When google recommends something you have to click on the link: non-zerodivisor to get the actual search results! ☆ Yes, I’m sure. It didn’t say “showing results instead for…”, but “did you mean…”. I didn’t use quote marks, which I guess is the difference — when I add them in (and then click on ‘search instead for…’), I get the 3470 you get. Google’s treatment of quotes has gotten really confusing since they killed the ‘+’ operator. In any case, here’s my reasoning: I like a hyphen after ‘non’ in all contexts: non-singular, non-free locus, non-maximal, etc. Double hyphens, as in ‘non-zero-divisor’, is ambiguous, in that it could be ‘non-zero divisor’. So I go with non-zerodivisor. ○ OK, that makes sense. I’ve now decided to go with Jason’s suggestion because I like “nonsingular”, “nonnegative”, etc without the hyphen. But I’m sure I’ve been inconsistent with this. Thanks!!! (And sorry for doubting your google skills!) □ Well, when I started doing commutative algebra, I certainly learned that the correct hyphenation is: “zero-divisor” and “non-zero divisor” even though the second is very illogical. I remember this very clearly as this jarred my brain every time I wrote it. My impression is that most authors (used to) follow this “convention”. Perhaps there is a general rule in English that in any word with 2 or more hyphens, like “non-zero-divisor”, one keeps the first? Certainly, people nowadays, e.g., Eisenbud I think, write “nonzerodivisor” to avoid this conundrum. I think there are other examples of “triple words” in mathematics where I have learned that the correct hyphenation is “A-B C” even when it is illogical. I was almost sure that Atiyah-MacDonald and Matsumura would use “non-zero divisor” but they actually both use “non-zero-divisor” as far as I can see! 3. OK, so I think the real question to ask is whether it is “zerodivisor” or “zero-divisor” or “zero divisor”? Once we have answered this question we just put “non” in front. I think the problem comes from the fact that it seems more correct to write “zero divisor” but “nonzero divisor” is obviously wrong (because non should modify the whole thing and not just zero). Also, apparently in American English one should just add non without a hyphen. So it is starting to look like Jason’s suggestion is the best, because I don’t like the looks of “nonzero-divisor” (because it seems again as if non is only modifying zero). Alternatively, Bhargav suggests using “regular element” everywhere…
{"url":"http://math.columbia.edu/~dejong/wordpress/?p=2380&cpage=1","timestamp":"2014-04-20T20:54:57Z","content_type":null,"content_length":"22333","record_id":"<urn:uuid:29eed274-24ed-4c1c-89ab-75acb41873d2>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
A question about primitive recursive functions up vote 4 down vote favorite I have a question about primitive recursive functions. Maybe it's trivial, if it is I will move it into math.stackexchange. Is there a primitive recursive function $f$ which is a bijection of $N$ onto $N$ such that $f^{-1}$ is not primitive recursive ? Exercise 5.6 in this book claims that bijective primitive functions are a group, i.e. such a function $f$ exists: books.google.co.il/… – Denis Apr 18 '13 at 13:39 2 DK, you mean to say that they are not a group. Frank, the inverse of Ackermann is primitive recursive, but this is not a bijection. But you can fix it up via the even/odd trick as in my argument and also as in DK's link (and those arguments are fundamentally similar). – Joel David Hamkins Apr 18 '13 at 14:03 add comment 2 Answers active oldest votes The answer is yes. First, let $g$ be a total computable function whose rate of growth is too fast for it to be primitive recursive, such as the diagonal Ackermann function. Now, define $f(k)=2n$, if $k$ is the number coding up (in some canonical way) the computation of $g(n)$. That is, $k$ should encode a list of the entire computation sequence for $g(n)$, including snapshots of the configuration of each stage of computation, what is on the tape, where the head is, the state and so on. Now, for numbers $k'$ that are not codes of computations, we let $f(k')$ be the smallest odd number not yet used. Thus, we have a bijection $f:\mathbb{N}\to\mathbb{N}$. up vote 9 Furthermore, $f$ is primitive recursive, because for a given $k$, we can bound the length of time it takes to compute $f(k)$---the algorithm need only unpack $k$ and verify whether it is down vote a proper code or not, and then do some easy computations on the side. Meanwhile, the inverse function is not primitive recursive. The point here is that $k$ is far larger than $n$. We cannot get from $n$ or $2n$ to a code $k$ for the computation of $g(n)$, because we assumed that the growth rate of $g$ was too high for it to be primitive recursive. add comment More concisely, it is not far from the truth to say that the purpose of (adding) the minimalisation operation for general recursion is to define inverse functions. That this is up vote 3 down Difficult is put to practical use in many methods of encryption. add comment Not the answer you're looking for? Browse other questions tagged computability-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/127961/a-question-about-primitive-recursive-functions/127964","timestamp":"2014-04-18T21:54:25Z","content_type":null,"content_length":"56002","record_id":"<urn:uuid:3efcd943-b89b-46aa-86b2-a3e868ca01bd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
This paper has two parts. Part one is mainly intended as a general introduction to the problem of sectioning vector bundles (in particular tangent bundles of smooth manifolds) by everywhere linearly independent sections, giving a survey of some ideas, methods and results.\par Part two then records some recent progress in sectioning tangent bundles of several families of specific manifolds.
{"url":"http://www.dml.cz/handle/10338.dmlcz/701567","timestamp":"2014-04-21T07:03:48Z","content_type":null,"content_length":"11329","record_id":"<urn:uuid:6c7b5243-fe64-4355-90ad-981bf1bf5676>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning Center SmartTutor - Logic Disjunctive Addition Disjunctive Addition is a rule of inference pertaining to the OR operator. Disjunctive Addition adds any statement, true or false, to a given true statement. Let's consider a statement "The Moon revolves around the Earth". We know that this statement is true -- it is a proven fact. Now that we are given this true statement, we can add any other statement to it by applying Disjunctive Addition. This is how it is done formally: p: "The Moon revolves around the Earth." p v q: "The Moon revolves around the Earth or the Earth is larger than the Moon." The given statement p is above the line of dashes, and the new expression p v q formed by applying Disjunctive Addition is below the line. We can also add a statement known to be false to a given true statement: p: "The Moon revolves around the Earth." p v q: "The Moon revolves around the Earth or the Earth is smaller than the Moon." This is possible because, by definition, Disjunctive Addition can add any statement, true or false, to a given true statement. This is not the extent of the application of Disjunctive Addition. We can add absolutely any statement to a given true statement, even if there does not seem to be a connection between the p: "The Moon revolves around the Earth." p v q: "The Moon revolves around the Earth or smoking causes lung cancer." This is possible because of the inherent property of the OR operator: A disjunction is true if at least one of its statements is true. Therefore, p v q "The Moon revolves around the Earth or smoking causes lung cancer" is a true statement because we know for a fact that its first part is true -- the moon does revolve around the earth. Knowing that, we don't have to worry about the second part ("smoking causes lung cancer") -- regardless of whether it is true or not, it is not going to affect the entire statement p v q. In this case, the second part happens to be true -- smoking does cause lung cancer, but we might as well have picked any other, true or false, statement. Other examples of Disjunctive Addition A: "The water is cold." A v B: "The water is cold or the day is hot." X: "The painting is extraordinary." X v ~Y: "The painting is extraordinary or the artist is not talented." Links to Relevant Problems These are links to validity proof problems whose solutions contain Disjunctive Addition. 2-step problem
{"url":"http://lc.brooklyn.cuny.edu/smarttutor/logic/disadd.html","timestamp":"2014-04-19T11:58:10Z","content_type":null,"content_length":"9514","record_id":"<urn:uuid:0777921f-e84e-4eaf-b1f6-7cbb8b1e0a01>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Simple financial functions for NumPy Gael Varoquaux gael.varoquaux@normalesup.... Fri Apr 4 10:09:55 CDT 2008 On Fri, Apr 04, 2008 at 09:11:37AM -0500, Travis E. Oliphant wrote: > There are only two reasons that I can think of right now to keep them in > NumPy instead of moving them to SciPy. > 1) These are "basic" functions and a scipy toolkit would contain much more. > 2) These are widely used and would make NumPy attractive to a wider > audience who don't want to install all of SciPy just to get > these functions. > NumPy already contains functions that make it equivalent to a basic > scientific calculator, should it not also contain the functions that > make it equivalent to the same calculator when placed in "financial" mode? My concern is consistency. It is already pretty hard to define what goes in scipy and what goes in numpy, and I am not even mentioning code lying around in pylab. I really thing numpy should be as thin as possible, so that you can really say that it is only an array manipulation package. This will also make it easier to sell as a core package for developpers who do not care about "calculator" features. My 2 cents, More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/032427.html","timestamp":"2014-04-17T18:49:26Z","content_type":null,"content_length":"3846","record_id":"<urn:uuid:5743108b-1935-472b-9963-4ff8b53f0b99>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
Geek Explains: Java, J2EE, Oracle, Puzzles, and Problem Solving! Puzzle: A man has entered into a tunnel and has crossed only 1/4 of it while it hears a whistle from a train behind him. He turns and runs towards the train with the same speed and he could barely get out of tunnel before the train (running with a constant speed) could hit him at the entrance of the tunnel. Had he moved in the same direction with the same speed then also he could have crossed the tunnel before the train could have hit him at the exit of the tunnel. Assume that the speed of the man is uniform, he takes zero time to turn back, and instantly gets his speed while turning back. How faster is the train moving as compared to the man? Solution: Let the speed of the train be X and that of the man be Y. Let the length of the tunnel be T and the distance of the train from the entrance of the tunnel at the time the man turned back is Now, we can easily form two euqations with the given data. One, The man covered T/4 distance and the train covered E distance in the same time, hence E/X = (T/4)/Y => X/Y = E/(T/4) => X/Y = 4E/T ..... (i) Two, The man could have covered 3T/4 distance and the train could have covered E + T in the same time, hence (E+T)/X = (3T/4)/Y => X/Y = (E+T)/(3T/4) => X/Y = 4(E+T)/3T ..... (ii) Comparing (i) & (ii), we get 4E/T = 4(E+T)/3T => E = (E+T)/3 => 3E = E + T => T = 2E putting this value in equation (i), we get X/Y = 4E/2E => X/Y = 2 That means the train is running twice as fast as the man. No comments:
{"url":"http://geekexplains.blogspot.com/2008/05/train-tunnel-and-man-how-faster-is.html","timestamp":"2014-04-17T21:23:18Z","content_type":null,"content_length":"88786","record_id":"<urn:uuid:09e0b62a-f67f-49e1-b6b0-d39a3dba4117>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
on with title = {{Image Interpolation with Contour Stencils}}, author = {Getreuer, Pascal}, journal = {{Image Processing On Line}}, volume = {1}, year = {2011}, doi = {10.5201/ipol.2011.g_iics}, % if your bibliography style doesn't support doi fields: note = {\url{http://dx.doi.org/10.5201/ipol.2011.g_iics}} Pascal Getreuer, Image Interpolation with Contour Stencils, Image Processing On Line, 1 (2011). http://dx.doi.org/10.5201/ipol.2011.g_iics Communicated by François Malgouyres Demo edited by Pascal Getreuer Image interpolation is the problem of increasing the resolution of an image. Linear methods have traditionally been preferred, for example, the popular bilinear and bicubic interpolations are linear methods. However, a linear method must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. These artifacts cannot all be eliminated simultaneously while maintaining linearity. More recent works consider nonlinear methods, especially to improve interpolation of edges and textures. An important aspect of nonlinear interpolation is accurate estimation of edge orientations. For this purpose we apply contour stencils, a new method for estimating the image contours based on total variation along curves. This estimation is then used to construct a fast edge-adaptive Online Demo An online demo of this algorithm is available. Contour Stencils The idea in contour stencils is to estimate the image contours by measuring the total variation of the image along curves. Define the total variation (TV) along curve C where γ is a smooth parameterization of C. The quantity ||u||[TV(C)] can be used to estimate the image contours. If ||u||[TV(C)] is small, it suggests that C is close a contour. The contour stencils strategy is to estimate the image contours by testing the TV along a set of candidate curves, the curves with small ||u||[TV(C)] are then identified as approximate contours. Contour stencils are a discretization of TV along contours. As described in [4], [5], a ''contour stencil'' is a function v. Stencil is applied to v at pixel k ∈ Defining [v](m,n) := |v[m] − v[n]|, this quantity is (with an abuse of notation) a cross-correlation over (k,k). The stencil edges are used to approximate a curve C so that the quantity approximates ||u||[TV(C+k)] (where C + k := {x+k : x ∈ C}). The image contours are estimated by finding a stencil with small TV. The best-fitting stencil at pixel k is where Σ is a set of candidate stencils. It is possible that the minimizer is not unique, for example in a locally constant region of the image. For simplicity, we do not treat this situation specially and always choose a minimizer even if it is not unique. This best-fitting stencil provides a model of the image contours in the neighborhood of pixel k. The stencils used in this work are shown below. For the set of candidate stencils Σ, we use 8 line-shaped stencils that were designed to distinguish between the functions The edge weights α, β, δ, γ are selected so that on the function f(x) = x[1]sinθ − x[2]cosθ, In this way, the stencils can fairly distinguish 8 different orientations. An estimate of the local contour orientation at point k is obtained by noting which stencil is the best-fitting stencil . Normalized stencil total variations vs. θ. Left: The first three stencils, j = 0, 1, 2. Right: All eight stencils. For a color image, the image is converted from RGB to a luma+chroma space and the stencil TV is computed as the sum of applied to each color channel. Given image v known on u on where h is the (assumed known) point spread function and ∗ denotes convolution. The goal is to incorporate deconvolution yet maintain computational efficiency. To achieve this, the global operation of deconvolution is approximated as a local one, such that pixels only interact within a small window. Local Reconstructions For every pixel k in the input image, we begin by forming a local reconstruction where ⊂ is a Gaussian oriented with the contour modeled by the best-fitting stencil . The c[n] are chosen such that u[k] satisfies the discretization model locally, This condition implies that the c[n] satisfy the linear system where is a matrix with elements . By defining the functions u[k] can be expressed directly in terms of the samples of v, Global Reconstruction The u[k] are combined with overlapping windows to produce the interpolated image, The window should satisfy ∑[k] w(x − k) = 1 for all x ∈ w(k) = 0 for k ∈ . Iterative Refinement This global reconstruction satisfies the discretization model approximately, ↓(h ∗ u) ≈ v. The accuracy may be improved using the method of iterative refinement. Let denote the global reconstruction such that u = v (where we consider as fixed parameters so that is a linear operator). Then the deconvolution accuracy is improved by the iteration Each iteration should reduce the residual in satisfying the discretization model, The residual reduces quickly in practice, usually three or four iterations is sufficient for accurate results. The following parameters are fixed in the experiments: • h is a Gaussian with standard deviation 0.5, • w is the cubic B-spline, and three iterations of iterative refinement are applied (one initial interpolation and two correction passes). For sake of demonstration, the examples below use a PSF with a substantial amount of blur, σ[h] = 0.5. The default value for σ[h] is 0.35 in the online demo associated with this article, which better models the blurriness of typical images. The interpolation is computationally efficient. We first consider the complexity without iterative refinement. The matrices can be precomputed for each stencil ∈Σ, allowing the c[n] coefficients to be computed in 6 ^2 + 3 operations per (color) input pixel. Furthermore, since w has compact support, u only depends on the small number of u[k] where w(x − k) is nonzero. Let W be a bound on the number of nonzero terms, We suppose that W is O( ). Given the c[n], each evaluation of u(x) costs O( ^2) operations. So for factor-d scaling, the total computational cost is O( ^2d^2) operations per input pixel. For scaling by rational d, samples of w and can also be precomputed, and scaling costs 6 Wd^2 operations per input pixel. For the settings used in the examples, this is 864d^2 operations per input pixel. With iterative refinement, the previous cost is multiplied by the number of steps and there is the additional cost of computing the residual. If h is quickly decaying, then it is accurately approximated by an FIR filter with O(d^2) taps and the residual can be computed in O(d^2) operations per input pixel. This software is distributed under the terms of the simplified BSD license. Please see the readme.html file or the online documentation for details. Implementation notes: • Fixed-point arithmetic is used to accelerate the main computations. • For efficiency in the correction passes of iterative refinement, the u[k] for which |v[k]| is small are not added (so that they do not need to be computed), Here we perform an interpolation experiment to test the performance of the proposed interpolation strategy. First, a high-resolution image u[o] is smoothed and downsampled by factor 4 to obtain a coarsened image v = ↓(h ∗ u[o]) where h is a Gaussian with standard deviation 0.5 in units of input pixels, σ[h ] = 0.5. This amount of smoothing is somewhat weak anti-aliasing, so the input data is slightly aliased. The value of σ[h] should estimate the blurriness of the PSF used to sample the input image. It is better to underestimate σ[h] rather than overestimate: if σ[h] is smaller than the true standard deviation of the PSF, the result is merely blurrier, but using σ[h] slightly to large creates ripple artifacts. The method works well for 0 ≤ σ[h] ≤ 0.7. For σ[h] above 0.7, the method produces visible ringing artifacts (even if the true PSF used to sample the input image has standard deviation σ[h]). One could expect this effect, since there is no kind of regularization in the deconvolution. In the online demo, the default value for σ[h] is 0.35, which reasonably models the blurriness of typical Interpolation is then performed on v to produce u approximating the original image u[o]. The interpolation and the original image are compared with the peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM) metrics (How are these computed?). Image Quality Metrics L^p Metrics Let A and B be two color images to be compared, each with N pixels. We consider the images as vectors in {0, 1, …, 255}. Several standard metrics can then be defined in terms of Maximum absolute difference Mean squared error (MSE) Root mean squared error (RMSE) Peak signal-to-noise ratio (PSNR) For the first three metrics, a smaller value implies a smaller discrepancy between A and B. For PSNR, a larger value implies a smaller discrepancy, with PSNR = ∞ when A = B. The mean structural similarity (MSSIM) index is a somewhat more complicated metric designed to agree better with perceptual image quality. We first describe MSSIM on grayscale images. Let w be a Gaussian filter with standard deviation 1.5 pixels, and compute the following local statistics: At every pixel, the structural similarity (SSIM) index is calculated as where C[1] = (0.01 ⋅ 255)^2 and C[2] = (0.03 ⋅ 255)^2. The mean SSIM (MSSIM) is the average SSIM value over the image. For color images, we compute the MSSIM over each channel and take the average, The MSSIM index is always between 0 and 1. A larger value implies smaller discrepancy. Computation Time The computation time shown in the demo is computed using the UNIX gettimeofday function to obtain the system time in units of nanoseconds. Note that the computation is affected by other tasks running simultaneously on the server, so the reported computation time is only a rough estimate. Comparison with Other Methods Original Image (332×300) Input Image (83×75) Estimated Contour Orientations Contour Stencil Interpolation PSNR 25.77, MSSIM 0.7165, CPU time 0.109s The following table shows the convergence of the residual r^i = v − ↓(h ∗ u) where the image intensity range is [0,1]. Iteration i ||r^i||[∞] 1 0.05409007 2 0.01677390 3 0.00661765 For comparison, the same experiment is performed with standard bicubic interpolation, Muresan's AQua-2 edge-directed interpolation [2], Genuine Fractals fractal zooming [6], Fourier zero-padding with deconvolution, Malgouyres' TV minimization [1], and Roussos and Maragos' tensor-driven diffusion [3]. The first three of these methods do not take advantage of knowledge about the point spread function, while the later three do (notice their sharper appearance). Bicubic AQua-2 [2] PSNR 24.36, MSSIM 0.6311, CPU time 0.012s PSNR 23.97, MSSIM 0.6062, CPU time 0.016s Fractal Zooming [6] Fourier Zero-Padding with Deconvolution PSNR 24.50, MSSIM 0.6317 PSNR 25.70, MSSIM 0.7104, CPU time 0.049s TV Minimization [1] Tensor-Driven Diffusion [3] PSNR 25.87, MSSIM 0.7181, CPU Time 2.72s PSNR 26.00, MSSIM 0.7297, CPU Time 5.11s The contour stencil interpolation has good quality similar to tensor-driven diffusion but with an order magnitude lower computation time. Geometric Features The following experiment on a synthetic image tests the method's ability to handle different geometric features. Original Image (320×240) Input Image (80×60) Estimated Contour Orientations Contour Stencil Interpolation PSNR 21.23, MSSIM 0.8548, CPU time 0.078s Because the method is sensitive to the image contours, oriented textures like hair can be reconstructed to some extent. Interpolation of rough textures with turbulent contours is less successful. Original Image (392×304) Original Image (332×304) Input Image (98×76) Input Image (83×76) Contour Stencil Interpolation Contour Stencil Interpolation PSNR 33.24, MSSIM 0.7762, CPU time 0.129s PSNR 22.47, MSSIM 0.6051, CPU time 0.115s Noisy Images A limitation of the method is the design assumption that noise in the input image is negligible. If noise is present, it is amplified by the deconvolution. The sensitivity to noise increases with the PSF standard deviation σ[h], which controls the deconvolution strength. Similarly, if σ[h] is larger than the standard deviation of the true PSF that sampled the image, then the method produces significant oscillation artifacts because the deconvolution exaggerates the high frequencies. The top row shows the input images and the bottom row shows their interpolations. Clean Input JPEG Compressed Quantized Colors Contour Stencil Interpolation Contour Stencil Interpolation Contour Stencil Interpolation PSNR 26.48, MSSIM 0.8196 PSNR 20.38, MSSIM 0.5244 PSNR 18.30, MSSIM 0.3393 This material is based upon work supported by the National Science Foundation under Award No. DMS-1004694. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Work partially supported by the MISS project of Centre National d'Etudes Spatiales, the Office of Naval Research under grant N00014-97-1-0839 and by the European Research Council, advanced grant “Twelve labours.”
{"url":"http://www.ipol.im/pub/art/2011/g_iics/","timestamp":"2014-04-20T20:55:49Z","content_type":null,"content_length":"48764","record_id":"<urn:uuid:15f82c00-bdc9-43c8-9151-ed707860227f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Stuggling with a rational exponent. March 5th 2009, 08:26 PM #1 Junior Member Jan 2009 Stuggling with a rational exponent. I am trying to figure out what $(-3)^\frac{2}{3}$ is. I calculated it, possibly incorrectly as $\approx{2.08}$ by $\sqrt[3]{(-3)^2}$. I would calculate $-\sqrt[3]{3}, as \approx{-1.4422}$ then square that to get $\approx{2.08}$ All the maths applications I have calculate it as -1.0400 + 1.8014i. Which is $(\sqrt[3]{3}[\frac{1+i\sqrt{3}}{2}])^2$. This complex fraction is courtesy of my HP. My questions are twofold. Why if cube roots are defined for all values on the number line do the calculators, and matlab produce an answer with an imaginary part? I thought that all even rational exponents produce positive real results, and not imaginary results? If the imaginary result above is correct, and I can trust the calculator, how on earth did it get to that complex fraction? I have only used matlab once before (in the past week). I don't know if this works for only integers but have you tried the "nthroot" function? I found matlab gave an imaginary solution to a cube root as well rather than the real root, strangely. Right answer? I know, it's very strange. Would you also calculate it as $\approx{+2.08}$ Yes that is the real cube root, correct. Matlab has given you a correct complex solution but it may not be much use in many cases! March 5th 2009, 09:57 PM #2 Dec 2008 Auckland, New Zealand March 5th 2009, 10:01 PM #3 Junior Member Jan 2009 March 5th 2009, 10:11 PM #4 Dec 2008 Auckland, New Zealand
{"url":"http://mathhelpforum.com/pre-calculus/77180-stuggling-rational-exponent.html","timestamp":"2014-04-19T19:50:15Z","content_type":null,"content_length":"37593","record_id":"<urn:uuid:d8c3e437-bf82-45de-b90f-9fc7bda214fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Author: H. D. Vinod, Fordham University, New York Dates: Noted in the software itself All The code on this page is provided gratis without any guarantees or warrantees. Part A has GAUSS software code and Part B has some math typing tricks in MS-Word Proprietary modifications of this code are not permitted. Please make appropriate attribution if you use the code in a research project. PART A: R code #it is a good idea to clean out old objects from R memory and record the date #__________________________ Cut here ____________________________ objects() # these objects are already in memory rm(list=ls()) #this cleans them ls() #this lists what is left options(prompt="R>") #this changes the prompt print(paste("Following executed on", date())) #__________________________ Cut here ____________________________ # I have written the following function to get outliers automatically #First copy and paste all lines of the following “function” in R get.outliers = function(x) { #this left curly brace begins function #function to compute the number of outliers automatically #author H. D. Vinod, Fordham university, New York, 24 March, 2006 #revised April 16, 2006 # input a column vector of values, # output: various quantities used in outlier detection # such as interquartile range, limits and # xnew= revised vector after outliers are deleted xnew=x #initialize the xnew found after removal of outliers if (ncol(as.matrix(x))>1) {print("Error: input to get.outliers function has 2 or more columns") iqr=su[5]-su[2]#inter quartile range dn=su[2]-1.5*iqr #dn denotes lower limit for outlier detection LO=x[x<dn]#vector of values below the lower limit print(c(" Q1-1.5*(inter quartile range)=", as.vector(dn),"number of outliers below it are=",as.vector(nLO)),quote=F) if (nLO>0){ print(c("Actual values below the lower limit are:", LO),quote=F) print(c(“sequence number of outlier(s) for possible deletion are:”, or[x<dn]),quote=F) } #this right curly brace ends the if statement print(c(" Q3+1.5*(inter quartile range)=", as.vector(up)," number of outliers above it are=",as.vector(nUP)),quote=F) if (nUP>0){ print(c("Actual values above the upper limit are:", UP),quote=F) print(c(“sequence number(s) of outlier(s) for possible deletion are:”, or[x>up]),quote=F) xnew=x[-c(or[x<dn],or[x>up])]#the minus means remove those observations } #this right curly brace ends the if statement above #now outputs from the function are ready for extraction # with the use of the dollar symbol and are listed as follows list(below=LO,nLO=nLO,above=UP,nUP=nUP,low.lim=dn,up.lim=up, xnew=xnew)} #this right curly brace ends the function formally #TEST Example x=c(1,-4,3,4,5,55) #xx$xnew extracts xnew=revised x without outliers #xx$be extracts actual values below the lower outlier limit and so on # the “$b” is an abbreviation for “$below” # b alone works since nothing else in the “list” has b at the start # = = = = = = function ends here = = = = = = # WARNING on xnew for regression! It will not work! # If you are removing outliers in a regression be sure to remove # the complete matched set of observations for all variables. # e.g., if fifth observation is outlier in y but not in x or z and # if lm(y~x+z) is used, remove fifth observation from x, y and z # This will have to be done manually rather than by using xnew above # xnew works only if the model has only one variable #now assuming x, y and z are already in memory, type #__________________________ Cut here ____________________________ #object is to also provide greater digits in mean and sd and info about length #print(apply(xx,2,mean, na.rm=T)) print("standard deviations") print(apply(xx,2,sd, na.rm=T)) #_____________________Cut here ____________________________ get.skewkurt = function(x) #object compute third and fourth powers of deviations from mean #INPUT x =data # OUTPUT # sum3= sum of cubes of deviations from the mean # sum4= sum of fourth powers of deviations from the mean # devfromm=vector of deviations from the mean #new variance is (1+a)^2 times var(x) #new range is (1+a) times old range max(x)-min(x) while (i<=n) { i=i+1 } list(sum3=sum3, sum4=sum4, devfromm=devfromm) #_____________________Cut here ____________________________ sort.matrix =function(x,j) #sort matrix x by column j # and carry along the remaining columns #author H. D. Vinod, June 14, 2006. if (is.numeric(dd[1])){ #print("Error in sort.matrix function") fn=function (x,oo) {y=x[oo]; return(y)} return(y) } #_____________________Cut here ____________________________ cen.moments = function(x) #object compute 4 sample central moments and cumulants { n=length(x) m2=sum((x-m)^2)/(n-1) #WARNING dividing by n-1 not n here list(m2=m2, m3=m3, m4=m4, k1=k1, k2=k2, k3=k3, k4=k4) #_____________________Cut here ____________________________ PART B: GAUSS code 1) A code for testing the numerical accuracy of any software, written in GAUSS software. This link has useful gauss procedures for computing accurate mean and variance. 2) Following simple proc helps in reshaping the data without giving number of rows. @The following test program should be run to understand what it does. Note that since 8 is not divisible by 3, it ignores the last two data points if you want to reshape into 3 columns. of course, reshape is typically used for getting large data from ascii files, not for data typed in the way it is shown below. x={1, 2, 3, 4, 5, 6, 7, 8}; proc (1)=reshape2(x,ncol); @Author: H. D. Vinod, May 2, 1983. proc returns the reshaped matrix with correct number of rows local n,n1; clear n1; n=rows(x);"number of rows before reshaping= " n; " (number of rows before reshaping)/(no of columns) " n1; PART C: Some Great Tricks for Math Typing in MS Word If you want to type mathematical functions in MS Word without much difficulty, use the autocorrect in the Tools menu. The following file has many preset ideas. For example, \a gives alpha /app= gives approximately equal and numerous other useful symbols. the attached file called normal.dot can be downloaded. Use this to replace your normal.dot file typically in the location C:\Program Files\Microsoft Office\Templates C:\Documents and Settings\user\Application Data\Microsoft\Office\Recent Microsoft keeps changing this, but you can find it! Be careful though. Keep a backup copy before replacing. It may not work for your configuration. It has worked for many of my graduate students.
{"url":"http://www.fordham.edu/economics/vinod/softw-hd.htm","timestamp":"2014-04-20T11:06:36Z","content_type":null,"content_length":"33198","record_id":"<urn:uuid:be14a701-e0b5-42a1-91ed-8ee42c1667f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00333-ip-10-147-4-33.ec2.internal.warc.gz"}