Coordinate-free derivation of Einstein's field equation from the Hilbert action

It is well known that the equation for stationary solutions of the Einstein-Hilbert functional (without matter and cosmological constant, which is irrelevant here), $$S = \int_M R \mu_g,$$ is Einstein's field equation: $$Ric -\frac{1}{2}R g = 0, $$ where $\mu_g$ is the canonical volume form given by the metric $g$, $Ric$ is the Ricci curvature and $R$ is the Ricci scalar. The standard derivation of the above statement seems to be a not-so-hard but not-so-pleasant direct calculation, either in coordinates or abstract indices, expanding everything in terms of the Christoffel symbols and eventually in terms of $g$, and then applying calculus. My question is: is there a more geometric and coordinate-free way to derive this?

Tags: general-relativity, dg.differential-geometry, mp.mathematical-physics, calculus-of-variations

Answer (accepted): This can be found in Besse, "Einstein Manifolds", chapter 4. The idea is to use the Koszul formula for the Levi-Civita connection to compute the derivative of the curvature with respect to the metric. The Bianchi identities also help.

Comment: Thank you very much. That is indeed a coordinate-free calculation. Still, it is just the standard derivation in coordinate-free notation. So I didn't formulate my problem well, as I was hoping for something more geometrical rather than algebraic, say in terms of parallel transport or something of that kind. Thanks all the same. – Lizao Li, Sep 10 '12
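For reference, the standard variational computation that the question describes rests on two identities (sign conventions vary between texts): the variation of the volume form, $$\delta \mu_g = -\tfrac{1}{2}\, g_{ab}\, \delta g^{ab}\, \mu_g,$$ and the Palatini-type identity $$g^{ab}\, \delta R_{ab} = \nabla_c \left( g^{ab}\, \delta\Gamma^c_{ab} - g^{cb}\, \delta\Gamma^a_{ab} \right),$$ whose integral vanishes on a closed manifold (or for compactly supported variations). Together these give $$\delta S = \int_M \left( R_{ab} - \tfrac{1}{2} R\, g_{ab} \right) \delta g^{ab}\, \mu_g,$$ so stationarity under all variations of the metric yields $Ric - \tfrac{1}{2} R g = 0$.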
{"url":"https://mathoverflow.net/questions/106786/coordinate-free-derivation-of-the-einsteins-field-equation-from-the-hilbert-act/106792","timestamp":"2014-04-16T11:04:08Z","content_type":null,"content_length":"52847","record_id":"<urn:uuid:29e60fcc-be53-4f47-81ea-2c019e925319>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial Fraction Decompositions

General partial fraction decomposition is technically complicated and involves several cases. ... of the partial fraction decomposition depends on the type of ... – PowerPoint PPT presentation

Number of Views: 525 | Avg rating: 3.0/5.0 | Slides: 22 | Added by: Anonymous
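As a quick illustration of the idea (a simple case with distinct linear factors, not taken from the slides themselves): 1/(x^2 - 1) = 1/((x - 1)(x + 1)) = (1/2)/(x - 1) - (1/2)/(x + 1). When the denominator has repeated or irreducible quadratic factors, the form of the decomposition changes accordingly.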
{"url":"http://www.powershow.com/view/128c3f-ZjQxN/Partial_Fraction_Decompositions_powerpoint_ppt_presentation","timestamp":"2014-04-19T06:56:51Z","content_type":null,"content_length":"96638","record_id":"<urn:uuid:d0101f7d-e03d-4ed6-b749-02e07ab2daef>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
2.7: Distances or Dimensions Given Scale Measurements

Mr. Jones lives next door to Alex. He designed a plot with the following scale.

1" = 2.5 feet

Mr. Jones drew a plan for his garden showing a square plot with a side length of 4 inches. What is the actual side length of Mr. Jones' garden? What is the area of the plot? What is the perimeter?

In this Concept you will learn about scale and actual measurements. By the end of the Concept, you will know how to figure out the answers to these questions.

Maps represent real places. Every part of the place has been reduced to fit on a single piece of paper. A map is an accurate representation because it uses a scale. The scale is a ratio that relates the small size of a representation of a place to the real size of a place.

Maps aren't the only places where we use a scale. Architects use a scale when designing a house. A blueprint shows a small size of what the house will look like compared to the real house. Any time a model is built, it probably uses a scale. The actual building or mountain or landmark can be built small using a scale.

We use units of measurement to create a ratio that is our scale. The ratio compares two things. It compares the small size of the object or place to the actual size of the object or place. A scale of 1 inch to 1 foot means that 1 inch on paper represents 1 foot in real space. If we were to write a ratio to show this we would write 1" : 1 ft - this would be our scale. If the distance between two points on a map is 2 inches, the scale tells us that the actual distance in real space is 2 feet.

We can make scales of any size. One inch can represent 1,000 miles if we want our map to show a very large area, such as a continent. One centimeter might represent 1 meter if the map shows a small space, such as a room.

How can we figure out actual distances or dimensions using a scale? Let's start by thinking about distances on a map. On a map, we have a scale that is usually found in the corner. For example, if we have a map of the state of Massachusetts, the scale might tell us that every $\frac{3}{4}$ inch on the map represents a certain number of miles. What is the distance from Boston to Framingham? To work on this problem, we need to use our scale: we measure the distance from Boston to Framingham with a ruler, and then multiply to find the number of miles that measurement represents.

If the scale is 1" : 500 miles, how far is a city that measures $5\frac{1}{2}$ inches away on the map? $5\frac{1}{2} \times 500 = 2750$ miles.

By using arithmetic, we were able to figure out the mileage. Another way to do this is to write two ratios. We can compare the scale with the scale and the distance with the distance.

Here are a few problems for you to try on your own.

Example A

If the scale is 1" : 3 miles, how many miles does 5 inches represent?

Solution: 15 miles

Example B

If the scale is 2" : 500 meters, how many meters does 4 inches represent?

Solution: 1000 meters

Example C

If the scale is 5 ft : 1000 feet, how many feet is 50 feet?

Solution: 10,000 feet

Now back to Mr. Jones' garden. Here is the original problem once again.

Mr. Jones lives next door to Alex. He designed a plot with the following scale.

1" = 2.5 feet

Mr. Jones drew a plan for his garden showing a square plot with a side length of 4 inches. What is the actual side length of Mr. Jones' garden? What is the area of the plot? What is the perimeter?

First, we can use the scale to figure out the actual side length of the plot.
The side length in the drawing is 4 inches. That is four times the scale.

$2.5 \times 4 = 10$

The actual side length of the plot is 10 feet. The perimeter of a square is four times the side length, so the perimeter of this plot is 40 feet. The area of the square is found by multiplying the side length by the side length, so the area of this plot is 100 square feet.

Here are the vocabulary words in this Concept.

Scale: a ratio that compares a small size to a larger actual size. One measurement represents another measurement in a scale.

Ratio: the comparison of two things.

Proportion: a pair of equal ratios; we cross multiply to solve a proportion.

Guided Practice

Here is one for you to solve on your own.

If the scale is 2" : 1 ft, what is the actual measurement if a drawing shows the object as 6" long?

We can start by writing a ratio that compares the scale.

$\frac{1 \ ft}{2}=\frac{x \ ft}{6}$

$1 \times 6 = 6$

$2(x) = 2x$

$2x = 6$ (What times two will give us 6?)

$x = 3 \ ft$

The object is actually 3 feet long.

Video Review

Here are a few videos for review.

Khan Academy: Scale and Indirect Measurement

Directions: Use the given scale to determine the actual distance.

Given: Scale 1" = 100 miles

1. How many miles is 2" on the map?

2. How many miles is $2\frac{1}{2}$ inches on the map?

3. How many miles is $\frac{1}{4}$ inch on the map?

4. How many miles is 8 inches on the map?

5. How many miles is 16 inches on the map?

6. How many miles is 12 inches on the map?

7. How many miles is $\frac{1}{2}$ inch on the map?

8. How many miles is $5 \frac{1}{4}$ inches on the map?

Given: 1 cm = 20 mi

9. How many miles is 2 cm on the map?

10. How many miles is 4 cm on the map?

11. How many miles is 8 cm on the map?

12. How many miles is 18 cm on the map?

13. How many miles is 11 cm on the map?

14. How many miles is $\frac{1}{2}$ cm on the map?

15. How many miles is $1 \frac{1}{2}$ cm on the map?

16. How many miles is $4 \frac{1}{4}$ cm on the map?
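For teachers or students who like to check their work with a short computer program, the scale conversion above can also be written in a few lines of Python. This is only an illustrative sketch (it is not part of the CK-12 Concept), and the example measurements are made up rather than taken from the practice problems:

    # Convert a measurement on a drawing or map to an actual distance,
    # given a scale such as 1 inch = 100 miles.
    def actual_distance(map_measurement, scale_map, scale_actual):
        return map_measurement * (scale_actual / scale_map)

    print(actual_distance(6, 1, 100))  # 6 inches at 1" = 100 miles -> 600.0 miles
    print(actual_distance(5, 1, 20))   # 5 cm at 1 cm = 20 mi -> 100.0 miles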
{"url":"http://www.ck12.org/book/CK-12-Concept-Middle-School-Math---Grade-6/r1/section/2.7/","timestamp":"2014-04-20T04:38:01Z","content_type":null,"content_length":"132666","record_id":"<urn:uuid:8b78ab36-0511-4ad1-bcb6-f4bc3c06bfde>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Quintessence of So Subtle For his art did express A quintessence even from nothingness, From dull privations and lean emptiness; He ruined me, and I am re-begot Of absence, darkness, death; things which are not. John Donne, 1633 Descartes (like Aristotle before him) believed that nature abhors a vacuum, and insisted that the entire universe, even regions that we commonly call "empty space", must be filled with (or, more precisely, must consist of) some kind of substance. He believed this partly for philosophical reasons, which might be crudely summarized as "empty space is nothingness, and nothingness doesn't exist". He held that matter and space are identical and co-extant (ironically similar to Einstein's later notion that the gravitational field is identical with space). In particular, Descartes believed an all-pervasive substance was necessary to account for the propagation of light from the Sun to the Earth (for example), because he rejected any kind of "action at a distance", and he regarded direct mechanical contact (taken as a primitive operation) as the only intelligible means by which two objects can interact. He conceived of light as a kind of pressure, transmitted instantaneously from the source to the eye through an incompressible intervening medium. Others (notably Fermat) thought it more plausible that light propagated with a finite velocity, which was corroborated by Roemer's 1675 observations of the moons of Jupiter. The discovery of light's finite speed was a major event in the history of science, because it removed any operational means of establishing absolute simultaneity. The full significance of this took over two hundred years to be fully appreciated. More immediately, it was clear that the conception of light as a simple pressure was inadequate to account for the different kinds of light, i.e., the phenomenon of color. To remedy this, Robert Hooke suggested that the (longitudinal) pressures transmitted by the ether may be oscillatory, with a frequency corresponding to the color. This conflicted with the views of Newton, who tended to regard light as a stream of particles in an empty void. Huygens advanced a fairly well-developed wave theory, but could never satisfactorily answer Newton's objections about the polarization of light through certain crystals ("Iceland spar"). This difficulty, combined with Newton's prestige, made the particle theory dominant during the 1700's, although many people, notably Jean Bernoulli and Euler, held to the wave theory. In 1800 Thomas Young reconciled polarization with the wave theory by postulating the light actually consists of transverse rather than longitudinal waves, and on this basis - along with Fresnel's explanation of diffraction in terms of waves - the wave theory gained wide acceptance. However, Young's solution of the polarization problem immediately raised a new one, namely, how a system of transverse waves could exist in the ether, which had usually been assumed to be akin to a tenuous gas or fluid. This prompted generations of physicists, including Navier, Stokes, Kelvin, Malus, Arago, and Maxwell to become actively engaged in attempts to explain optical phenomena in terms of a material medium; in fact, this motivated much of their work in developing the equations of state for elastic media, which have proven to be so useful for the macroscopic treatment of fluids. 
However, despite the fruitfulness of this effort for the development of fluid dynamics, no one was ever able to accurately account for all optical and electro-magnetic phenomena in terms of the behavior of an ordinary fluid medium, with or without viscosity and/or compressibility. There were a number of reasons for this failure. First, an ordinary fluid (even a viscous fluid) can't sustain shear stresses at rest, so it can propagate only longitudinal waves, as opposed to the transverse wave structure of light implied by the phenomenon of polarization. This implies either that the luminiferous ether must be a solid, or else we must postulate some kind of persistent dynamics (such as vortices) in the fluid so that it can sustain shear stresses. Unfortunately, both of these alternatives lead to difficulties. The assumption of a solid ether is difficult to reconcile with the fact that the equations of state for ordinary elastic solids always yield longitudinal waves accompanying any transverse waves - typically with different velocities. Such longitudinal disturbances are never observed with respect to optical phenomena. On the other hand, the assumption of a fluid ether with persistent flow patterns to sustain the required shear stresses entails a highly coordinated and organized system of flow cells that could persist only with the active participation of countless tiny “Maxwell demons” working furiously at each point to sustain it. Lacking this, the vortices are inherently unstable (even in an ideal perfect inviscid fluid, in which vorticity is strictly conserved), so these flow cells could not exert the effects on ordinary matter that they must if they are to serve as the mechanism of electromagnetic forces. Even the latter-day concept of an ether consisting of a superfluid (i.e., the viscosity-free quantum hydrodynamical state achieved by some substances such as helium when cooled to near absolute zero) faces the same problem of sustaining its specialized state while simultaneously interacting with ordinary matter in the required ways. As Maxwell acknowledged No theory of the constitution of the ether has yet been invented which will account for such a system of molecular vortices being maintained for an indefinite time without their energy being gradually dissipated into that irregular agitation of the medium which, in ordinary media, is called heat. Thus, ironically, the concept of transverse waves - proposed by Young and Fresnel as a means of accounting for polarization of light in term of a mechanical wave propagating in some kind of material ether - immediately led to considerations that ultimately undermined confidence in the physicality and meaningfulness of that ether. Even aside from the difficulty of accounting for exclusively transverse waves in a material medium, the idea of a substantial ether filling all of space had always faced numerous difficulties. For example, Newton had shown (in his demolition of Descartes' vortex theory) that the evidently drag-free motion of the planets and comets was flatly inconsistent with the presence of any significant density of interstitial fluid. This problem is especially acute when we remember that, in order to account for the high speed of light, the density and rigidity of the putative ether must be far greater than that of steel. Serious estimates of the density of the ether varied widely, but ran as high as 1000 tons per cubic millimeter. 
It is then necessary to explain the interaction between this putative material ether and all other known substances. Since the speed of light changes in different material media, there is clearly a significant interaction, and yet apparently this interaction does not involve any appreciable transfer of ordinary momentum (since otherwise the unhindered motions of the planets are inexplicable). One interesting suggestion was that it might be possible to account for the absence of longitudinal waves by hypothesizing a fluid that possesses vanishingly little resistance to compression, but extremely high rigidity with respect to transverse stresses. In other words, the shear stresses are very large, while the normal stresses vanish. The opposite limit is easy to model with the Navier-Stokes equation by setting the viscosity to zero, which gives an ideal non-viscous fluid with no shear stresses and with the normal stresses equal to the pressure. However, we can't use the ordinary Navier-Stokes equations to represent a substance of high viscosity and zero pressure, because this would imply zero density, and even if we postulate some extremely small (but non-zero) pressure, the normal stresses in the Navier-Stokes equations have components that are proportional to the viscosity, so we still wouldn't be rid of them. We'd have to postulate some kind of adaptively non-isotropic viscosity, and then we wouldn't be dealing with anything that could reasonably be called an ordinary material substance.

As noted above, the intense efforts to understand the dynamics of a hypothetical luminiferous ether fluid led directly to the modern understanding of fluid dynamics, as modeled by the Navier-Stokes equation for fluids of arbitrary viscosity and compressibility. This equation can be written in vector form as

∂V/∂t + (V·∇)V = -∇p/ρ + F + ν∇²V + (ν/3)∇(∇·V)

where p is the pressure, ρ is the density, F the external force vector (per unit mass), ν is the kinematic viscosity, and V is the fluid velocity vector. If the fluid is incompressible then the divergence of the velocity is zero, so the last term vanishes. It's interesting to consider whether anything can be inferred about the vacuum from this equation. By definition, a vacuum has vanishing density, pressure, and viscosity - at least in the ordinary senses of those terms. Setting these quantities to zero, and in the absence of any external force F, the above equation reduces to dV/dt = -∇p/ρ. Since both p and ρ are to equal zero, this equation can only be evaluated on the basis of some functional relationship between those two variables. For example, we may assume the ideal gas law, p = ρRT, where R is the gas constant and T is temperature. In that case we can evaluate the limit of ∇p/ρ as p and ρ approach zero. This rather ghostly proposition apparently describes the disembodied velocity and temperature of a medium possessing neither density nor heat capacity. In a sense it is a medium of pure form and no substance. Of course, this is physically meaningless unless we can establish a correspondence between the terms and some physically observable effects. It was hoped by Stokes, Maxwell, and others that some such identification of terms might enable a limiting case of the Navier-Stokes equation to represent electromagnetic phenomena, but the full delineation of Maxwell's equations for electromagnetism makes it clear that they do not describe the movement of any ordinary material substance, which of course was the basis for the Navier-Stokes equation.
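To see why some functional relationship between pressure and density is needed here, note that with the ideal gas law the pressure-gradient term expands (by the product rule, independently of any limiting argument) as ∇p/ρ = ∇(ρRT)/ρ = R∇T + RT(∇ρ/ρ), so the behavior of the reduced equation as the density goes to zero hinges entirely on how the relative density gradient ∇ρ/ρ is assumed to behave.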
Another interesting suggestion was that the luminiferous ether might consist of a substance whose constituent parts, instead of resisting changes in their relative distances (translation), resist changes in orientation. A theory along these lines was proposed by MacCullagh in 1839, and actually led to some of the same formulas as Maxwell's electromagnetic theory. This is an intriguing fact, but it doesn't represent an application (or even an adaptation) of the equations of motion for either an ordinary elastic substance, whether gas, fluid, or solid. It's more properly regarded as an abstract mathematical model with only a superficial resemblance to descriptions of the behavior of material substances. Some of the simplest material ether theories were ruled out simply on the basis of first-order optical phenomena, especially stellar aberration. For example, Stokes' theory of complete convection could correctly model aberration (to first order) only with a set of hypotheses that Lorentz later showed to be internally inconsistent. (Stokes erroneously assumed the velocity of a potential flow stream around a sphere is zero at the sphere’s surface.) Fresnel's theory of partial convection was (more or less) adequate, up until it became possible to measure second-order effects, at which point it too was invalidated. But regardless of their empirical failures, none of these theories really adhered to the laws of ordinary fluid mechanics. William Thomson (Lord Kelvin), who was perhaps the most persistent of all in the attempt to represent electromagnetic phenomena in terms of the mechanics of ordinary macroscopic substances, aptly summarized the previous half-century of progress in this line of research at a jubilee in his honor in 1896: One word characterizes the most strenuous efforts for the advancement of science that I have made perseveringly during fifty-five years; that word is failure. I know no more of electric and magnetic force, or of the relation between ether, electricity, and ponderable matter… than I knew… fifty years ago. We might think this assessment was too harsh, especially considering that virtually the entire science of classical electromagnetism - based on Maxwell’s equations - was developed during the period in question. However, in the course of this development Maxwell and his followers had abandoned the effort to find mechanical analogies, and Kelvin equated progress with finding a mechanical analogy. The failure to find any satisfactory mechanical model for electromagnetism led to the abandonment of the principle of qualitative similarity, which is to say, it led to the recognition that the ether must be qualitatively different from ordinary substances. This belief was firmly established once Maxwell showed that longitudinal waves cannot propagate through transparent substances or free space. In so doing, he was finally able to show that all electromagnetic and optical phenomena can be explained by a single system of "stresses in the ether", which, however, he acknowledged must obey quite different laws than do the elastic stresses in ordinary material substances. E. T. Whittaker’s book “Aether and Electricity” includes a review of the work of Kelvin and others to find a mechanical model of the ether concludes that Towards the close of the nineteenth century… it came to be generally recognized that the aether is an immaterial medium, sui generis, not composed of identifiable elements having definite locations in absolute space. 
Thus by the time of Lorentz it had become clear that the "ether" was simply being arbitrarily assigned whatever formal (and often non-materialistic) properties it needed in order to make it compatible with the underlying electromagnetic laws, and therefore the "corporeal" ether concept was no longer exerting any positive heuristic benefit, but was simply an archaic appendage that was being formalistically superimposed on top of the real physics for no particular reason.

Moreover, although the Navier-Stokes equation is as important today for fluid dynamics as Maxwell's equations are for electrodynamics, we've also come to understand that real fluids and solids are not truly continuous media. They actually consist of large numbers of (more or less) discrete particles. As it became clear that the apparently continuous dynamics of fluids and solids were ultimately just approximations based on an aggregate of more primitive electromagnetic interactions, the motivation for trying to explain the latter as an instance of the former came to be seriously questioned. It is rather like saying gold consists of an aggregate of sub-atomic particles, and then going on to say that those sub-atomic particles are made of gold! The effort to explain electromagnetism in terms of a material fluid such as we observe on a macroscopic level, when in fact the electromagnetic interaction is a much more primitive phenomenon, appears today to have been fundamentally misguided, an attempt to model a low-level phenomenon as an instance of a higher level phenomenon.

During the last years of the 19th century a careful and detailed examination of electrodynamic phenomena enabled Lorentz, Poincaré, and others to develop a theory of the electromagnetic ether that accounted for all known observations, but only by concluding that "the ether is undoubtedly widely different from all ordinary matter". This is because, in order to simultaneously account for aberration, polarization and transverse waves, the complete absence of longitudinal waves, and the failure of the Michelson/Morley experiment to detect any significant ether drift, Lorentz was forced to regard the ether as strictly motionless, and yet subject to non-vanishing stresses, which is contradictory for ordinary matter. Even in Einstein's famous essay on "The Ether and Relativity" he points out that although "we may assume the existence of an ether, we must give up ascribing a definite state of motion to it, i.e. we must take from it the last mechanical characteristic...". He says this because, like Lorentz, he understood that electromagnetic phenomena simply do not conform to the behavior of disturbances in any ordinary material substance - solid, liquid, or gas. Obviously if we wish to postulate some new kind of "substance" whose properties are not constrained to be those of an ordinary substance, we can "back out" whatever properties are needed to match the equations of any field theory (which is essentially what Lorentz did), but this is just an exercise in re-stating the equations in ad hoc verbal terms. Such a program has no heuristic or explanatory content. The question of whether electromagnetic phenomena could be accurately modeled as disturbances in an ordinary material medium was quite meaningful and deserved to be explored, but the answer is unequivocally that the phenomena of electromagnetism do not conform to the principles governing the behavior of ordinary material substances.
In fact, we now understand that the latter are governed by the former, i.e., elementary electromagnetic interactions underlie the macroscopic behavior of ordinary material substances. We shouldn't conclude this review of the ether without hearing Maxwell on the subject, since he devoted his entire treatise on electromagnetism to it. Here is what he says in the final article of that immense work:

The mathematical expressions for electrodynamic action led, in the mind of Gauss, to the conviction that a theory of the propagation of electric action [as a function of] time would be found to be the very keystone of electrodynamics. Now, we are unable to conceive of propagation in time, except either as the flight of a material substance through space, or as the propagation of a condition of motion or stress in a medium already existing in space... If something is transmitted from one particle to another at a distance, what is its condition after it has left the one particle and before it has reached the other? ...whenever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other, for energy, as Torricelli remarked, 'is a quintessence of so subtle a nature that it cannot be contained in any vessel except the inmost substance of material things'. Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as an hypothesis, I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its action, and this has been my constant aim in this treatise.

Surely the intuitions of Gauss and Torricelli have been vindicated. Maxwell's dilemma about how the energy of light "exists" during the interval between its emission and absorption was resolved by the modern theory of relativity, according to which the absolute spacetime interval between the emission and absorption of a photon is identically zero, i.e., photons are transmitted along null intervals in spacetime. The quantum phase of events, which we identify as the proper time of those events, does not advance at all along null intervals, so, in a profound sense, the question of a photon's mode of existence "after it leaves one body and before it reaches the other" is moot (as discussed in Section 9). Of course, no one from Torricelli to Maxwell imagined that the propagation of light might depend fundamentally on the existence of null connections between distinct points in space and time. The Minkowskian structure of spacetime is indeed a quintessence of a most subtle nature.
{"url":"http://www.mathpages.com/rr/s3-05/3-05.htm","timestamp":"2014-04-21T09:37:46Z","content_type":null,"content_length":"35298","record_id":"<urn:uuid:91fedc24-2403-4c02-9851-f84da5c51423>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Fastest way to cull bounding box in frustum

01-22-2002, 10:56 PM

I have a very large dataset that I have partitioned with a BSP. Each partition is axis-aligned with the graphics coordinate system and its volume is defined in terms of a centroid and two vertices representing the maximum and minimum coordinates in graphics space. I grab the viewing frustum from the m/p matrices at the start of each frame and store it in the form of 6 plane equations. I am successfully using it to do point and sphere culling. However, I also need to cull the partitions ...

Now, I have considered just using the vertices at each corner and the centroid in a point culling algorithm (5 point tests). I have also considered just using a sphere test in which the culling radius was based on the maximum dimension of the partitions. I tossed the sphere idea since the dimensions of the partitions varied considerably, giving rise to a spherical volume much larger than the partition. I am considering using plane/line intersection testing to do the job also.

I need to make this portion of code as fast as I possibly can since there may be hundreds of partition volumes in the scene. My question is this ... Has anyone done any testing to benchmark the performance of plane/line intersection testing versus, say, 5 point tests?

[This message has been edited by pleopard (edited 01-22-2002).]
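A common alternative to testing all eight corners (or the corners plus the centroid) is the "positive vertex" test: for each of the six frustum planes, check only the box corner that lies farthest along that plane's normal; if even that corner is behind the plane, the whole box is outside. A minimal sketch in Python (intended as runnable pseudocode rather than production code), assuming each plane is stored as (a, b, c, d) with the normal pointing into the frustum and each partition as its min/max corners:

    def box_outside_frustum(box_min, box_max, planes):
        # box_min, box_max: (x, y, z) corners of the axis-aligned box
        # planes: six (a, b, c, d) tuples with a*x + b*y + c*z + d >= 0 inside the frustum
        for a, b, c, d in planes:
            # pick the corner farthest in the direction of the plane normal
            px = box_max[0] if a >= 0 else box_min[0]
            py = box_max[1] if b >= 0 else box_min[1]
            pz = box_max[2] if c >= 0 else box_min[2]
            # if even that corner is behind the plane, the box is fully outside
            if a * px + b * py + c * pz + d < 0:
                return True
        return False

This costs at most six plane evaluations per partition, each touching a single corner, so it is generally cheaper than eight (or five) point-in-frustum tests and avoids the oversized bounding-sphere problem described above. Like any per-plane test it is conservative: it never culls a visible box, though it can occasionally keep a box that lies outside the frustum near an edge.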
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-147067.html","timestamp":"2014-04-21T00:00:23Z","content_type":null,"content_length":"15564","record_id":"<urn:uuid:4fb4d427-8545-4cd2-8654-79dfddaf7cab>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding a centroid

April 18th 2009, 02:28 AM #1

Find the centroid of the triangular region cut from the first quadrant by the line $x+y=3$.

What I've tried:

$M= \int_0^3 \int_0^{3-x} dy \ dx = \int_0^3 3-x \ dx = \left[ 3x-\frac{x^2}{2} \right]_0^3=9-\frac{9}{2}=\frac{9}{2}$

$M_x=\int_0^3 \int_0^{3-x} y \ dy \ dx=\frac{1}{2} \int_0^3 (3-x)^2 \ dx=\frac{1}{2} \int_0^3 9-6x+x^2 \ dx=\frac{1}{2} \left[9x-3x^2+\frac{x^3}{3} \right]_0^3=\frac{1}{2} \left(27-27+\frac{27}{3} \right)=\frac{27}{6}=\frac{9}{2}$

$M_y=\int_0^3\int_0^{3-x} x \ dy \ dx=\int_0^3 3x-x^2 \ dx= \left[\frac{3x^2}{2}-\frac{x^3}{3} \right]_0^3=\frac{27}{2}-\frac{27}{3}=\frac{27}{6}=\frac{9}{2}$

$\overline{x}=\frac{9}{2} \left( \frac{2}{9} \right)=1$

$\overline{y}=\frac{9}{2} \left( \frac{2}{9} \right)=1$

This means that the centre of mass is $(1,1)$. However, intuitively this doesn't seem right to me. I would expect it to be at $\left( \frac{3}{2}, \frac{3}{2} \right)$ since that would be the middle of the two sides (and the expression for the density is constant). So can anyone see if $(1,1)$ is correct? If it isn't correct, where have I gone wrong in my integrals?

April 18th 2009, 04:39 AM #2 (Super Member, joined Dec 2008)

It's only in the centre of the object if the object is symmetrical about that axis. You can see that the triangle is symmetrical if you look at it from the point of view of its hypotenuse. And the coordinate (1,1) is indeed in its centre with respect to that axis. So all is well. You wouldn't expect it to be in the centre on any other axis because it isn't symmetrical on any other axis. (1,1) is fine.
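As a quick cross-check on the answer, one can use the standard fact that the centroid of a uniform triangular lamina is the average of its vertices, which here are $(0,0)$, $(3,0)$ and $(0,3)$: $\overline{x}=\frac{0+3+0}{3}=1$ and $\overline{y}=\frac{0+0+3}{3}=1$, in agreement with the integrals above.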
{"url":"http://mathhelpforum.com/calculus/84257-finding-centroid.html","timestamp":"2014-04-16T05:09:45Z","content_type":null,"content_length":"38664","record_id":"<urn:uuid:377a02ea-eb36-4b46-b358-4b28ee6b5eef>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Department, Princeton University High Energy Theory Seminar - Rak-Kyeong Seong - Imperial College London - "An evolution of Brane Tilings" One way in which supersymmetric gauge theories can arise in string theory is as worldvolume theories of probe branes at toric Calabi-Yau singularities. For N=1 4d worldvolume theories of D3 branes at Calabi-Yau 3-fold singularities, the field theory information can be represented by a bipartite graph on a torus, which is known as a brane tiling. Bipartite graphs as brane tilings have recently caught much attention in relation to quantum integrable systems, SYM scattering amplitudes and superconformal indices. The talk will give an introduction to brane tilings and explore recent developments. In particular, I will outline how a new correspondence called specular duality between moduli spaces of brane tilings can be used to study a new class of field theories. Location: PCTS Seminar Room Date/Time: 10/22/12 at 2:30 pm - 10/22/12 at 3:30 pm Category: High Energy Theory Seminar Department: Physics
{"url":"http://www.princeton.edu/physics/events/viewevent.xml?id=449","timestamp":"2014-04-21T00:19:00Z","content_type":null,"content_length":"10893","record_id":"<urn:uuid:102bb0d2-df2e-4ba4-838c-4c1d5d6204d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Cambridge is renowned for the excellence of its Mathematics course. Equally challenging and rewarding, it offers the opportunity to study a wide range of subjects: everything from black holes to abstruse logic problems.

UCAS code: G100 BA/Math
Duration: Three or four years
Colleges: Available at all Colleges except Wolfson; most Colleges don't encourage deferred entry
2013 entry: Applications per place: 6; Number accepted: 234
Open days and events 2014: Department open day - 26 April, 3 May (booking required, see the Faculty website); College open days (sciences); Cambridge Open Days - 3 July, 4 July 2014
Contact details: admissions@maths.cam.ac.uk

Flexibility: a course that suits you
The Cambridge Mathematics course is often considered to be the most demanding undergraduate Mathematics course available in Britain and, correspondingly, one of the most rewarding. Two other aspects of the course that our students greatly appreciate are its flexibility and the breadth of subjects offered. The amount of choice increases each year, and after Year 1 the workload isn't fixed, so you can choose the number of options you study to suit your own work pattern. Some students take as many options as they can; others take fewer and study them very thoroughly.

Our Faculty
Since Sir Isaac Newton was Lucasian Professor (1669-96), mathematics teaching and research here have been enhanced by a string of brilliant mathematicians including six Fields Medallists and even Nobel Prize winners. Most current Faculty members are leading international authorities on their subjects. Our Faculty is also closely linked with the Isaac Newton Institute, which attracts specialists from all over the world to tackle outstanding problems in the mathematical sciences.

Changing course
About 10 per cent of students change from Mathematics each year. Many of these have taken the Mathematics with Physics option in their first year, with the intention of changing to Physics (Natural Sciences). Over the years, mathematicians have changed successfully to nearly every other subject taught at Cambridge. However, it's not advisable to apply for Mathematics intending to transfer to a subject other than Physics.

A Cambridge Mathematics degree is versatile and very marketable. The demand for our mathematicians is high in business, commerce and industry, as well as the academic world. Around 45 per cent of our students go on to further study, while others follow a wide variety of careers. Recent graduates include a metrologist, sports statistician, journalist, and an avionics, radar and communications engineer, as well as teachers, actuaries, accountants, IT specialists, financiers and consultants.

Course outline
In Year 1, you typically have 12 lectures and two supervisions each week. In the following years, the greater choice and flexibility means that the pattern of lectures and supervisions is more irregular, but the average workload is roughly the same. You sit four written examination papers each year. In addition, there are optional computer projects in Years 2 and 3. In the fourth year, each course is examined individually.

In the first year, you choose one of two pathways:
• option (a) Pure and Applied Mathematics, for students intending to continue with Mathematics
• option (b) Mathematics with Physics, for students who may want to study Physics after the first year
You can still continue with Mathematics in the second year if you take option (b).
Part IA introduces you to the fundamentals of higher mathematics, including:
• the study of algebraic systems (such as groups)
• analysis of calculus
• probability
• mathematical methods (such as vector calculus)
• Newtonian dynamics and Special Relativity
You take eight subjects. Those taking Mathematics with Physics replace two Mathematics subjects with Part IA Physics from Natural Sciences, covering, for example, kinetic theory, Fourier analysis, and electromagnetism.

In Part IB, you choose from 17 options available. In most of these, the topics of the first year are studied in much greater depth but some new topics are offered, for example:
• geometry
• electromagnetism, quantum mechanics and fluid dynamics
• applicable mathematics, which includes statistics and optimisation (a rigorous treatment of topics from decision mathematics)
• numerical analysis
There are also optional computational projects (assessed by means of notebooks and programmes submitted before the summer examinations), using computers to solve mathematical problems.

Year 3 gives you the opportunity to explore your mathematical interests in detail. There is a very wide choice of papers, including, for example:
• cryptography
• algebraic topology
• number theory
• cosmology
• general relativity
• stochastic financial models
• waves
There are also optional computational projects.

Part III has a world-wide reputation for training the very best research mathematicians. Progression to Part III, in which more than 80 options are offered, normally requires a first in Part II or a very good performance in Parts IB and II, and successful completion leads to a BA/MMath. See the Faculty website for more details.

Entry requirements
Typical offers require:
A Level: A*A*A + STEP
IB: 40-41 points, with 776 at Higher Level + STEP
For other qualifications, see our main Entrance requirements pages.
Essential: A Level Mathematics and AS Level Further Mathematics / IB Higher Level Mathematics
Highly desirable: A Level Further Mathematics, Mechanics modules

The table below relates to A Level (or equivalent) subject preferences of individual Colleges for admission to study Mathematics. STEP Mathematics is required as part of almost all conditional offers in Mathematics (including Mathematics with Physics). See Entrance requirements and our Subject Matters leaflet for additional advice about general requirements for entry, qualifications and offers.
Essential: It is likely that you will be rejected without interview if you do not have this subject
Preferred: Qualifications in these subjects are preferred for admission but they are not essential
Useful: If you don't have this subject your application will not be disadvantaged, but it may affect your ability to cope with the course and limit the options available to you.
M = Mathematics, FM = Further Mathematics

College (Essential / Preferred / Useful / Comments):
Christ's: M, FM (AS) / FM
Churchill: M, FM (AS) / FM
Clare: M, FM (AS) / FM
Corpus Christi: M, FM (AS) / FM / Science subjects / Applicants with broader selection of subjects not precluded
Downing: M, FM (AS) / FM
Emmanuel: M, FM (AS) / FM
Fitzwilliam: M, FM (AS) / FM
Girton: M, FM (AS) / FM
Gonville & Caius: M, FM (AS) / FM
Homerton: M, FM (AS) / FM
Hughes Hall: M, FM (AS) / Science subjects / A qualification in a third science subject would be advantageous
Jesus: M, FM (AS) / FM
King's: M, FM (AS) / FM
Lucy Cavendish: M, FM (AS) / FM
Magdalene: M, FM (AS) / FM
Murray Edwards: M, FM (AS) / FM
Newnham: M, FM (AS) / FM / Science subjects
Pembroke: M, FM (AS) / FM
Peterhouse: M, FM (AS)
Queens': M, FM (AS) / FM
Robinson: M, FM (AS) / FM
St Catharine's: M, FM (AS) / FM
St Edmund's: M, FM (AS)
St John's: M, FM
Selwyn: M, FM (AS) / FM
Sidney Sussex: M, FM (AS) / FM
Trinity: Please consult the College website
Trinity Hall: M, FM
Wolfson: Subject not available

The table below sets out the ways in which each College assesses applicants for this subject. For more information about these methods of assessment and why we use them, see the main Admissions tests and written work page.

College: Assessment of applicant for this subject
Christ's: Maths STEP; Test at interview
Churchill: Maths STEP; Test at interview
Clare: Maths STEP
Corpus Christi: Maths STEP; Test at interview
Downing: Maths STEP; Test at interview
Emmanuel: Maths STEP
Fitzwilliam: Maths STEP; Preparatory study at interview
Girton: Maths STEP; Test at interview
Gonville & Caius: Maths STEP; Preparatory study at interview
Homerton: Maths STEP; Test at interview
Hughes Hall: Maths STEP; Test at interview
Jesus: Maths STEP; Preparatory study at interview
King's: Maths STEP; Test at interview
Lucy Cavendish: Maths STEP; Test at interview
Magdalene: Maths STEP; Test at interview
Murray Edwards: Maths STEP; Test at interview
Newnham: Maths STEP; Preparatory study at interview
Pembroke: Maths STEP; Preparatory study at interview
Peterhouse: Maths STEP
Queens': Maths STEP
Robinson: Maths STEP; Test at interview
St Catharine's: Maths STEP
St Edmund's: Maths STEP; Test at interview
St John's: Maths STEP; May be given unseen maths problem at interview
Selwyn: Maths STEP
Sidney Sussex: Maths STEP
Trinity: Maths STEP; Test at interview
Trinity Hall: Maths STEP
Wolfson: Not available at this College

If you are interested in applying for this course, please see our Applying section for more details.

Further Resources
Find out more about Mathematics at Cambridge
• Course website - Explore the Mathematics degree in more detail on the course website.
• Course guide - A detailed guide to the Mathematics degree.
Improve your knowledge of Mathematics
• Maths study skills - A guide to the study skills that will help you study Mathematics at Cambridge, and also give you a flavour of the teaching styles here.
• NRICH Mathematics - Maths problems, games and history aimed at AS and A-level students wishing to enrich their experience and understanding of mathematics.
• Plus Magazine - An online magazine 'opening a door to the world of maths', as part of the Cambridge-based Millennium Maths Project.
Tools to help you with your Mathematics application
• STEP guidance - Information about the STEP examination which applicants to study Maths at Cambridge are required to take.
• Application information - A detailed guide to applying to study Mathematics at Cambridge.
Unistats info Contextual Information From September 2012, every undergraduate course of more than one year's duration will have a Key Information Set (KIS). The KIS allows you to compare 17 pieces of information about individual courses at different higher education institutions. However, please note that superficially similar courses often have very different structures and objectives, and that the teaching, support and learning environment that best suits you can only be determined by identifying your own interests, needs, expectations and goals, and comparing them with detailed institution- and course-specific information. We recommend that you look thoroughly at the course and University information contained on these webpages and consider coming to visit us on an Open Day, rather than relying solely on statistical You may find the following notes helpful when considering information presented by the KIS. 1. The KIS relies on superficially similar courses being coded in the same way. Whilst this works on one level, it leads to some anomalies. For example, Music courses and Music Technology courses can have exactly the same code despite being very different programmes with quite distinct educational and career outcomes. Any course which combines several disciplines (as many courses at Cambridge do) tends to be compared nationally with courses in just one of those disciplines, and in such cases a KIS comparison may not be an accurate or fair reflection of the reality of either. For example, you may find that when considering a degree which embraces a range of disciplines such as biology, physics, chemistry and geology (for instance, Natural Sciences at Cambridge), the comparison provided is with courses at other institutions that primarily focus on just one (or a smaller combination) of those subjects. 2. Whilst the KIS makes reference to some broad types of financial support offered by institutions, it cannot compare packages offered by different institutions. Different students have different circumstances and requirements, and you should weigh up what matters to you most: level of fee; fee waivers; means-tested support such as bursaries; non-means-tested support such as academic scholarships and study grants; and living costs such as accommodation, travel. 3. The KIS provides a typical cost of private (ie non-university) accommodation. This is very difficult to estimate as prices and properties vary. University accommodation can be substantially cheaper, and if you are likely to live in College for much or all of the duration of your course (as is the case at Cambridge), then the cost of private accommodation will be of less or no relevance for you. The KIS also provides the typical annual cost of university accommodation and the number of beds available. Note that since most universities offer a range of residential accommodation, you should check with institutions about the likelihood of securing a room at a price that suits your budget. Knowing the number of beds available is not necessarily useful: it may be much more important to find out if all students are guaranteed accommodation. 4. Time in lectures, seminars and similar can vary enormously by institution depending on the structure of the course, and the quality of such contact time should be the primary consideration. 5. 
Whilst starting salaries can be a useful measure, they do not give any sense of career trajectory or take account of the voluntary/low paid work that many graduates undertake initially in order to gain valuable experience necessary/advantageous for later career progression. The above list is not exhaustive and there may be other important factors that are relevant to the choices that you are making, but we hope that this will be a useful starting point to help you delve deeper than the face value of the KIS data.
{"url":"http://www.study.cam.ac.uk/undergraduate/courses/maths/","timestamp":"2014-04-17T08:25:48Z","content_type":null,"content_length":"58769","record_id":"<urn:uuid:ecc05d9d-a3eb-4c60-8376-31379aa2aa3a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
sullivan precalculus answers edition 8 26/11/10 10:13 sullivan precalculus answers edition 8 Posted in * pdf Topics Study Guidelines for the 8th edition of Sullivan’s Precalculus Study Guidelines for the 8th edition of Sullivan’s Precalculus … should complete all of these problems, check your answers in the back of the textbook, and get … examples 7-8 and their matched problems … http://www.eng.iastate.edu/orientation/chapter 10 conic sectioncs w practice.pdf * pdf Math 127, D03 (3 credits) PRECALCULUS II Syllabus 1. Use degrees … Text: Sullivan and Sullivan, PRECALCULUS Enhanced with Graphing Utilities, Fifth Edition … withdraw you for nonattendance (TMCC catalog, page A-8) to make room for another student possibly waiting to … may print these homework sets, work out the problems without the computer on, and then enter the answers … http://www.scsr.nevada.edu/~elsi/classes/127d03/f09m127d03syllabusx.pdf * pdf Math 127, E01 (3 credits) Fall 2009 PRECALCULUS II Syllabus Class … 24 Aug 2009 … Text: Access code for MyMathLab based on Sullivan and Sullivan, … with Graphing Utilities, Fifth Edition …. out these sets and then enter the answers later. … Math Tutoring Hours: M-W from 8:30 AM to 7:30 PM … http://www.scsr.nevada.edu/~elsi/online_classes/f09m127e01syllabusx.pdf * pdf 1. For and find the following composite functions and state the … Book: Sullivan: Precalculus: Concepts through Functions, Right Triangle. ENHANCED. Assignment: Practice Test Four. 1. 2. 3. 4. 5. 6. 7. 8. Answers – Page 1 … http://www2.ctc.edu/~lzhang/math154/ Practice Test Four.pdf * pdf Precalculus,Second Edition, is designed and written to help … Precalculus is not simply a condensed version of my Algebra and Trigonometry … equations (Section P.8) and inequalities (Section P.9). …. for preparing the answer section and serving as accuracy checker; Jim Sullivan, … ftp://ftp.prenhall.com/pub/esm/sample_chapters/mathematics/blitzerpre/pdf/preface.pdf * pdf M305G – Precalculus Course text: Precalculus, Custom Edition for the University of Texas at Austin (from … numbered problems (whose answers are provided in the text) and … Sullivan’s book is very nicely laid out as math textbooks go (and I tend to be …. 8. If you needed to, could you make it to my office hours from 1 PM to 2 PM … http://www.ma.utexas.edu/users/cpatters/fdh.pdf * pdf Math 126, D06 (3 credits) PRECALCULUS I Syllabus 22 Aug 2009 … Text: Sullivan and Sullivan, PRECALCULUS Enhanced with Graphing Utilities, Fifth Edition. With MyMathLab … may print these homework sets, work out the problems without the computer on, and then enter the answers into the computer later on. … Math Tutoring Hours: M-W from 8:30 AM to 7:30 PM … http://www.scs.unr.edu/~elsi/classes/126d06/f09m126d06syllabusx.pdf * pdf MATH 135: Pre-Calculus MATH 135: Pre-Calculus. Mon, Wed: 6:30pm-8:45pm. Instructor: Dr. Lemee Nakamura. Section 1098. Office: 3621 … Required Text: Sullivan, “PRECALCULUS, Enhanced with Graphing Utilitiesâ€,. 4th edition, Prentice Hall (2006) ISBN 0-13-149092-3 …. the MLC are available to answer homework questions that you may have. … http://www.miracosta.edu/home/lnakamura/Math 135 Course Syllabus-Sullivan 4th-F09.pdf * pdf SYLLABUS for Math 135, Precalculus, Sec. 1221, Summer 2009 Time … Required Text: Precalculus – Enhanced with Graphing Utilities, Fifth Edition,. Sullivan, Sullivan, 2009, ISBN 978-0-13-601578-9. Bring your text to each class meeting. 
… copy from someone else’s test/quiz, or who change answers on a test after it has been … approximately 8 tests, approximately one each week. … http://www.miracosta.edu/home/jtowers/summer09/m135_syl_smr09.pdf * pdf 1 PRE-CALCULUS ALGEBRA MAC 1140.B001 Summer B, 2008 Monday through Thursday 8:00-9:50 AM, BA 0119. Text Book: Precalculus by Michael Sullivan, Pearson-Prentice Hall, and 8th. Edition. Instructor: Dr. Ram N. Mohapatra … If you fail to write your name in your answer sheet you may not … http://www.math.ucf.edu/syllabi/Summer2008/mac1140.B01.pdf * pdf MAC 1140 Pre-Calculus Algebra Spring 2009 Textbook: Precalculus, by M. Sullivan, custom edition for UCF (with MyMathLab (MML) access …. Email communication: I try to answer students emails within 24 hours. … 8. Food and Drinks. No food or drinks may be brought into the Lab. … http://www.math.ucf.edu/syllabi/Spring2009/mac1140.06.pdf * pdf Math 1051 Precalculus I Syllabus for Spring 2009 Textbook: Sullivan, Michael, Precalculus, eighth edition, Prentice Hall, 2008. This comes …. “communicate†to us what you know about math, not just the answers, …. Walk-in hours for urgent student needs are Monday through Friday 8 … http://www.tc.umn.edu/~droberts/S09/1051S09/5 Syllabus S09.pdf * pdf Math 1051 Precalculus I Syllabus for Spring 2010 Textbook: Sullivan, Michael, Precalculus, eighth edition, Prentice Hall, 2008. … be similar to those that will appear on the exams, and answer any …. open M-F from 8 a.m.-6 p.m. & on Saturdays from Noon-5 p.m. A list of the free … http://www.tc.umn.edu/~droberts/Spring2010/1051S10/1051 Syllabus S10.pdf * pdf PRE-CALCULUS Honors Teacher: Ms. Botelho, Room D-1 2009 – 2010 … Precalculus: Enhanced with Graphing Utilities, 4th ed, Sullivan and Sullivan, … assignments with just answers on them. They will be returned to you so you can redo … Pre-Calculus. Period 8. Free. I am available both before and after … http://www.palmahs.org/teachers/botelho/Pre Calculus Honors/Pre-Calculus (H) first day handout.pdf * pdf PreCalculus Magnet A – (2008 – 2009) Textbook: Precalculus Enhanced With Graphing Utilities – Fourth Edition. By Sullivan Sullivan. Teacher: Goretti Nguyen. School phone number: 713-741-2410. Email:gnguyen@houstonisd.org Conference Time: 8:35 a.m. -10:00 a.m. (A Days) … answers. Process steps must be shown for each problem. … http://hs.houstonisd.org/DeBakeyhs/syllabi/ * pdf Syllabus MAC 1140-004 Precalculus Algebra – 3cr. – Fall’09 with … 4 Sep 2009 … Topics by section of Text: PRECALCULUS – 8th Ed – by M. Sullivan … 8. 9. W. 2.6. Applications and extensions ….. Answers are at the back of the text. Detailed solutions are on Blackboard. The advantage of doing them … http://www.math.fau.edu/syllabi/Syllabi09Fall/MAC1140General09F.pdf * pdf Syllabus MAC 1114-005 Trigonometry – 3cr. – Fall ’09 with … includes an electronic version of the text: PRECALCULUS – by M. Sullivan, Prentice Hall 8th Ed. …. 8. 2. M. 9.2 Simple polar equations and their graphs; …. Answers are at the back of the text. Detailed solutions are on Blackboard. 
… http://www.math.fau.edu/syllabi/Syllabi09Fall/MAC1114General09F.pdf * pdf Pre-Calculus Algebra and Trigonometry Syllabus  Summer 2009 Text: Algebra and Trigonometry, by Sullivan, 8th edition … number of complete solutions an assignment with just the answers to the problems will not receive full … You may bring one 8 ½ x 11 sheet of notes (with handwriting on both … http://www.mathtoearth.org/math4notes/syllabus.pdf * pdf PRECALCULUS Enhanced with Graphing Utilities, 5e, Sullivan, Sullivan, Pearson … Falsifications, including forging signatures, altering answers after they have … 8. solve quadratic equations by factoring, square roots, completing the … http://www.hpregional.org/hp_info/distadmin/curriculum/Mathematics/Honors Pre-Calculus Course Outline 2008.pdf * pdf MA 111 – PRECALCULUS Internet – Section 602 Summer II 2009 North … 1 Jul 2009 … Precalculus: Concepts Through Functions, A Right Triangle Approach to Trigonometry 1st edition, by. Sullivan and Sullivan, Prentice Hall … http://delta.ncsu.edu/apps/coursedetail/ * pdf Math Syllabi for Geometry Algebra II Precalculus AP Calculus AB AP … 8. I have read this course syllabus and will support the teacher in educating my child: ….. notable in that there are 6 points for a correct answer, 1.5? points for a blank …… Sullivan, Michael. Precalculus. Seventh Edition. … http://www.andrews.edu/~calkins/math/syllall.pdf * pdf MAC 1147 Pre-Calculus Alg/Trig – 51264 Algebra and Trigonometry, 7th Edition by Sullivan (2005) … We’ve got the answer to your studying needs! With SMARTHINKING, you can …. 8. Determine which functions are one-to-one; find the inverse of those functions which are one- … http://www.hccfl.edu/facultyinfo/fprescott/files/CEABDD5756A649C69049BBE75E26DAFF.pdf * pdf Math 2412: Pre-Calculus Algebra and Trigonometry, 8 th. Edition by Sullivan (ISBN 9780132329033) … Even though the homework answers are submitted online, all students are … http://www.hccfl.edu/facultyinfo/ssippel/files/ * pdf Prentice Hall Precalculus Enhanced with Graphing Utilities, 5th Edition (Sullivan) İ 2009. Correlated to: …. Standard 8 Data Analysis. PC.8.1 Use linear models using the median fit … Use estimation to decide whether answers are reasonable. … http://mathforindiana.com/media/pdf/apcorrelations/Precalculus_EGU_5th_Ed_2009.pdf * pdf Math 1304 Pre-Calculus Fall 2009 Textbook: Precalculus–Custom Edition for Baylor University by Michael Sullivan, John Wiley & Sons Publisher … Math Lab: is in SR 326, hours are Mon – Thurs: 3:00 to 5:00 pm & 6:00 to 8:00 pm. Friday: 3:00 to 5:00 pm. … answers. Please note that there will be no make-up exams or make-up quizzes, no exceptions! … http://bearspace.baylor.edu/Brittany_Noble/www/1304syllabus_fa09.pdf * pdf The Learning Center Resource Center Textbooks and Videos Answer Book, Basic Technical Mathematics and Basic Technical Mathematics with …. 8. Precalculus Functions and Graphs: A Graphing Approach,. 2 nd. Edition by …. edition by Sullivan/Sullivan III. 5. Pre-Calculus: Graphs & Models: A … http://www.essex.edu/learningcenter/pdfs/resources2006.pdf * pdf Southern Polytechnic State University Math 1113 — Precalculus is not merely to obtain an answer but also to extend and cultivate the ability to think independently and ….. from Algebra & Trigonometry, 8th edition by Michael Sullivan. … 618 NC: 1-8, 9-29 odd, 34-44 even, 57-65 odd C: 45-53 odd. 
… http://dradler.org/spsu/Math1113/2009.01.Math1113.Syllabus.pdf * pdf Florida State College at Jacksonville MAC 1140 … College Algebra Enhanced with Graphing Utilities, 5th edition, by Sullivan and Sullivan … Suggested calculators are TI-83, TI-83 Plus, TI-83 Silver Edition, TI-84, …. classmates can benefit from the answers; email your instructor regarding …. Precalculus Algebra. 308048. Page 8 of 15. CALENDAR OF ACTIVITIES … http://www1.fccj.cc.fl.us/jbroussa/MAC1140_308048_Broussard.pdf * pdf COURSE NAME: Pre-Calculus COURSE NUMBER: MAT132 CREDIT HOURS: 5 … 8. Analyze the graph of a function* to answer questions about the … MATERIALS REQUIRED: Algebra and Trigonometry, 8th Edition, Michael Sullivan; TI- … http://www.regents.state.oh.us/ articulation_transfer/AT/OAN/Math – PreCalc and Stats/btc/BLTC-OMT002-MAT132-(1of1)-Ver1syllabus.pdf * pdf Course Material Submission Form OAN Match Definition Form by XNC Match – Related articles http://www.regents.state.oh.us/articulation_transfer/AT/OAN/Math – PreCalc and Stats/mtc/MRTC-OMT001-MTH1200A-_1 of 1_-Ver1.pdf * pdf MAC 1140 PRE-CALCULUS ALGEBRA. Reference #525020 … Algebra & Trigonometry, 8th edition, by Michael Sullivan, Prentice Hall publishers. … correct answers are certainly necessary to receive full credit, most of the credit awarded for each test …. 8. 6.6. Logarithmic and exponential equations (1–60, 75–86) … http://faculty.mdc.edu/dorr/Syllabi/ * pdf MA 1108 Precalc I/Science Precalculus, Graphing, Data, and Analysis by Michael Sullivan and Michael … Edition is recommended although a TI-82 is also acceptable. … the problems on the Algebra I and Algebra II Final Exam Study Guides and check your answers. … 8. Determine an appropriate WINDOW for real-life applications which involve … http://www.middlesex.mass.edu/Online/course_descriptions/MA 1108 Syllabus.pdf * pdf 2009 Spring Precalculus 1093 Syllabus … “official†book for the course is the 8th edition of Sullivan’s. “Trigonometry: A Unit Circle Approach†whose ISBN is 10:0-32158453-8. … the ParScore forms that are pink and have 50 answers on each side. ParScores can be … Material Covered From Sullivan’s Trigonometry: A Unit Circle Approach, 8th Edition … http://www.cowboycapitol.com/ Welcome/Welcome_To_Precalculus_files/2009 Spring Precalculus 1093 Syllabus.pdf * pdf solutions manual International Financial Management – With Map … 1 Jul 2009 … Accounting 8th edition by horngren test bank and solution manual …. Consumer Behavior, 8/E Michael R. Solomon test bank …. Sullivan instructor manual. International Business, 12/E John Daniels Lee Radebaugh Daniel …. Precalculus 4e blitzer. Prentice Hall − Solutions Manual; … http://sci.tech-archive.net/pdf/Archive/sci.physics/2009-07/msg00047.pdf * pdf Fundamental Accounting Principles, 18/e John J. Wild Barbara … 29 Jun 2009 … Accounting Information Systems 7E Edition Ulric J. Gelinas, Richard …. Consumer Behavior, 8/E Michael R. Solomon test bank …. Sullivan instructor manual. International Business, 12/E John Daniels Lee Radebaugh Daniel …. Precalculus 4e blitzer. Prentice Hall − Solutions Manual; … http://sci.tech-archive.net/pdf/Archive/sci.physics/2009-06/msg01353.pdf * pdf Math 1151-20 Precalculus II Syllabus Fall 2007. Instructor … Math 1151-20 Precalculus II Syllabus Fall 2007. (last updated 11/08/2007) … Textbook: M.Sullivan, Precalculus, Prentice Hall, Seventh Edition. 
Prerequisites: 31 … to communicate to us what you know about math, not just the answers, so your work must … 1–8, 10, 11, 14, 17, 19, 25, 33, 39, 53, 57, 71, 83, 84 … http://www.math.umn.edu/~rejto/1151/1151_07f.pdf * pdf Math-1051: Precalculus I – Fall 2005; Lecture: 10:10 AM-11:00 AM … 6 Sep 2005 … A correct answer unsupported by an explanation will receive little credit. Text Book: Sullivan’s Precalculus, 7th edition. …. 3, 5, 8, 10, 16, 23, 24, 26, 28, 38, 42,56,57, 58, 65,72,78, 82, 91,94,96, 98, 100, 104. … http://www.math.umn.edu/~rejto/1051/1051_05F.pdf * pdf North Carolina Textbook Adoption School Price List 2003 Answer Transparencies for Checking Homework. Larson, et al. McDougal Littell … Prepared by Textbook Adoption Services and Textbook Warehouse 4/8/2004 ….. PRECALCULUS Pupil’s Edition with Learning Tools CD-ROM. Larson, et al. McDougal Littell … Precalculus Enhanced with Graphing Utilities. Sullivan & Sullivan … http://www.ncpublicschools.org/docs/textbook/adopted/math-9-12.pdf * pdf MATH 245 (31/32) — College Math II (Spring, 2007) Textbook: Algebra & Trigonometry (8th edition), by Michael Sullivan – … Homework: All sections of the Precalculus course at WIT are now required to use … Facilitated Study Group will be held on Thursdays from 5 to 8 pm headed by Prof. Amanda … Academic Honesty: Test answers unsupported by detailed work are … http://myweb.wit.edu/ams/syllabi/Spring09/Spring09_Lang_MATH250.pdf * pdf Dr. Hattaway/Math 510 Sullivan, Algebra and Trigonometry (8th Edition) with Mymathlab Package. ISBN: 0136150624 … To understand algebraic concepts on a precalculus level …. select a security question and type in your answer. 8. … http://myweb.wit.edu/ams/syllabi/Spring09/Spring2009_Hattaway_MATH250-03,04.pdf * pdf 2008 Approved Listing – Recommended High School Math Note: Acceptable to use with grade 8 students. Key Features: Students become better … Algebra 1: Concepts and Skills, Answer Transparencies for Checking Homework. 9780618078837 ….. Precalculus with Limits Teacher Edition. 9780618753130 … Graphing Utilities Student Edition. (HS Binding). Sullivan, Sullivan … http://www.sde.idaho.gov/site/curricular_materials/ag_docs/math/ * pdf MAT 101: Introduction to Calculus & Analytic Geometry Syllabus … 1 – Custom Edition for Princeton, Pearson, 2006. 2. A. Banner (2005) Lecture Notes. … The first few classes will provide an accelerated review of notions from pre-calculus and … Answers should not just be a string of formulas. 4.2 Quizzes … MAT 101, Fall 2006  Blair D. Sullivan. 5. 8 Course Outline … http://www.math.princeton.edu/~bdowling/mat101/syllabus.pdf * pdf 299 Market Street Phone: (800) 822-1080 PeoplesCollegePrep.com … Little Books of Big Ideasྠfor calculus and precalculus address specific topics or large important …. Brief Calculus. An Applied Approach. Eighth Edition. By Michael Sullivan …. 8. To request a sample or buy online, visit PeoplesCollegePrep.com …. answers to help make your lessons stick. CliffsStudySolver … http://www.peoplescollegeprep.com/documents/pdf/ * pdf SPECIAL EDITION: “A Year in Review†pre-calculus, student development, principals of humanities and English … Dr. Monty Sullivan, VCCS. Vice Chancellor, gave closing remarks. Five of the twenty-two graduates …. Question-and-Answer Session. 8:30 – 9:30 p.m.. Sponsors: … http://www.germanna.edu/publications/forms/exchange/exchange_8-21-07.pdf * pdf Vanden Inst Materials A First Course in Calculus, 3rd edition, 1983 Lynch, Ginn Publishing. 8/25/1987. Fundamental Math … 8/25/1987. 
Pre-calculus with Limits: A Graphing Approach; … Economic: Principles in Action O’Sullivan/Sheffrin;. Prentice Hall; 2001. 2/15/1991 … key, lab manual, lab manual answer key, student text on … http://www.travisusd.k12.ca.us/TravisUSD/tusd/Administration/Departments/ Educational_Services/News/Textbooks for 9-12.pdf * pdf Course Syllabus Elementary Functions 22M:009.081 γεà‰ έà„àηà„οà‚ ηδε à‚ … Textbook: Precalculus, 8th edition, by Michael Sullivan. A check with the publisher and local book- … Your answers don’t always have to be correct, but should at least be … Exam 3 (Chapters 5-7 or 8) – Mid to late April. The … http://www.math.uiowa.edu/~cbmckinn/teaching/009spring08/downloads/files/syllabus.pdf * pdf MATD 0370, ELEMENTARY ALGEBRA (Computer Mediated), Synonym 44060 … Edition, Sullivan & Struve; Pearson. (ISBN 0-321-56752-8) ….. A student who completed some precalculus, elementary analysis, or trigonometry in high … http://www.austincc.edu/aasutton/ * pdf Math 120 Fall 2009 Instructor: Terri Contenza Office: Yost 319 … Textbook: PreCalculus, 8th edition, by Sullivan. … of problems: your textbook (submit your answers online for immediate feedback), online practice … The cumulative final exam will be given Tuesday, December 8, from 4-7 pm. … http://filer.case.edu/txc114/120/120syl.pdf * pdf MATH 152 Section S10N01 Precalculus – Custom Edition for VIU by Michael Sullivan. Course Outline: … Chapter 8: Applications of Trigonometric Functions (8.1-8.4) … Answers to odd-numbered problems are in the back of the text. Quizzes: … http://web.viu.ca/pughg/Spring2010/math152S10N01/math152S10N01intro.pdf * pdf Trigonometry Grades 11-12 2008 Rely on reasoning, rather than answer keys, teachers, or peers, …. Find exact values to integral multiples of the measures in 8(pi/4,pi/3,pi/6) …. PRECALCULUS, Paul Sullivan, Edition Six, Prentiss Hall Publishing, Upper Saddle … http://www.hopatcongschools.org/pdf/CURR08/Trigonometry11-12.pdf load in : 20 queries. 2.102
{"url":"http://gelepedia.com/books/sullivan-precalculus-answers-edition-8.html","timestamp":"2014-04-18T20:43:19Z","content_type":null,"content_length":"36472","record_id":"<urn:uuid:7b85f0d0-8f0c-4ae1-a75f-1d0e9d7338fc>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Derivatives to Using NFL Derivatives to Profit in an Efficient Market There are many ways to wager American football including point spreads, moneylines and totals as well as first half betting, betting by the quarter, team totals, alternate point spreads, teasers, pleasers, and a plethora of propositions. One of the most powerful concepts to profiting from all this is a rarely discussed betting strategy, called derivative betting. In short a betting line derived from another is called a derivative. The most common example is first half odds; these are derived from full game odds. In this article I’ll cover the handicapping behind a simple derivative and explain how this derivative can be wagered profitably. Understanding the Betting Market A prerequisite to successful derivative betting is an understanding of how the overall betting market works. For NFL football, oddsmakers handicap games well in advanced and sometime between Sunday late afternoon and Tuesday early AM point spreads, totals and money lines are posted for all games. When the lines are first opened they are considered rough and have rather low maximum betting limits. The initial bets placed on them are taken for the purpose of sharpening the line. As time passes the sportsbooks keep moving their lines and increasing limits. Their goal is to find a point spread and price for both teams that large smart money has no interest in betting. Once achieved, the odds are efficient enough that sportsbooks are willing to accept large bets. Around this time they begin opening derivatives for betting such as half times, quarters, team totals, and various props. A key point to understand is come game time, if the consensus no-vig point spread closes as Jets +7 +100 / Patriots -7 +100, the betting market gave each team a 50% chance of covering. For reasons covered in my article on fade the public the betting market is dominated by professional sports bettors, so there is a pretty good chance the betting market prediction of 50% is very close to the true probability. Knowing this, even a recreational bettor can profit by following line movement and betting derivatives. Which Team Will Score First The easiest NFL derivative to handicap is the proposition which team will score first. This particular prop is a derivative of the games first half point spread and total. The best method to handicap this prop is to look to sharp bookmakers and see what their first half lines are. Sharp betting sites are ones that open lines first, allow for the largest betting limits, and/or operate on reduced juice. A good line up to use for this is Pinnacle Sports, Bookmaker.eu and 5Dimes. To give an example Week 13 of the 2011 NFL season Detroit Lions and New Orleans Saints had the following first half time lines: • Pinnacle Sports: Saints -6 -109 / Lions +6 -101 (o27.5 -115 u 27.5 +102) • 5Dimes: Saints -6 -103 / Lions +6 -108 | (o28 -105 / u28 -115) • Bookmaker: Saints -6 -110 / Lions +6 -110 | (o28 -110 / u28 -110) To explain a bit further: Bookmaker is a high limit full juice book so their lines make sense. 5Dimes and Pinnacle are reduced juice and differ a little on the point spread with one siding +6 is more likely and the other that -6 is more likely. For reason Pinnacle takes the largest bets on this market I’ll give Pinnacle slight credit and say the market price here is Saints winning by 5.9. For the over/under I can see 28 too high and 27.5 too low. It looks safe to predict this one at Saints by 5.9 with 27.8 points scored. 
So my next step is to come up with a predicted score that fits. I do this by taking 27.8-5.9= 21.9. I then give half these points to each side and then give the Saints the 5.9 to come up with a predicted first half score of: • Saints: 16.85 • Lions 10.95 To confirm we see these total 27.8 and have the Saints winning by 5.9. From here I just need to use the magic equation: -100*(Favorite Score/Underdog Score), so in this case -100*(16.85/10.95)= -153.9. This tells me the fair prices on the proposition which team will score first is Saints -153.9 / Lions +153.9. How to use this information for wagering on football games is simple. Let’s say you’re monitoring point spread at oddsportal.com, Don Best or a similar service and all of all of a sudden the lines at every betting site are moving in a hurry for a given game. There are many reasons this could be the case, perhaps a key player is out, or maybe the sharp bettors were waiting until later in the week to place their bets. Whatever the specifics might be, there is always going to be a valid reason for the line movement. If you can act fast you can take advantage of the move at recreational betting sites such as Bovada or BetOnline which are much slower to move their lines. If you find yourself too late, the which team to score first prop is a strongly correlated derivative you can turn to. So for example if you watch the Patriots go from -7 to -7.5 to -8.5 in a hurry and are unable to bet -7. Here take a look at props using the handicapping methods I just gave you, you’ll very often find Patriots to score first is +EV, because the line is still based on the old line of Patriots -7 for the game. What I shared in this article is enough to help you make extra profits this season via NFL derivatives. Similar derivatives exist for all sports, but due to the nature of the betting market, don’t expect anyone is going to give you full handicapping advice on all derivatives. The good news is you now have the full keys to one of them, and in time you’ll get sharper and start to find many additional derivatives you can beat. We at OnlineBetting.com wish you the best of luck with that and will hint Bovada.lv often has the most value on NFL props.
{"url":"http://www.onlinebetting.com/football/nfl-bet-types/derivatives/","timestamp":"2014-04-20T00:37:46Z","content_type":null,"content_length":"12751","record_id":"<urn:uuid:07c75190-b4aa-4fde-8292-8e6a882b246a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Stoke's Theorem April 11th 2010, 06:02 AM #1 Apr 2010 Stoke's Theorem I've used Stokes's theorem to show that for $\bold{F}(\bold{x})$ a vector field, $\oint_C d\bold{x} \times \bold{F} = \int_S (d\bold{S} \times abla) \times \bold{F}$ where the curve C bounds the open surface S. I'm then asked to "Verify this result when C is the unit square in the xy plane with opposite vertices at (0,0,0) and (1,1,0) and $\bold{F}(\bold{x}) = \bold{x}$". I do not know how to go about evaluating either of these integrals. Any advice would be appreciated. Actually, I can evaluate the LHS by writing it as $\oint_C \bold{x}'(t) \times \bold{x}(t) dt$. I don't know how to do the other, though. Actually, I'm an idiot. I can do it. Thanks anyway April 11th 2010, 06:07 AM #2 Apr 2010 April 11th 2010, 06:19 AM #3 Apr 2010
{"url":"http://mathhelpforum.com/calculus/138468-stoke-s-theorem.html","timestamp":"2014-04-19T04:58:06Z","content_type":null,"content_length":"33543","record_id":"<urn:uuid:819dd7ef-1326-4a65-bbeb-7120c564400d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible pathological properties of positive definite matrix MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Suppose $A$ is a positive definite matrix such that $$I \preceq A \preceq 1.01I.$$ Is it possible that $$\sum_{i=1}^n A_{1i}$$ can be arbitrarily large? Thanks, Jack up vote 2 down vote favorite linear-algebra na.numerical-analysis add comment Suppose $A$ is a positive definite matrix such that $$I \preceq A \preceq 1.01I.$$ Is it possible that $$\sum_{i=1}^n A_{1i}$$ can be arbitrarily large? Outline of one possible answer (but am doing this in a rush so have not written anything down to check). Consider the matrix $\Gamma$ which has 0 in top-left corner, 1 in rest of top row and rest of 1st column, and 0 in all other entries. (Note to hovering LaTeX hawks: please don't feel the need to enclose all those numbers in dollars.) up vote 4 down vote accepted Clearly $\Gamma$ is hermitian and a hasty calculation seems to show its eigenvalues are $\pm\sqrt{n-1}$ and 0. Take $A= I + 0.005 (I+ (n-1)^{-1/2}\Gamma )$. This ought to satisfy your sandwich condition but it is clear that the sum of entries in 1st column will be ${\rm O}(\sqrt{n}$). add comment Outline of one possible answer (but am doing this in a rush so have not written anything down to check). Consider the matrix $\Gamma$ which has 0 in top-left corner, 1 in rest of top row and rest of 1st column, and 0 in all other entries. (Note to hovering LaTeX hawks: please don't feel the need to enclose all those numbers in dollars.) Clearly $\Gamma$ is hermitian and a hasty calculation seems to show its eigenvalues are $\pm\sqrt{n-1}$ and 0. Take $A= I + 0.005 (I+ (n-1)^{-1/2}\Gamma )$. This ought to satisfy your sandwich condition but it is clear that the sum of entries in 1st column will be ${\rm O}(\sqrt{n}$).
{"url":"http://mathoverflow.net/questions/136836/possible-pathological-properties-of-positive-definite-matrix","timestamp":"2014-04-19T12:03:30Z","content_type":null,"content_length":"53791","record_id":"<urn:uuid:73a09d86-48c5-4b97-9abb-4f2c8ec3cc58>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone double check these derivatives? March 6th 2010, 12:16 PM Can someone double check these derivatives? A) y = log2 [(e^(-x))*cos(pi x)] My solution was -(1+tan(pi x))/ln2 B) arctan(xy) = 1 + (x^2)y My solution was y(2x+2x^3y^2-1)/x(1-x-x^3y^2) They are kind of complicated and I just want to make sure I have done them right. Did anyone else get these same answers or something different? Thank you! March 6th 2010, 12:37 PM Just to clarify, the first question is log with a subscript 2...I couldn't figure out how to write it like that on here :) March 6th 2010, 01:09 PM
{"url":"http://mathhelpforum.com/calculus/132340-can-someone-double-check-these-derivatives-print.html","timestamp":"2014-04-19T08:06:03Z","content_type":null,"content_length":"6541","record_id":"<urn:uuid:85f09dc9-b1aa-4d67-a24f-b8fa3721596a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Vertical Motion Model Re: Vertical Motion Model Sounds like your doing calculus. Vertical motion model is just that, you want a mathimatical model of where an object will be, how fast it will be traveling, and how fast it will be accelerating, when it is falling, or possibly being propelled (although that is quite complex). What exactly do you want to know? "In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=26292","timestamp":"2014-04-20T18:33:40Z","content_type":null,"content_length":"11401","record_id":"<urn:uuid:cd088c76-e6ee-4f0b-9285-ce1329d22637>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration problem, stuck, not sure July 10th 2011, 10:58 AM #1 Jan 2009 Ontario Canada Integration problem, stuck, not sure Covering the topic of Indefinite Integral: I am doing a question, which my book does not give the answer and hoping someone can look at it and tell me if I did this right. Integre/sign then ( x^2 + 4x + 4 ) ^ 1/3 dx So I worked out 1/3 to the numbers in the brackets. So I have Int Sign ( x^7/3 + 4 x^4/3 + 4^1/3) int sign x^7/3 dx + 4 intsign x^4/3 dx + 4^1/3 intsign dx So my answer is 3/10 x^10/2 + 12/7 x^7/3 + 4^1/3 x + C. is this correct? or not, thanks. Re: Integration problem, stuck, not sure Hi Brady, So I worked out 1/3 to the numbers in the brackets. So I have Int Sign ( x^7/3 + 4 x^4/3 + 4^1/3) This is wrong as a polynomial cannot be broken down the way you have assumed it to. This is how you can go about doing it $x^2 + 4x + 4 = (x+2)^2$ so you are expected to evaluate $\int{((x+2)^2})^{\frac{1}{3}}dx} \Rightarrow \int{(x+2)^{\frac{2}{3}} dx}$ substitute $t= x+2$ and finish you integral. Re: Integration problem, stuck, not sure So is the answer 3/5(x+2)^5/3 + C ?? Thanks for the help, and correcting me as well. Re: Integration problem, stuck, not sure Re: Integration problem, stuck, not sure Oh I forgot about that, ya it's correct then. July 10th 2011, 11:19 AM #2 July 10th 2011, 02:44 PM #3 Jan 2009 Ontario Canada July 10th 2011, 04:53 PM #4 July 10th 2011, 05:15 PM #5 Jan 2009 Ontario Canada
{"url":"http://mathhelpforum.com/calculus/184380-integration-problem-stuck-not-sure.html","timestamp":"2014-04-17T04:39:43Z","content_type":null,"content_length":"41714","record_id":"<urn:uuid:f4fe6a17-4e41-4506-95ee-6d4268bfa0da>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
What does this particular geometric quotient locally look like? up vote 1 down vote favorite Let $k$ be a field and consider the algebraic group $GL_n$ over $Spec(k)$. It has as a closed (but not normal) algebraic subgroup the group $M$ of monomial matrices, i.e. matrices having exactly one nonzero entry in each row and each column (this is the normalizer $T\rtimes\Sigma_n$ of the diagonal matrices $T$). The geometric quotient $GL_n/M$ of the canonical action of $M$ on $GL_n$ exists (if I checked everything correctly) and it is the affine scheme associated to the ring of invariants $R$. (Intuitively, $GL_n/M$ should be some open subset of $(\mathbb{P}^{n-1})^n$ of $n$ lines spanning the whole space with permutations identified.) This ring of invariants $R$ is finitely generated as a $k$-algebra: $M$ is reductive, by Mumford’s Conjecture geometrically reductive and hence finitely generated by Nagata’s Theorem (it's possibly easier to see this directly in this example). I do not think, that I need an infinite field somewhere. Is there a Zariski open covering of $GL_n/M$ by nice affine schemes $Spec(k[x_1,\ldots,x_m]/I])$ which I can explicitly write down? invariant-theory geometric-invariant-theor ag.algebraic-geometry It seems interesting that $GL_n/M$ embeds in the Grassmannian of $n$-planes in ${\mathfrak gl}_n$, as the conjugation-orbit of the point corresponding to the diagonal matrices. – Allen Knutson Nov 8 '12 at 3:08 I don't understand what that means. What point corresponds to the diagonal matrices? – Will Sawin Nov 8 '12 at 4:55 The space of diagonal matrices is an $n$-dimensional subspace of ${\mathfrak gl}_n$. Hence it defines a point of the corresponding Grassmannian. $GL_n$ acts on that Grassmannian. – Allen Knutson Nov 9 '12 at 3:47 add comment 1 Answer active oldest votes First we compute $GL_n$ mod the group of diagonal matrices. $GL_n$ embeds into $(\mathbb A^n-0)^n$, so you are correct that the quotient by $(\mathbb G_m)^n$ is $(\mathbb P^{n-1})^n$. The only points we have to remove are those with determinant $0$, a hypersurface. The coordinate ring is thus generated by functions which are a product of one coefficient in each row, divided by the determinant. There are $n^n$ of these. They satisfy one relation coming from the fact that the determinant over the determinant is $1$, and the rest of the relations are toric: one product of generators is equal to another product of generators because up vote 2 each coefficient shows up the same number of times. down vote Computing the quotient of this ring by the symmetric group action is more subtle. It is easy to find a lot of elements: just take any product of generators and add together all the $S^n$ conjugates. I don't think it's too hard to find a basis of these, but I don't know how to find generators and relations. add comment Not the answer you're looking for? Browse other questions tagged invariant-theory geometric-invariant-theor ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/111751/what-does-this-particular-geometric-quotient-locally-look-like/111759","timestamp":"2014-04-16T13:27:44Z","content_type":null,"content_length":"55710","record_id":"<urn:uuid:8f8c2f21-3709-4ee7-8a2e-d28652a07ea3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Avondale Estates Statistics Tutor Find an Avondale Estates Statistics Tutor ...He was accepted to Ohio State for graduate school. I scored in the 99th percentile on the GMAT and taught GMAT prep courses for three years. Much of the GMAT overlaps with the GRE. 28 Subjects: including statistics, calculus, GRE, physics ...I firmly believe that learning happens best when there is an equal relationship between teachers and students. I treat my students with respect and treat them as knowledgeable in their own areas of expertise, valuing their opinions, but not hesitant to challenge them. I am an ardent believer in... 37 Subjects: including statistics, reading, English, ESL/ESOL ...The subjects I tutored in were Calculus 1, 2, and 3, General Biology, Cell Biology, Chemistry 1 and 2 along with Organic Chemistry. College Algebra and Remedial Math were our most tutored subjects. PreCalculus, Trigonometry and Statistics with Probability were among my favorite subjects to teach. 15 Subjects: including statistics, chemistry, calculus, geometry I am Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery. 13 Subjects: including statistics, geometry, SAT math, GRE ...In the summer of 2011 I taught history to 7th graders at Breakthrough Collaborative. Additionally, I taught students ages 4 – 14 in math, reading, writing and science at SCORE! Education centers in 2007-2008. 34 Subjects: including statistics, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/avondale_estates_statistics_tutors.php","timestamp":"2014-04-16T16:07:17Z","content_type":null,"content_length":"24204","record_id":"<urn:uuid:32354fee-0c7e-4b6f-a43a-adccd501bb26>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
60H15 Stochastic partial differential equations [See also 35R60] In traditional portfolio optimization under the threat of a crash the investment horizon or time to maturity is neglected. Developing the so-called crash hedging strategies (which are portfolio strategies which make an investor indifferent to the occurrence of an uncertain (down) jumps of the price of the risky asset) the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of non-linear differential equations which itself are consequences of an equilibrium strategy. Hereby the situation of changing market coefficients after a possible crash is considered for the case of logarithmic utility as well as for the case of general utility functions. A benefit-cost analysis of the crash hedging strategy is done as well as a comparison of the crash hedging strategy with the optimal portfolio strategies given in traditional crash models. Moreover, it will be shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is to model crash hedging strategies in situations where both the number and the height of the crash are uncertain but bounded. Taking the additional information of the probability of a possible crash happening into account leads to the development of the q-quantile crash hedging strategy.
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/12105","timestamp":"2014-04-20T21:07:59Z","content_type":null,"content_length":"17987","record_id":"<urn:uuid:a7b591b1-c29b-483c-9d8a-cf3342be4af2>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
CS Concentration in Math: Middle Education Concentration in Math: Middle Grades Education Middle Grades Education majors choose two academic concentrations from four choices: Language Arts, Mathematics, Science, and Social Studies. Each concentration consists of 24 hours of course work. The 24-hour Mathematics Concentration is a popular choice for at least two reasons: (1) middle grades mathematics teachers are in the greatest demand of the four concentrations and (2) 18 hours of the course work is specifically designed for teachers. These courses focus on multiple ways to solve problems, various types of mathematical representations, common student errors within the particular course topic, and opportunities to use tools (manipulatives, technology, etc.) to investigate mathematics. Of the two MATH courses in the concentration, one of them is the required Foundations Curriculum course. • Middle grades education majors use the 24-hour concentration. □ 24-Hour Concentration □ MATE 1267. Functional Relationships □ MATE 2067. Data and Probability Explorations □ MATE 3067. Algebra and Number Foundations □ MATE 3167. Geometry and Measurement □ MATE 3267. Concepts in Discrete Mathematics □ MATH 1065. College Algebra □ MATE 3367. Mathematical Modeling □ MATH 2119. Elements of Calculus □ Note: Students may use 3 hours of MATH for Foundations Curriculum (FC) credit and for the concentrations. You are almost finished with your Concentration in Middle Grades Education! Please check-in with your advisor in your program area to make sure that you have everything that you need in order to Use this page for the most commonly required Resources and Forms Do you have any further questions or need help locating something? Contact us: Dr. Ron Preston prestonr@ecu.edu Director of Students Department of Mathematics, Science, and Instructional Technology Education East Carolina University Mail Stop 566 Greenville, NC 27858 Phone:252-328-9355
{"url":"http://www.ecu.edu/cs-educ/msite/Math/CSConcentrationMathMiddle.cfm","timestamp":"2014-04-18T18:39:57Z","content_type":null,"content_length":"30402","record_id":"<urn:uuid:ec95d26f-8c57-4365-b0d6-8f1e497031a4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Matheology � 222 Back to the roots Date: Mar 6, 2013 3:54 PM Author: Virgil Subject: Re: Matheology � 222 Back to the roots In article WM <mueckenh@rz.fh-augsburg.de> wrote: > On 5 Mrz., 22:27, Virgil <vir...@ligriv.com> wrote: > > If the list is complete, then never even one line contains al elements, > > and if not complete then not the set of all lines. > > > > > This holds for every FIS of the list > > > > But never for the list itself. > The list is nothing but the union of all lines. The union of all lines > is the same as the sequence of all lines - without its limit. Actually, it is the the union of all lines that will contain any limit and the sequence of all lines which will not contain any limit. That sequence will HAVE a limit, but will not CONTAIN it, as every member of it is a FIS, and no FIS contains all other FISs. At least everywhere outside Wolkenmuekenheim.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8543933","timestamp":"2014-04-19T12:25:34Z","content_type":null,"content_length":"2119","record_id":"<urn:uuid:064eba8d-fd9e-49e3-b0cb-e842ab53045b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Josephus Problem Date: 04/18/2003 at 20:39:50 From: Jeremy Subject: Determining an equation from a series There are x people sitting around a table. Every other person at the table is eliminated until there is only one person left. For instance, at a four-person table, the second and fourth person are eliminated; then going around the table once more, the third person is eliminated. In this case number 1 is the survivor. Here is a chart I made showing x and the surviving person's number: x surviving person's number The pattern shows that the answer increases by 2 along the odd numbers but 'restarts' at one every time x is a power of 2. From this I obtained the equation 2(x-the closest power of two less than or equal to x)+1 = the winner's number. I would like to know if there is a way to rewrite this equation without using any words. Date: 04/19/2003 at 23:07:37 From: Doctor Carbon Subject: Re: Determining an equation from a series Hi Jeremy, This problem is also known as the Josephus problem. A quick search on www.google.com for "Josephus problem" yields hundreds of links describing the original problem along with its history. You will also find relevant entries in the Dr. Math archives: Knights of the Round Table A Circular Massacre Your solution description using words is perfect. The real problem, as you have said, is in turning it into a single precise equation (or expression) for writing the part about "the closest power of two less than or equal to x." Within a longer description like this, I would look for the part that I can write most easily. In this case, "a power of 2" is fairly straightforward, and would look like 2 or 2^n Our goal, therefore, is to make sure that n is a whole number, and that the result of this calculation is less than or equal to x. You pointed out that you needed a mathematical notation, operation or function that shows the closest power of an integer less than or equal to another number. I have changed the requirement slightly and now we are looking for something that gives whole numbers less than or equal to a given number. I don't know if you have heard of it, but there is something called the Floor function. It is defined as the largest integer less than or equal to the number you use. The way you use it is this: "Floor of 2.3 is 2," "Floor of 15/2 is 7." It is written using a special kind of enclosing bracket, and I will try to make it using plain text here: | | | 2.3 | = 2 |_ _| This is the same as saying "Floor(2.3) = 2" or "Floor of 2.3 is 2." In your case, 2^n should equal the power of 2 closest to but less than x, so n could be written using the Floor function on its own. At this point, we can use the general idea that exponents and logarithms are opposites of each other. Therefore, to get a power of 2 close to but less than x, I could use logarithms to base 2 in this | | | Log (x) | |_ 2 _| 2 ^ I hope this helps. The following link to Eric Weisstein's MathWorld entry on the Josephus problem has some variations and historical solutions that you might find interesting. On a more philosophical note, my belief is that all mathematical notations must have evolved from long-winded natural language Write back if you'd like to talk about this some more, or if you have any other questions. - Doctor Carbon, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/62704.html","timestamp":"2014-04-20T21:07:09Z","content_type":null,"content_length":"9327","record_id":"<urn:uuid:20c74d11-285f-4448-852c-c566e31c90b5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
How Many Times? Copyright © University of Cambridge. All rights reserved. Why do this problem? This problem will help consolidate children's understanding of the $24$ hour clock notation. It could also be used to focus on ways of working systematically. Possible approach It would be good to have an interactive digital clock on the whiteboard for the duration of this lesson so that you and the class can refer to it whenever necessary. ( This free version would be suitable, for example. Click on the arrows button to switch to a digital display.) You may want to begin by asking a few oral questions based on the clock before moving on to the problem as it stands. Explain the challenge to the class and ask children to suggest a few examples so that it is clear what is meant by consecutive. You may need to clarify that all the digits in the time need to be consecutive so, for example, 13:45 wouldn't count, as it only has three consecutive digits. Invite pairs of children to begin working on the first part of the problem. They could use mini-whiteboards to keep a record of the times they find. After a short time, draw the group together to share ways of working. Some children may be recording answers as they occur to them, others may have some sort of system - for example starting with the earliest time and working 'upwards'. Discuss the benefit of a systematic approach - it means that we know when we have found all the solutions. Having talked about this, children will be able to apply a system to the other parts of the question. In the plenary, as well as sharing solutions, encourage children to articulate reasons for their findings. Key questions Which digits will be possible? Why? How will you know you've got all the different times? Possible extension Children could also investigate the times which have just three consecutive digits. 5 on the Clock is a problem that requires a similar systematic approach and also involves digital time. Possible support It might be useful for some children to have access to an interactive version of a digital clock themselves, perhaps at a shared computer.
{"url":"http://nrich.maths.org/981/note?nomenu=1","timestamp":"2014-04-18T13:20:19Z","content_type":null,"content_length":"6008","record_id":"<urn:uuid:76d6c47e-b00b-4fef-a181-7dbd0168f536>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
League Chat Thread The first way while a lot harder on you is the most like the NBA. Kind of weird since you have a team though. So expect to see complaints when you're signing people but I really don't care. Will there be RFAs? I haven't made a final decision on that yet, I've seen it done two ways, and I may research to see if I can find a few other ways. The first one I've seen is people would just pm me with all their offers. Player, number of years, starting salary, and percentage increase (8-12 I think, not sure though) and I'd input it to the game. This way everyones bids are confidential only too me. The second way is just a bidding style, every player has their own thread in the free agency forum and you just bid on the player. So everyone knows what's being offered, and if a player isn't offered within 24 hours then he goes to the guy with the highest offer. The problem with that is it tends to force people into overpaying big time. how about on the signing of Free Agents? how does that work? we submit offers and the player will decide? did it ever happen that a player chooses a lower offer(money) from a contender over a higher offer from another team? I updated the index for all you guys who made moves. Kings/Lakers/Blazers to those who have experience with a Sim League : How does the attribute Greed, Loyalty and Play for Winners work? How does Free Agency work? Will a player with an expired contract refuse an extension with your team if he has low loyalty value? Will a Free Agent with a high Play for Winner attribute choose a lower offer from a contender than a higher offer from a worse team? It's kind of like a "invisible" stat I guess. You may lose a guy in FA, but you won't ever know if it was because he had a low loyalty rating or anything. to those who have experience with a Sim League : How does the attribute Greed, Loyalty and Play for Winners work? How does Free Agency work? Will a player with an expired contract refuse an extension with your team if he has low loyalty value? Will a Free Agent with a high Play for Winner attribute choose a lower offer from a contender than a higher offer from a worse team? Ok now I willing to let go Clifford Robinson to shed some salary, youth and/or picks. I can one up you, due to the recent play of Ruben Patterson my starters will be: 21,19,23,25,and 29 I'm kind of disappointed with my team giving up 91 points per game. I guess right now I just have to wait it out, I drafted my team really young. My starters are 22, 25, 29, 24, and 26. The guy I drafted to start at SF for me is also only 24. Anxiously awaiting to see how my team will do! 2nd best Win % in the East! Today after 2pm. Today sometime after 2. I haven't made a final decision on that yet, I've seen it done two ways, and I may research to see if I can find a few other ways. The first one I've seen is people would just pm me with all their offers. Player, number of years, starting salary, and percentage increase (8-12 I think, not sure though) and I'd input it to the game. This way everyones bids are confidential only too me. The second way is just a bidding style, every player has their own thread in the free agency forum and you just bid on the player. So everyone knows what's being offered, and if a player isn't offered within 24 hours then he goes to the guy with the highest offer. The problem with that is it tends to force people into overpaying big time. so the Free Agent himself have no control on where he will sign? its always the highest bidder? like an auction? 
The other league I'm in will have RFA's in our next league, but I don't have any idea how to do it. I much prefer the first way to be honest. What if two teams both offer max contracts? Does the original team get their player back? In the first way, does the game select which contract to accept or you? you will need a bigger inbox for the first one. I also like it, but the second one is not so bad. maybe we can put it into a vote? this too, what if none of those 2 teams is the original team? who gets the player? In the first way, does the game select which contract to accept or you? i think he said its always the highest offer when is the next sim? does anyone know any PER calculator? and do we have enough stats to compute the PER of the players? Calculating PER The Player Efficiency Rating (PER) is a per-minute rating developed by ESPN.com columnist John Hollinger. In John's words, "The PER sums up all a player's positive accomplishments, subtracts the negative accomplishments, and returns a per-minute rating of a player's performance." It appears from his books that John's database only goes back to the 1988-89 season. I decided to expand on John's work and calculate PER for all players since minutes played were first recorded (1951-52). All calculations begin with what I am calling unadjusted PER (uPER). The formula is: uPER = (1 / MP) * [ 3P + (2/3) * AST + (2 - factor * (team_AST / team_FG)) * FG + (FT *0.5 * (1 + (1 - (team_AST / team_FG)) + (2/3) * (team_AST / team_FG))) - VOP * TOV - VOP * DRB% * (FGA - FG) - VOP * 0.44 * (0.44 + (0.56 * DRB%)) * (FTA - FT) + VOP * (1 - DRB%) * (TRB - ORB) + VOP * DRB% * ORB + VOP * STL + VOP * DRB% * BLK - PF * ((lg_FT / lg_PF) - 0.44 * (lg_FTA / lg_PF) * VOP) ] Most of the terms in the formula above should be clear, but let me define the less obvious ones: factor = (2 / 3) - (0.5 * (lg_AST / lg_FG)) / (2 * (lg_FG / lg_FT)) VOP = lg_PTS / (lg_FGA - lg_ORB + lg_TOV + 0.44 * lg_FTA) DRB% = (lg_TRB - lg_ORB) / lg_TRB I am not going to go into details about what each component of the PER is measuring; that's why John writes and sells books. Problems arise for seasons prior to 1979-80: •1979-80 — debut of 3-point shot in NBA •1977-78 — player turnovers first recorded in NBA •1973-74 — player offensive rebounds, steals, and blocked shots first recorded in NBA The calcuation of uPER obviously depends on these statistics, so here are my solutions for years when the data are missing: •Zero out three-point field goals, turnovers, blocked shots, and steals. •Set the league value of possession (VOP) equal to 1. •Set the defensive rebound percentage (DRB%) equal to 0.7. •Set player offensive rebounds (ORB) equal to 0.3 * TRB. Some of these solutions may not be elegant, but I think they are reasonable. After uPER is calculated, an adjustment must be made for the team's pace. 
The pace adjustment is: pace adjustment = lg_Pace / team_Pace League and team pace factors cannot be computed for seasons prior to 1973-74, so I estimate the above using: estimated pace adjustment = 2 * lg_PPG / (team_PPG + opp_PPG) To give you an idea of the accuracy of these estimates, here are the actual pace adjustments and the estimated pace adjustments for teams from the Eastern Conference in 2002-03: Tm Act Est ATL 1.00 0.99 BOS 1.00 1.02 CHI 0.97 0.98 CLE 0.97 0.99 DET 1.05 1.06 IND 0.99 1.00 MIA 1.04 1.08 MIL 1.01 0.96 NJN 0.99 1.03 NOH 1.01 1.02 NYK 1.00 0.98 ORL 0.98 0.97 PHI 1.00 0.99 TOR 1.01 1.01 WAS 1.03 1.03 For all seasons where actual pace adjustments can be computed, the root mean square error of the estimates is 0.01967. Now the pace adjustment is made to uPER (I will call this aPER): aPER = (pace adjustment) * uPER The final step is to standardize aPER. First, calculate league average aPER (lg_aPER) using player minutes played as the weights. Then, do the following: PER = aPER * (15 / lg_aPER) The step above sets the league average to 15 for all seasons. Those are the gory details. If you have any comments or questions, please send me some feedback. AKA you need the leagues average player to calculate. And while I have a lot of time on my hands at times and am pretty good at stats and Excel, I'm not diving into this at all considering league stats are not all in one place, they're not exportable in a .csv file, and my team sucks. fun Fact: Teo Ratliff averages more blocks per game than my TEAM!
{"url":"http://www.pelicansreport.com/printthread.php?t=69950&pp=25&page=37","timestamp":"2014-04-20T23:46:12Z","content_type":null,"content_length":"38704","record_id":"<urn:uuid:74f6a359-514c-440a-891a-ff0894d4a9e9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
G-Forces in a Looping Water Slide | Science Blogs | WIRED • By Rhett Allain • 04.26.12 | • 8:28 am | I can’t help myself. I have to say something about this awesome water slide as seen on io9. You really should check out the io9 article – an interesting read. But for me, let me see if I can estimate what it would feel like to go through this crazy thing. To start, all I really have is the photo and a claim that the loop was about 15 to 20 feet high. How would you model this crazy slide? Let me break this into two parts. Part 1 is the straight tube. During this part, the force diagram would look like this: Since I am looking for the speed after it goes a certain distance, the best bet is to use the Work-Energy principle. If I take the person plus the Earth as the system, then I will still have the frictional force doing work as it slides down. Let me call the length of the slide s. This makes the work-energy principle like this: In order to find the speed at the bottom, I will need to first find a value for the frictional force. Looking back at the force diagram, the forces in the direction perpendicular to the slide must add up to zero since the person doesn’t accelerate that way. Along with this, I can use the model for friction that says it is proportional to the normal force. I am not worried about the mass (it won’t matter in the end), but I do need a value for the coefficient of kinetic friction. Since I have no actual data from this slide, I will have to look at something similar. Here is an older post with an analysis of a different slide. These are those big slides at the fair where you get on a potato sack or something. From that, I found a coefficient of kinetic friction with a value of 0.31. Let me just assume the water slide is a little bit less. How about 0.2? Everyone happy with that? Now, if I assume the slider person starts from rest at the top of the slide, I can find out how find the slider would be moving just before entering the loop. Actually, this is a little silly. I have both the length (s) and the height (h), but I could get a relationship between them from the angle of incline. Oh well. What about the loop part? The force diagram would look similar, but I will draw it anyway. An object moving in a vertical circle. Seems simple, right? You see problems like this in introductory physics. Or do you? No. You don’t. You see a problem that asks about the forces at either the top or bottom of the circle. They never ask about the motion all the way around. It isn’t so simple. The main problem is the force the tube exerts on the rider (normal force). This is considered to be a “constraint force”. This means that the normal force exerts whatever force is necessary (up to its breaking point) to keep the rider from going past the tube. It constrains the motion of the person to the surface. Get it? Constraint force. But then how do we deal with this force? A simple numerical model won’t work. The main process in these numerical calculations is to do the following: • For each small step in time: • Calculate the total force. • Use the total force to determine the change in momentum and thus the new momentum. • Use the momentum to find the change in position. • Rinse and repeat. This method works well if I can find the forces based on position (like a spring) or velocity (like air resistance). However, the normal force doesn’t depend on these things. What to do? Cheat. Well, not really cheat. Just sort of cheat. Here is the plan. 
First, I will assume the trajectory is in the path of a circle. From this I can calculate the acceleration in the direction towards the center of the circle based on the velocity and the radius. This radial acceleration is due to two forces: the normal force (which is in the same direction as the radial acceleration) and a component of the gravitational force. Since I know the acceleration in the radial direction and the gravitational force, I can solve for the unknown normal force. The direction of this normal force will be towards the center of the circle. With the normal force, I can then find the friction force. As a vector, it would be: Here the “v-hat” is a unit vector in the direction of the velocity. But the point is that now I know all three vector forces (gravity, friction, and the normal force). From here, I can use the usual numerical model. Apparent Weight The first question that comes to my mind: what kind of forces would you feel if you make it around the loop? Ok, I first need to determine the starting height. If I assume a loop diameter of 20 feet (6.1 meters), a measurement of the image shows the starting height would be about 16.2 meters above the bottom of the loop. This would put the speed entering the loop at 15 m/s (33.5 mph). This is bad. Why? Here is a quick animation of the loop if the starting speed is 15 m/s. Yep, that’s right. In this case the slider didn’t make it around the top of the loop. Good thing they put that escape hatch in the tube. I guess my value for the coefficient of friction was too high. There is that water sliding down with you after all. If I change the coefficient of kinetic friction to 0.1, then the speed entering the loop would be 16.5 m/s and the slider would make it over the Oh, you might notice that my animation included vectors representing the three forces. Notice two things about the normal force (white vector). First, it gets pretty huge. Second, in the case where the slider goes back down the direction of the normal force changed. This means that in order to stay on that circle, the tube would have to pull on the person. Of course that wouldn’t actually happen. Instead, the slider would fall and crash into the top of the tube at a lower point. Ouch. What if I want to plot the apparent weight. Remember that what you feel is not the gravitational force but instead all the other forces (because gravity pulls the same on all parts of you). I am pretty sure the apparent weight would be the sum of the frictional and normal forces. Here is a plot as function of time. Wow. 10 g’s when the slider first enters the loop? That seems crazy high. Let’s just check. Just the normal force would be easy to calculate. If the slider is at the bottom of the loop going 16 m/s, then the following must be true for the forces in the y-direction (at that instant): With a radius of 3 meters, this gives an acceleration of 10.2 g’s. Wow. That is just crazy. If you are going any slower, you wouldn’t make it over the loop. Any faster and you might die from the massive acceleration. Changing the Coefficient of Friction With the parameters as they are, what is the maximum value of the coefficient of friction for which you can get over the loop? Here is a plot of the maximum height in the loop for different starting values of μ. What does this say? This says that if the coefficient of friction is less than around 0.18, you will make it to the top. Making it to the top and making it around the loop are two different things. 
If you just barely make it to the top, you will be there with a speed of zero. This means you wouldn’t be moving in a circle. You would just fall straight down. In order to still be moving in a circle of radius R, the lowest speed would have no normal force pushing on you. This means that in the y direction we would have: With a radius of about 3 meters, this would be a minimum speed of 5.4 m/s. Here is a plot showing the maximum height along with the speed at that height. Here, the green line represents the speed and the horizontal red line shows the speed value of 5.4 m/s. From this, you would need a maximum coefficient of friction of 0.15 in order to just barely make it over the loop without crashing.
{"url":"http://www.wired.com/2012/04/g-forces-in-a-looping-water-slide/","timestamp":"2014-04-16T23:49:45Z","content_type":null,"content_length":"110394","record_id":"<urn:uuid:5b92933d-63da-454b-ad60-44d5e562a18c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning by Simulations: Central Limit Theorem Share this Page: The central limit theorem is considered to be one of the most important results in statistical theory. It states that means of an arbitrary finite distribution are always distributed according to a normal distribution, provided that the number of observations for calculating the mean is large enough. Usually 10 observations are sufficient to result in a approximate normal distribution. The central limit theorem is the reason why normal distributions are so frequent in nature. English version [334 kB] German version [334 kB] After downloading please unpack all files of the zipped packages and start the executable. The program CenLimit shows the effects of the central limit theorem. The user may select from various distributions to draw different numbers of observations for calculating the means. The distribution of the means is plotted. You may use this program to find out the relationship between the number of observations and the resulting standard deviation of the means.
{"url":"http://www.vias.org/simulations/simusoft_cenlimit.html","timestamp":"2014-04-16T13:10:22Z","content_type":null,"content_length":"7629","record_id":"<urn:uuid:fac5cf50-47c9-4aa5-bb97-29c9dd91838c>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating functions, Tutte polynomials, and the bivariate series $\sum_n x^n y^{n^2} / n!$. A few years ago I computed the Tutte polynomials of the matroids given by the classical Coxeter groups, and found that their generating functions are all simple variations of the series $\sum_n \frac {x^n y^{n^2}}{n!}$. I've wondered if there is a more geometric/algebraic explanation of this. Is this series known? Are there other natural occurrences of it that might be relevant? tutte-polynomial power-series en.wikipedia.org/wiki/Theta_function – Qiaochu Yuan Jul 3 '12 at 0:58 Don't you mean $\sum_n x^n y^{n^2}/n!$? – Ira Gessel Jul 3 '12 at 13:39 Is it known whether $\sum x^n y^{n^2} / {n!}$ satisfies an ADE? – Martin Rubey Jul 3 '12 at 17:03 The generating function $f(x) = \sum_n x^n y^\binom{n}{2}/n!$ satisfies $f'(x) = f(xy)$. – Ira Gessel Jul 3 '12 at 17:17 So writing $g(u)=f(\exp(u))$, Ira's equation becomes a "delay differential equation" where the derivative $g'(u)$ is written in terms of $g(u-\tau)$, for some $\tau$. – Gerald Edgar Jul 3 '12 at
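A quick verification of Ira Gessel's functional equation (added here, not part of the original thread): with $f(x)=\sum_{n\ge 0}\frac{x^n y^{\binom{n}{2}}}{n!}$, term-by-term differentiation gives $f'(x)=\sum_{n\ge 1}\frac{x^{n-1}y^{\binom{n}{2}}}{(n-1)!}=\sum_{m\ge 0}\frac{x^m y^{\binom{m+1}{2}}}{m!}$, while $f(xy)=\sum_{m\ge 0}\frac{(xy)^m y^{\binom{m}{2}}}{m!}=\sum_{m\ge 0}\frac{x^m y^{\,m+\binom{m}{2}}}{m!}$. Since $m+\binom{m}{2}=\binom{m+1}{2}$, the two series agree, so indeed $f'(x)=f(xy)$.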
{"url":"http://mathoverflow.net/questions/101191/generating-functions-tutte-polynomials-and-the-bivariate-series-sum-n-xn-y","timestamp":"2014-04-18T18:53:39Z","content_type":null,"content_length":"52743","record_id":"<urn:uuid:bc104742-656f-4ce8-9af3-e48779206434>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - renormalize some theories eljose79 Jul30-04 06:24 AM renormalize some theories why can we renormalize some theories but others not?..in fact what wo7uld happen if we apply renormalization group method,or Feynmann,s renormalization program to quatnum gravity (non renormalizable I also found that they claim to have solved the problem of renormalization..is that true?. selfAdjoint Jul30-04 09:09 AM Quote by eljose79 why can we renormalize some theories but others not?..in fact what wo7uld happen if we apply renormalization group method,or Feynmann,s renormalization program to quatnum gravity (non renormalizable I also found that they claim to have solved the problem of renormalization..is that true?. Any theory has parameters, numbers that define the theory. In quantum field theories, when you set up the equations, some of these parameters take on infinite values. Renormalization is a legitimate technique for modifying the theory temporarily so that the parameters stay finite ("regularization"), and then at the end of the development removing the modifications in such a way that the infinities don't appear in the physics ("renormalization"). It is very important that you only have to do this a finite number of times, for a finite number of parameters. Theories for which this is true are called renormalizable. Examples are quantum electrodynamics and the standard model. Theories for which an infinite number of parameters would have to be renormalized are called unrenormalizable, and they can't be developed with present methods as finite theories. Gravity turns out to be unrenormalizable. Every time you try to renormalize one parameter, another crops up. eljose79 Aug10-04 06:55 AM and could the divergences in a non-renormalizable theory be removed by using Renormalization group method and its equations?....another question for any lagrangian where i could find information about renormalization group method. vanesch Aug10-04 09:46 AM Quote by eljose79 and could the divergences in a non-renormalizable theory be removed by using Renormalization group method and its equations?....another question for any lagrangian where i could find information about renormalization group method. As far as I understand it, there are two ways of looking at non-renormalizable theories. The first one is what Self-Adjoint explained. The other one, using renormalisation group stuff, is different. In that view, non-renormalizable terms are the small remnants from another theory at high energies. In fact, the coefficients of non-renormalizable terms, when going from high to low energies, flow to 0, so they are called, in this picture, irrelevant terms. People say that, because gravity is non-renormalizable, it is irrelevant, and indeed, the coupling, at low energies, of gravity is extremely small. The point is that you should consider that relevant terms (renormalizable ones) are essentially "uncoupled" from the large energy scale at which new physics comes in ; that's why, after regularization, we can take the limit to infinite energies and still have a finite answer. In non-renormalizable theories, results DO depend on the energy cutoff you introduce. That's normal, because they represent the tiny remnants at low energies of important interactions at high energies when new theories come in, so we cannot arbitrary change that scale without an impact on the low energy behaviour. So, you can say that gravity is a catastrophy, or you can say that it is a small remnant of another theory at high energies. 
selfAdjoint Aug10-04 10:28 AM And just to add to what Patrick said, renormalization group methods are not instead of renormalization, they are in addition to it. For example RG methods don't renormalize QCD but they do show that confinement follows from renormalization. vanesch Aug10-04 01:40 PM There is something about non-renormalizable theories that bothers me. In fact, I'm not sure whether, if you restrict yourself to a given order, say, second order perturbation, you can do loop diagrams, or are you screwed anyway? I mean, I don't know whether you already get an infinitude of counterterms at a given order, or whether at each order you have to add more, but a finite number each time, of counterterms. zefram_c Aug10-04 03:03 PM I think you only need to add more terms at each order. From the one theory that I'm aware of ('naive' IVB theory of the weak interaction), the divergences get worse order by order (i.e. they diverge as x^kn, where n is the order and k is constant).
{"url":"http://www.physicsforums.com/printthread.php?t=37431","timestamp":"2014-04-19T15:15:44Z","content_type":null,"content_length":"11282","record_id":"<urn:uuid:2fef300e-7130-4b9d-b091-855c1556edaf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Lessons In Electric Circuits -- Volume II Lessons In Electric Circuits -- Volume II Chapter 11 Consider a circuit for a single-phase AC power system, where a 120 volt, 60 Hz AC voltage source is delivering power to a resistive load: (Figure below) Ac source drives a purely resistive load. In this example, the current to the load would be 2 amps, RMS. The power dissipated at the load would be 240 watts. Because this load is purely resistive (no reactance), the current is in phase with the voltage, and calculations look similar to that in an equivalent DC circuit. If we were to plot the voltage, current, and power waveforms for this circuit, it would look like Figure below. Current is in phase with voltage in a resistive circuit. Note that the waveform for power is always positive, never negative for this resistive circuit. This means that power is always being dissipated by the resistive load, and never returned to the source as it is with reactive loads. If the source were a mechanical generator, it would take 240 watts worth of mechanical energy (about 1/3 horsepower) to turn the shaft. Also note that the waveform for power is not at the same frequency as the voltage or current! Rather, its frequency is double that of either the voltage or current waveforms. This different frequency prohibits our expression of power in an AC circuit using the same complex (rectangular or polar) notation as used for voltage, current, and impedance, because this form of mathematical symbolism implies unchanging phase relationships. When frequencies are not the same, phase relationships constantly change. As strange as it may seem, the best way to proceed with AC power calculations is to use scalar notation, and to handle any relevant phase relationships with trigonometry. For comparison, let's consider a simple AC circuit with a purely reactive load in Figure below. AC circuit with a purely reactive (inductive) load. Power is not dissipated in a purely reactive load. Though it is alternately absorbed from and returned to the source. Note that the power alternates equally between cycles of positive and negative. (Figure above) This means that power is being alternately absorbed from and returned to the source. If the source were a mechanical generator, it would take (practically) no net mechanical energy to turn the shaft, because no power would be used by the load. The generator shaft would be easy to spin, and the inductor would not become warm as a resistor would. Now, let's consider an AC circuit with a load consisting of both inductance and resistance in Figure below. AC circuit with both reactance and resistance. At a frequency of 60 Hz, the 160 millihenrys of inductance gives us 60.319 Ω of inductive reactance. This reactance combines with the 60 Ω of resistance to form a total load impedance of 60 + j60.319 Ω, or 85.078 Ω ∠ 45.152^o. If we're not concerned with phase angles (which we're not at this point), we may calculate current in the circuit by taking the polar magnitude of the voltage source (120 volts) and dividing it by the polar magnitude of the impedance (85.078 Ω). With a power supply voltage of 120 volts RMS, our load current is 1.410 amps. This is the figure an RMS ammeter would indicate if connected in series with the resistor and inductor. We already know that reactive components dissipate zero power, as they equally absorb power from, and return power to, the rest of the circuit. Therefore, any inductive reactance in this load will likewise dissipate zero power. 
The only thing left to dissipate power here is the resistive portion of the load impedance. If we look at the waveform plot of voltage, current, and total power for this circuit, we see how this combination works in Figure below. A combined resistive/reactive circuit dissipates more power than it returns to the source. The reactance dissipates no power; though, the resistor does. As with any reactive circuit, the power alternates between positive and negative instantaneous values over time. In a purely reactive circuit that alternation between positive and negative power is equally divided, resulting in a net power dissipation of zero. However, in circuits with mixed resistance and reactance like this one, the power waveform will still alternate between positive and negative, but the amount of positive power will exceed the amount of negative power. In other words, the combined inductive/resistive load will consume more power than it returns back to the source. Looking at the waveform plot for power, it should be evident that the wave spends more time on the positive side of the center line than on the negative, indicating that there is more power absorbed by the load than there is returned to the circuit. What little returning of power that occurs is due to the reactance; the imbalance of positive versus negative power is due to the resistance as it dissipates energy outside of the circuit (usually in the form of heat). If the source were a mechanical generator, the amount of mechanical energy needed to turn the shaft would be the amount of power averaged between the positive and negative power cycles. Mathematically representing power in an AC circuit is a challenge, because the power wave isn't at the same frequency as voltage or current. Furthermore, the phase angle for power means something quite different from the phase angle for either voltage or current. Whereas the angle for voltage or current represents a relative shift in timing between two waves, the phase angle for power represents a ratio between power dissipated and power returned. Because of this way in which AC power differs from AC voltage or current, it is actually easier to arrive at figures for power by calculating with scalar quantities of voltage, current, resistance, and reactance than it is to try to derive it from vector, or complex quantities of voltage, current, and impedance that we've worked with so far. • REVIEW: • In a purely resistive circuit, all circuit power is dissipated by the resistor(s). Voltage and current are in phase with each other. • In a purely reactive circuit, no circuit power is dissipated by the load(s). Rather, power is alternately absorbed from and returned to the AC source. Voltage and current are 90^o out of phase with each other. • In a circuit consisting of resistance and reactance mixed, there will be more power dissipated by the load(s) than returned, but some power will definitely be dissipated and some will merely be absorbed and returned. Voltage and current in such a circuit will be out of phase by a value somewhere between 0^o and 90^o. We know that reactive loads such as inductors and capacitors dissipate zero power, yet the fact that they drop voltage and draw current gives the deceptive impression that they actually do dissipate power. This “phantom power” is called reactive power, and it is measured in a unit called Volt-Amps-Reactive (VAR), rather than watts. The mathematical symbol for reactive power is (unfortunately) the capital letter Q. 
The actual amount of power being used, or dissipated, in a circuit is called true power, and it is measured in watts (symbolized by the capital letter P, as always). The combination of reactive power and true power is called apparent power, and it is the product of a circuit's voltage and current, without reference to phase angle. Apparent power is measured in the unit of Volt-Amps (VA) and is symbolized by the capital letter S. As a rule, true power is a function of a circuit's dissipative elements, usually resistances (R). Reactive power is a function of a circuit's reactance (X). Apparent power is a function of a circuit's total impedance (Z). Since we're dealing with scalar quantities for power calculation, any complex starting quantities such as voltage, current, and impedance must be represented by their polar magnitudes, not by real or imaginary rectangular components. For instance, if I'm calculating true power from current and resistance, I must use the polar magnitude for current, and not merely the “real” or “imaginary” portion of the current. If I'm calculating apparent power from voltage and impedance, both of these formerly complex quantities must be reduced to their polar magnitudes for the scalar arithmetic. There are several power equations relating the three types of power to resistance, reactance, and impedance (all using scalar quantities): Please note that there are two equations each for the calculation of true and reactive power. There are three equations available for the calculation of apparent power, P=IE being useful only for that purpose. Examine the following circuits and see how these three types of power interrelate for: a purely resistive load in Figure below, a purely reactive load in Figure below, and a resistive/ reactive load in Figure below. Resistive load only: True power, reactive power, and apparent power for a purely resistive load. Reactive load only: True power, reactive power, and apparent power for a purely reactive load. Resistive/reactive load: True power, reactive power, and apparent power for a resistive/reactive load. These three types of power -- true, reactive, and apparent -- relate to one another in trigonometric form. We call this the power triangle: (Figure below). Power triangle relating appearant power to true power and reactive power. Using the laws of trigonometry, we can solve for the length of any side (amount of any type of power), given the lengths of the other two sides, or the length of one side and an angle. • REVIEW: • Power dissipated by a load is referred to as true power. True power is symbolized by the letter P and is measured in the unit of Watts (W). • Power merely absorbed and returned in load due to its reactive properties is referred to as reactive power. Reactive power is symbolized by the letter Q and is measured in the unit of Volt-Amps-Reactive (VAR). • Total power in an AC circuit, both dissipated and absorbed/returned is referred to as apparent power. Apparent power is symbolized by the letter S and is measured in the unit of Volt-Amps (VA). • These three types of power are trigonometrically related to one another. In a right triangle, P = adjacent length, Q = opposite length, and S = hypotenuse length. The opposite angle is equal to the circuit's impedance (Z) phase angle. As was mentioned before, the angle of this “power triangle” graphically indicates the ratio between the amount of dissipated (or consumed) power and the amount of absorbed/returned power. 
It also happens to be the same angle as that of the circuit's impedance in polar form. When expressed as a fraction, this ratio between true power and apparent power is called the power factor for this circuit. Because true power and apparent power form the adjacent and hypotenuse sides of a right triangle, respectively, the power factor ratio is also equal to the cosine of that phase angle. Using values from the last example circuit: It should be noted that power factor, like all ratio measurements, is a unitless quantity. For the purely resistive circuit, the power factor is 1 (perfect), because the reactive power equals zero. Here, the power triangle would look like a horizontal line, because the opposite (reactive power) side would have zero length. For the purely inductive circuit, the power factor is zero, because true power equals zero. Here, the power triangle would look like a vertical line, because the adjacent (true power) side would have zero length. The same could be said for a purely capacitive circuit. If there are no dissipative (resistive) components in the circuit, then the true power must be equal to zero, making any power in the circuit purely reactive. The power triangle for a purely capacitive circuit would again be a vertical line (pointing down instead of up as it was for the purely inductive circuit). Power factor can be an important aspect to consider in an AC circuit, because any power factor less than 1 means that the circuit's wiring has to carry more current than what would be necessary with zero reactance in the circuit to deliver the same amount of (true) power to the resistive load. If our last example circuit had been purely resistive, we would have been able to deliver a full 169.256 watts to the load with the same 1.410 amps of current, rather than the mere 119.365 watts that it is presently dissipating with that same current quantity. The poor power factor makes for an inefficient power delivery system. Poor power factor can be corrected, paradoxically, by adding another load to the circuit drawing an equal and opposite amount of reactive power, to cancel out the effects of the load's inductive reactance. Inductive reactance can only be canceled by capacitive reactance, so we have to add a capacitor in parallel to our example circuit as the additional load. The effect of these two opposing reactances in parallel is to bring the circuit's total impedance equal to its total resistance (to make the impedance phase angle equal, or at least closer, to zero). Since we know that the (uncorrected) reactive power is 119.998 VAR (inductive), we need to calculate the correct capacitor size to produce the same quantity of (capacitive) reactive power. Since this capacitor will be directly in parallel with the source (of known voltage), we'll use the power formula which starts from voltage and reactance: Let's use a rounded capacitor value of 22 µF and see what happens to our circuit: (Figure below) Parallel capacitor corrects lagging power factor of inductive load. V2 and node numbers: 0, 1, 2, and 3 are SPICE related, and may be ignored for the moment. The power factor for the circuit, overall, has been substantially improved. The main current has been decreased from 1.41 amps to 994.7 milliamps, while the power dissipated at the load resistor remains unchanged at 119.365 watts. The power factor is much closer to being 1: Since the impedance angle is still a positive number, we know that the circuit, overall, is still more inductive than it is capacitive. 
If our power factor correction efforts had been perfectly on-target, we would have arrived at an impedance angle of exactly zero, or purely resistive. If we had added too large of a capacitor in parallel, we would have ended up with an impedance angle that was negative, indicating that the circuit was more capacitive than inductive. A SPICE simulation of the circuit of (Figure above) shows total voltage and total current are nearly in phase. The SPICE circuit file has a zero volt voltage-source (V2) in series with the capacitor so that the capacitor current may be measured. The start time of 200 msec ( instead of 0) in the transient analysis statement allows the DC conditions to stabilize before collecting data. See SPICE listing “pf.cir power factor”. pf.cir power factor V1 1 0 sin(0 170 60) C1 1 3 22uF v2 3 0 0 L1 1 2 160mH R1 2 0 60 # resolution stop start .tran 1m 200m 160m The Nutmeg plot of the various currents with respect to the applied voltage V[total] is shown in (Figure below). The reference is V[total], to which all other measurements are compared. This is because the applied voltage, V[total], appears across the parallel branches of the circuit. There is no single current common to all components. We can compare those currents to V[total]. Zero phase angle due to in-phase V[total] and I[total] . The lagging I[L] with respect to V[total] is corrected by a leading I[C] . Note that the total current (I[total]) is in phase with the applied voltage (V[total]), indicating a phase angle of near zero. This is no coincidence. Note that the lagging current, I[L] of the inductor would have caused the total current to have a lagging phase somewhere between (I[total]) and I[L]. However, the leading capacitor current, I[C], compensates for the lagging inductor current. The result is a total current phase-angle somewhere between the inductor and capacitor currents. Moreover, that total current (I[total]) was forced to be in-phase with the total applied voltage (V [total]), by the calculation of an appropriate capacitor value. Since the total voltage and current are in phase, the product of these two waveforms, power, will always be positive throughout a 60 Hz cycle, real power as in Figure above. Had the phase-angle not been corrected to zero (PF=1), the product would have been negative where positive portions of one waveform overlapped negative portions of the other as in Figure above. Negative power is fed back to the generator. It cannont be sold; though, it does waste power in the resistance of electric lines between load and generator. The parallel capacitor corrects this problem. Note that reduction of line losses applies to the lines from the generator to the point where the power factor correction capacitor is applied. In other words, there is still circulating current between the capacitor and the inductive load. This is not normally a problem because the power factor correction is applied close to the offending load, like an induction motor. It should be noted that too much capacitance in an AC circuit will result in a low power factor just as well as too much inductance. You must be careful not to over-correct when adding capacitance to an AC circuit. You must also be very careful to use the proper capacitors for the job (rated adequately for power system voltages and the occasional voltage spike from lightning strikes, for continuous AC service, and capable of handling the expected levels of current). 
If a circuit is predominantly inductive, we say that its power factor is lagging (because the current wave for the circuit lags behind the applied voltage wave). Conversely, if a circuit is predominantly capacitive, we say that its power factor is leading. Thus, our example circuit started out with a power factor of 0.705 lagging, and was corrected to a power factor of 0.999 lagging. • REVIEW: • Poor power factor in an AC circuit may be “corrected”, or re-established at a value close to 1, by adding a parallel reactance opposite the effect of the load's reactance. If the load's reactance is inductive in nature (which is almost always will be), parallel capacitance is what is needed to correct poor power factor. When the need arises to correct for poor power factor in an AC power system, you probably won't have the luxury of knowing the load's exact inductance in henrys to use for your calculations. You may be fortunate enough to have an instrument called a power factor meter to tell you what the power factor is (a number between 0 and 1), and the apparent power (which can be figured by taking a voltmeter reading in volts and multiplying by an ammeter reading in amps). In less favorable circumstances you may have to use an oscilloscope to compare voltage and current waveforms, measuring phase shift in degrees and calculating power factor by the cosine of that phase shift. Most likely, you will have access to a wattmeter for measuring true power, whose reading you can compare against a calculation of apparent power (from multiplying total voltage and total current measurements). From the values of true and apparent power, you can determine reactive power and power factor. Let's do an example problem to see how this works: (Figure below) Wattmeter reads true power; product of voltmeter and ammeter readings yields appearant power. First, we need to calculate the apparent power in kVA. We can do this by multiplying load voltage by load current: As we can see, 2.308 kVA is a much larger figure than 1.5 kW, which tells us that the power factor in this circuit is rather poor (substantially less than 1). Now, we figure the power factor of this load by dividing the true power by the apparent power: Using this value for power factor, we can draw a power triangle, and from that determine the reactive power of this load: (Figure below) Reactive power may be calculated from true power and appearant power. To determine the unknown (reactive power) triangle quantity, we use the Pythagorean Theorem “backwards,” given the length of the hypotenuse (apparent power) and the length of the adjacent side (true If this load is an electric motor, or most any other industrial AC load, it will have a lagging (inductive) power factor, which means that we'll have to correct for it with a capacitor of appropriate size, wired in parallel. Now that we know the amount of reactive power (1.754 kVAR), we can calculate the size of capacitor needed to counteract its effects: Rounding this answer off to 80 µF, we can place that size of capacitor in the circuit and calculate the results: (Figure below) Parallel capacitor corrects lagging (inductive) load. An 80 µF capacitor will have a capacitive reactance of 33.157 Ω, giving a current of 7.238 amps, and a corresponding reactive power of 1.737 kVAR (for the capacitor only). 
Since the capacitor's current is 180^o out of phase from the the load's inductive contribution to current draw, the capacitor's reactive power will directly subtract from the load's reactive power, resulting in: This correction, of course, will not change the amount of true power consumed by the load, but it will result in a substantial reduction of apparent power, and of the total current drawn from the 240 Volt source: (Figure below) Power triangle before and after capacitor correction. The new apparent power can be found from the true and new reactive power values, using the standard form of the Pythagorean Theorem: This gives a corrected power factor of (1.5kW / 1.5009 kVA), or 0.99994, and a new total current of (1.50009 kVA / 240 Volts), or 6.25 amps, a substantial improvement over the uncorrected value of 9.615 amps! This lower total current will translate to less heat losses in the circuit wiring, meaning greater system efficiency (less power wasted). Contributors to this chapter are listed in chronological order of their contributions, from most recent to first. See Appendix 2 (Contributor List) for dates and contact information. Jason Starck (June 2000): HTML document formatting, which led to a much better-looking second edition. Lessons In Electric Circuits copyright (C) 2000-2014 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
{"url":"http://www.ibiblio.org/kuphaldt/electricCircuits/AC/AC_11.html","timestamp":"2014-04-18T12:17:36Z","content_type":null,"content_length":"29414","record_id":"<urn:uuid:ebd1dbfa-df32-4a33-a6a3-7cdccb132ba8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra polynomial
June 12th 2007, 04:06 PM #1 Jun 2007
I'm having problems solving this one. Please help. The degree three polynomial f(x) with real coefficients and leading coefficient 1 has 4 and 3 + i among its roots. Express f(x) as a product of linear and quadratic polynomials with real coefficients.
f(x) = (x+4)(x^2+6x+10)
f(x) = (x-4)(x^2-6x-9)
f(x) = (x-4)(x^2-6x+10)
f(x) = (x-4)(x^2-6x+9)
June 12th 2007, 04:44 PM #2 Global Moderator Nov 2005 New York City
All such non-zero polynomials differ by a constant multiple. So we will find one which has 1 as its leading coefficient (called monic). Now if f(x) is a polynomial in R[x] (R meaning the real numbers) and a+bi is a zero, then a-bi is also a zero. So if 3+i is a zero, that means 3-i is a zero. $f(x) = (x-4)(x-(3+i))(x-(3-i))$ Multiply the last two factors together, $f(x)=(x-4)(x^2 - 6x + 10)$
June 12th 2007, 04:57 PM #3 Jun 2007
thanks, could not have figured that one out without you.
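For completeness (this expansion is not shown explicitly in the thread): $(x-(3+i))(x-(3-i)) = x^2-(3-i)x-(3+i)x+(3+i)(3-i) = x^2-6x+(9-i^2) = x^2-6x+10$, so $f(x)=(x-4)(x^2-6x+10)$, which is the third option listed above.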
{"url":"http://mathhelpforum.com/algebra/15895-algebra-polynomial.html","timestamp":"2014-04-16T06:15:54Z","content_type":null,"content_length":"36107","record_id":"<urn:uuid:7e75629c-5820-4b0b-b4f4-2e31430e82db>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Proof: Permutations and Surjective Functions schaefera Mar14-12 02:48 PM Proof: Permutations and Surjective Functions 1. The problem statement, all variables and given/known data Let X and Y be finite nonempty sets, |X|=m, |Y|=n≤m. Let f(n, m) denote the number of partitions of X into n subsets. Prove that the number of surjective functions X→Y is n!*f(n,m). 2. Relevant equations I know a function is onto if and only if every element of Y is mapped to by an element of X. That is, for all y in Y, there is an x in X such that f(x)=y. Clearly if f is a function and n≤m, a function from X to Y can be onto (but it doesn't have to be; for instance, all of X could map to the same element of Y). 3. The attempt at a solution I tried by induction, but got lost moving from n=k to n=k+1, so I'm not sure if that works. I think that f(n, m) has something to do with the number of ways to permute the elements of X... but not sure. This is before the lecture on combinations, so I'm not sure if we need to use that method. Thanks in advance! schaefera Mar14-12 04:58 PM Re: Proof: Permutations and Surjective Functions Could it have something to do with creating a function from a partitioned version of X to Y? schaefera Mar15-12 07:37 PM Re: Proof: Permutations and Surjective Functions No takers? Re: Proof: Permutations and Surjective Functions Quote by schaefera (Post 3815488) Could it have something to do with creating a function from a partitioned version of X to Y? Sure it could. You split X into subsets, each one of which maps into a unique element of Y. Then given a split (of which there are f(n,m)) you figure out how many ways there are to assign each subset to an element of Y. I don't think induction is really necessary here. Just explain it in words.
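Not part of the thread, but a small brute-force check of the identity for small m and n: the surjection count is computed by inclusion-exclusion and compared with n! times f(n,m), where f(n,m) is the partition count (a Stirling number of the second kind) obtained from the standard recurrence.

```java
/** Numerical check that the number of surjections from an m-set onto an n-set
 *  equals n! * f(n,m), where f(n,m) counts partitions of the m-set into n blocks. */
public class SurjectionCheck {
    // Surjections counted by inclusion-exclusion: sum_k (-1)^k C(n,k) (n-k)^m.
    static long surjections(int m, int n) {
        long total = 0;
        for (int k = 0; k <= n; k++)
            total += (k % 2 == 0 ? 1 : -1) * binomial(n, k) * pow(n - k, m);
        return total;
    }

    // Stirling numbers of the second kind via S(m,n) = n*S(m-1,n) + S(m-1,n-1).
    static long stirling2(int m, int n) {
        long[][] s = new long[m + 1][n + 1];
        s[0][0] = 1;
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= Math.min(i, n); j++)
                s[i][j] = j * s[i - 1][j] + s[i - 1][j - 1];
        return s[m][n];
    }

    static long binomial(int n, int k) {
        long b = 1;
        for (int i = 0; i < k; i++) b = b * (n - i) / (i + 1);
        return b;
    }

    static long pow(long base, int exp) {
        long p = 1;
        for (int i = 0; i < exp; i++) p *= base;
        return p;
    }

    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    public static void main(String[] args) {
        for (int m = 1; m <= 8; m++)
            for (int n = 1; n <= m; n++) {
                long lhs = surjections(m, n);
                long rhs = factorial(n) * stirling2(m, n);
                System.out.printf("m=%d n=%d  surjections=%d  n!*f(n,m)=%d  %s%n",
                        m, n, lhs, rhs, lhs == rhs ? "ok" : "MISMATCH");
            }
    }
}
```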
{"url":"http://www.physicsforums.com/printthread.php?t=586998","timestamp":"2014-04-20T16:04:27Z","content_type":null,"content_length":"6931","record_id":"<urn:uuid:fdae1cae-30a4-4e9c-830c-319eb4d04716>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Having some trouble calculating certain odds.. plz help in addition to what WurlyQ said above, i'd like to point out something else. Originally Posted by bonghead365 What Im having trouble with is trying to calculate the chances of 2 specific cards showing. like if i hold 2 3 5 and i need a 4 and then a A or a 6 to complete my straight. I know that gives me 12 outs, but ... it seems that you must have misunderstood the concept of "outs". in this case, you would have 12 cards that would somehow slightly improve your hand. Those are not outs because even if you hit one of them, you still do not have a good made hand. Cards are outs for you only if you think that hitting them will make your hand the best. For the example, even if you hit a 4 on the turn, you still do not have a made hand and now you would be needing to hit your outs - namely an A or 6 on the river (and even then, depending on which cards are in your hand and which are on the board, your hand may still be beaten by a higher straight or a flush if one shows). So here is the math for your specific question: options for the turn: A) an ace or 6 comes - chance 8/47 - now you only need a 4 on the river, and the chance for that is 4/46 (you have seen one more card, remember). the total chance for this is 32/(47*46), or under 1.5% B) a 4 comes - chance 4/47 - in this case you would need an ace or 6 on the river - chance 8/46. guess what, the total chance for this is the same as above, under 1.5% C) any other card comes - chance 35/47 - here it doesn't really matter what the river will bring, you are not in a good shape. there are exceptions, of course, like getting both the turn and river be 3s to the 3 in your hand and you win, but those are outside the scope of the current analysis. so you will be good in under 3% of the time total, when either A or B above happens, and then you get another lucky card on the river. (the difference between my number and WurlyQ's above is because he somehow forgot you could also make a higher straight, with a 5-9 or 9-10.) basically - if you only need one card, but have chances to hit it on EITHER the turn or the river, then you could (approximately) add the probabilities. if you need specific things to happen on BOTH the turn and the river, you need to multiply the probabilities of the events. pokerstove (as suggested above) will do the math for you - but it is only really useful as an analysis tool after the fact. you should be much better if you knew some approximate ways to do math while at the table, and it seems you have the basics of that good luck!
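To make the arithmetic above concrete, here is a small brute-force check (not from the thread): it labels the 47 unseen cards, enumerates every ordered turn/river pair, and counts the deals where one card is a four and the other is an ace or a six.

```java
/** Enumerates ordered (turn, river) outcomes for the example above: out of 47 unseen
 *  cards, 4 complete the middle of the straight (the fours) and 8 complete an end
 *  (the aces and sixes). */
public class RunnerRunner {
    public static void main(String[] args) {
        int unseen = 47, middleCards = 4, endCards = 8;
        long hits = 0, total = 0;
        // Label the unseen cards 0..46: first 4 = "fours", next 8 = "aces/sixes", rest = blanks.
        for (int turn = 0; turn < unseen; turn++) {
            for (int river = 0; river < unseen; river++) {
                if (turn == river) continue;   // can't deal the same card twice
                total++;
                boolean turnMiddle = turn < middleCards;
                boolean turnEnd = turn >= middleCards && turn < middleCards + endCards;
                boolean riverMiddle = river < middleCards;
                boolean riverEnd = river >= middleCards && river < middleCards + endCards;
                if ((turnMiddle && riverEnd) || (turnEnd && riverMiddle)) hits++;
            }
        }
        // Expect 2 * 4 * 8 = 64 hits out of 47 * 46 = 2162 ordered deals, about 2.96%.
        System.out.printf("%d / %d ordered deals = %.2f%%%n", hits, total, 100.0 * hits / total);
    }
}
```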
{"url":"http://www.cardschat.com/f57/having-some-trouble-calculating-certain-odds-161780/","timestamp":"2014-04-18T00:45:26Z","content_type":null,"content_length":"49091","record_id":"<urn:uuid:823c9290-360c-47c7-9280-4c9fb36372c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Calhoun, GA ACT Tutor Find a Calhoun, GA ACT Tutor ...All of my children scored above 675 on each part of the SAT. All went to major colleges and earned either a law degree or a master's degree. I am very confident in my teaching skills. 51 Subjects: including ACT Math, reading, English, writing Dear student or parent, I am a professional academic tutor with 20+ years of experience tutoring most subjects at the high school, college and graduate school levels. These include most * Math courses including algebra, geometry and pre-calc, calculus, linear and coordinate math, statistics differ... 126 Subjects: including ACT Math, chemistry, English, biology ...I currently work in the math lab at school, which provides tutoring services to the Berry College community, but I would like to expand my tutoring to the rest of Rome and surrounding communities. As a rising senior, I have taken my fair share of math classes, so almost all subjects are open for... 23 Subjects: including ACT Math, reading, elementary (k-6th), geometry ...I tutor just about any subject through grade 12. However, my specialty is mathematics. Fun and Outgoing personality. 52 Subjects: including ACT Math, Spanish, reading, English ...I have been playing violin since age 5. I played in the Rome Symphony Orchestra at age 10. I was selected for Georgia All State Orchestra and played in the Berry College Chamber Orchestra throughout middle and high school. 19 Subjects: including ACT Math, chemistry, physics, calculus Related Calhoun, GA Tutors Calhoun, GA Accounting Tutors Calhoun, GA ACT Tutors Calhoun, GA Algebra Tutors Calhoun, GA Algebra 2 Tutors Calhoun, GA Calculus Tutors Calhoun, GA Geometry Tutors Calhoun, GA Math Tutors Calhoun, GA Prealgebra Tutors Calhoun, GA Precalculus Tutors Calhoun, GA SAT Tutors Calhoun, GA SAT Math Tutors Calhoun, GA Science Tutors Calhoun, GA Statistics Tutors Calhoun, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/calhoun_ga_act_tutors.php","timestamp":"2014-04-18T23:56:22Z","content_type":null,"content_length":"23264","record_id":"<urn:uuid:b3b18a68-f7a8-4db3-b6d8-139ca8a0f576>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Java in Science: Data Inter- and Extrapolation Using Numerical Methods of Polynomial Fittings, Part 2 Review Part 1 Sums and Scalar Multiples Two matrices would be equal if they have the same size (that is, the same number of rows and the same number of columns) and also their corresponding entries are equal. Matrix A is not equal to matrix B, although they have the same matrix size = 3 x 2. The corresponding entries at index (1,1) are not the same for matrix A and B because A(1,1) = 3 and B(1,1) = 4, and so they are different. Note that the entry-index starts at zero. double[][] arrayA = {{4.,0.},{5.,3.},{-2.,1.}}; //Instantiate matrix object A Matrix A = new Matrix(arrayA); double[][] arrayB = {{4.,0.},{5.,4.},{-2.,1.}}; //Instantiate matrix object B Matrix B = new Matrix(arrayB); double[][] arrayC = {{1.,7.},{7.,4.}}; //Instantiate matrix object C Matrix C = Matrix(arrayC); double[][] arrayD = {{1.,0.},{0.,1.}}; //Instantiate matrix object D Matrix D = new Matrix(arrayD); The additions (sums) and subtractions (difference) of matrices is done element-wise. The sum of A+B would result in a matrix the same size as A or B with all entries as the sum of a[ij] and b[ij ], for example, the resultant entry of (1,1) = 7, which is a[11]+b[11] = 3+4 = 7. The sum of two different size (dimemsion) matrices is undefined such as A+C, because size(A) = 3 by 2 and size(C) = 2 by 2. In multiplying a matrix by a scaler results in a matrix with all the entries scaled by this scaler-value. 3*B is a multiplication of B by a scaler value of 3 in which all entries are now 3b[ij]. Matrix A_plus_B = A.plus(B) ; //returned matrix has all its entries scaled by 3. Matrix matrix_3_times_B = B.times(3.); //This following line will throw an IllegalArgumentException because matrix dimensions do //not agree, matrix A is a 4 by 2 but C is a 2 by 2 size matrix. Matrix A_plus_C = A.plus(C); Matrix Multiplication Under matrix multiplication, such as A*C, the inner dimensions have to be the same for it to be defined, regardless of the outer dimensions. In this case size(A) = [3 x 2] and size(C) = [2 x 2]; therefore, matrix multiplication is defined because they have the same inner dimensions of two, [3 x 2][2 x 2]. The dimension of the result of multiplying two matrices would be the outer dimensions of the respective matrices. A*C would result in a matrix with size(A*C) = [3 x 2], that is [3 x 2][2 x 2]. Matrix multiplication results in all entries(i,j) equals the summation of element-wise multiplication of each row of A to each column of C. Multiplication of A *B is undefined, because the inner dimensions do not agree: A* B is [3 x 2]*[3 x 2]. Multiplication of matrices is not always commutative (order of operation), B *C is defined, but reversing the order as C*B is undefined but C* D = D*C. Matrix multiplication is directly in contrast with its elementary counterpart, 3 x 4 = 4 x 3 = 12. Matrix D is a special type called Identity, and this is the equivalent in elementary arithmetic of the number 1, which is the multiplication identity, 6 x 1 = 6. Identity is a square-matrix (number of rows = columns) with any dimension and all its elements at the diagonal, from top-left to bottom-right are ones and zeros everywhere. Any matrix that is multiplied to the Identity is always that matrix, examples are: A*D = A, B* D = B and C*D = C. Matrix A_times_C = A.times(C) ; //The following line will throw an IllegalArgumentException because inner matrix //dimensions do not agree. 
Matrix A_times_B = A.times(B); Transpose of a Matrix Given an m x n matrix E, the transpose of E is the n x m matrix denoted by E^T, whose columns are formed from the corresponding rows of E. General matrix transposition operations: 1. (F + G)^T = F^T + G^T Matrix result1 = F.plus(G).transpose() ; Matrix result2 = F.transpose().plus(G.transpose()); //Matrix result1 and result2 are equals (same value in all entries) 2. (F^T)^T = F Matrix result3 = F.transpose().transpose() ; //Matrix result3 and F are equals (same value in all entries) 3. ( Matrix result4 = F.times(G).transpose() ; Matrix result5 = G.transpose().times(F.transpose()) ; //Matrix result4 and result5 are equals (same value in all entries) 4. For any scalar r, (r * F)^T = r * F^T Matrix result6 = F.times(r).transpose() ; Matrix result7 = F.transpose().times(r) ; //Matrix result6 and result7 are equals (same value in all entries) Inverse of a Matrix If K is an [n x n] matrix, sometimes there is another [n x n] matrix L such that: K * L = I and L * K = I where I is an [n x n] Identity matrix We say K is invertible and L is an inverse of K. The inverse of K is denoted by K^-1. K * K^-1 = I, where I - is the Identity matrix K^-1 * K = I Matrix inverseMatrix = K.inverse() ; //Matrix inverseMatrix is an inverse of matrix K QR Matrix Factorization Factorizing a matrix follows a similar concept in elementary arithmetics. The number 30 has prime factors of = 2 x 3 x 5. If P is an [m x n] matrix with linearly independent columns, then it can be factored as: P = Q * R: where Q is an [m x m] whose columns form an orthonormal basis and R is an [m x n] upper triangular invertible matrix with positive entries on its diagonal. double[][] arrayP = {{1.,0.},{1.,1.},{1.,1.},{1.,1.}}; Matrix P = new Matrix(arrayP); //Create a QR factorisation object from matrix P. QRDecomposition QR = new QRDecomposition(P); Matrix Q = QR.getQ(); Matrix R = QR.getR(); //Now that we have two matrix factors for matrix P which: Q*R = P Elementary Matrix Function and Special Matrices repmat (Matrix mat, int row, int col) - static method from JElmat class of Jamlab package Repeat a matrix in a tiling manner. If matrix A is to be tiled [3 by 2], then A would be tiled in a 3-rows by 2-columns pattern. double[][] arrayA = {{7,5},{2,3}}; Matrix A = new Matrix(arrayA); //Create matrix object B by tiling matrix A in a 3 by 2 pattern. Matrix B = JElmat.repmat(A,3,2); vander (double[] a,int numCol) - static method from JElmat. Create a Vandermonde matrix in which the second to last column are the elements of array a[]. The matrix has numCol number of columns. //Create Vandermonde-matrix double[] arrayA = {5,-1,7,6}; //vander is also overloaded with one input argument a double[] array, where columns = rows Matrix A = JElmat.vander(arrayA); double[] arrayB = {5,-1,7,6,-3,4}; //Create a Vandermonde-matrix with only four columns. Matrix B = JElmat.vander(arrayB,4); sum (Matrix matrix) - static method from JDatafun class. sum all the elements of the column of a matrix and return a single row matrix. double arrayA = {{19,18,16,18},{5,15,9,15},{12,9,12,4},{10,0,16,8}}; Matrix A = new Matrix(arrayA); //Sum all the columns of matrix A. Matrix B = JDatafun.sum(A); linespace (double leftBound, double rightBound, int nPoints) - static method from JElmat class Linear spaced matrix row vector. Generates nPoints between leftBound and rightBound. //row matrix A, starting at -8. and ending at 12. with eleven points equally spaced. 
//internal array A = (-8 -6 -4 -2 0 2 4 6 8 10 12) Matrix A = JElmat.linspace(-8.,12.,11); reshape (Matrix A, int M, int N) - static method from JElmat class. returns the M-by-N matrix whose elements are taken columnwise from A. double[][] arrayA = {{1.,2.,3.,4.},{5.,6.,7.,8.},{9.,10.,11.,12.}}; Matrix A = new Matrix(arrayA); //reshape matrix A from 3 x 4 dimension into 6 x 2 Matrix B = JElmat.reshape(A,6,2); zeros (int m, int n) - static method from JElmat class. returns an m x n matrix of zeros. ones (int m, int n) - static method from JElmat class. returns an m x n matrix of ones. //Create a matrix of ones with 3 x 4 size. Matrix X = JElmat.ones(3,4); //Create a matrix of zeros with 2 x 2 size. Matrix Y = JElmat.zeros(2,2); We will continue with polynomial fittings and Java code in Part 3. Download Jama and Jamlab and Jamlab documentation. • Java for Engineers and Scientists by Stephen J. Chapman, Prentice Hall, 1999. • Introductory Java for Scientists and Engineers by Richard J. Davies, Addison-Wesley Pub. Co., 1999. • Applied Numerical Analysis (Sixth Edition) by Curtis F. Gerald and Patrick O. Wheatly, Addison-Wesley Pub. Co., 1999. • Linear Algebra and Its Applications (Second Edition), by David C. Lay, Addison-Wesley Pub. Co. • Numerical Recipes in Fortran 77, the Art of Scientific Computing (Volume 1) by William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery, Cambridge University Press, 1997. • Mastering MATLAB 5: A Comprehensive Tutorial and Reference by Duane Hanselman and Bruce Littlefield, Prentice Hall, 1997. • Advanced Mathematics and Mechanics Applications using MATLAB by Louis H. Turcotte and Howard B. Wilson, CRC Press, 1998. Related Articles About the Author Sione Palu is a Java developer at Datacom in Auckland, New Zealand, currently involved in a Web application development project. Palu graduated from the University of Auckland, New Zealand, double majoring in mathematics and computer science. He has a personal interest in applying Java and mathematics in the fields of mathematical modeling and simulations, expert systems, neural and soft computation, wavelets, digital signal processing, and control systems.
{"url":"http://www.developer.com/tech/article.php/788311/Java-in-Science-Data-Inter--and-Extrapolation-Using-Numerical-Methods-of-Polynomial-Fittings-Part-2.htm","timestamp":"2014-04-20T18:28:10Z","content_type":null,"content_length":"68867","record_id":"<urn:uuid:7880392d-15f4-48a2-a043-11920ce35830>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 3. EIS results. (a) and (b) are the Nyquist plots of EIS of symmetrical dummy cells fabricated with different fibrous electrodes; (b) is the amplificatory figure of (a); the inset of (b) shows the detail of overlapped part in high frequency region of Nyquist plots; (c) shows the Bode-phase plots. (d) is the equivalent circuits of symmetrical dummy cells except for BCNT-BCNT type and (e) is the equivalent circuit of the BCNT-BCNT dummy cell, where R[s] is the serial resistance, R[ct] is the electron transfer resistance at the interface of electrode/electrolyte, C[dl] is the double layer capacitance of the electrode/electrolyte interface, Z[N] is the Nernst diffusion impedance of electrolyte, R[CNT] is the electron transfer resistance in solid CNT net and C[sl] is the capacitance of CNT net itself. Dots are the experimental data and solid lines are the fitted results. Huang et al. Nanoscale Research Letters 2012 7:222 doi:10.1186/1556-276X-7-222
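The figure itself is not reproduced here. As a rough illustration (not from the paper) of how an equivalent circuit of the kind shown in panel (d) traces a semicircle on a Nyquist plot, the sketch below evaluates the simplified case of R[s] in series with R[ct] in parallel with C[dl], ignoring the Nernst diffusion element Z[N]; the component values are invented for illustration only.

```java
/** Rough illustration (invented values) of how Rs + Rct/(1 + j*w*Rct*Cdl) produces a
 *  semicircle on a Nyquist plot; the diffusion element ZN is ignored for simplicity. */
public class NyquistSketch {
    public static void main(String[] args) {
        double rs = 10.0, rct = 50.0, cdl = 1e-5;   // illustrative values, not fitted ones
        for (int i = 0; i <= 60; i++) {
            double freq = Math.pow(10, -1 + i * 0.1);           // 0.1 Hz .. 100 kHz
            double w = 2 * Math.PI * freq;
            double d = 1 + Math.pow(w * rct * cdl, 2);
            double zRe = rs + rct / d;                           // real part of Z
            double zIm = -w * rct * rct * cdl / d;               // imaginary part of Z
            // Printing Re(Z) against -Im(Z) over frequency gives the familiar semicircle.
            System.out.printf("%10.3e Hz  %8.3f  %8.3f%n", freq, zRe, -zIm);
        }
    }
}
```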
{"url":"http://www.nanoscalereslett.com/content/7/1/222/figure/F3","timestamp":"2014-04-24T11:21:11Z","content_type":null,"content_length":"12513","record_id":"<urn:uuid:b3be2ee6-262d-490f-afcf-88dee7a45b65>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Hinsdale, IL Prealgebra Tutor Find a Hinsdale, IL Prealgebra Tutor ...I tutored math through college to stay fresh. Finally, trigonometry always finds its way into my day-to-day work, from teaching college-level physics concepts to building courses for professional auditors. I was an advanced math student, completing calculus in high school and then taking statistics as part of the engineering curriculum in college. 13 Subjects: including prealgebra, calculus, statistics, geometry ...Although that was a long time ago, I have helped several students with their pre-calculus courses. The precalculus student that I helped just last academic year went from getting Cs and Bs in her tests to As. She got an "A" in her last course, and she was extremely happy with the results. 13 Subjects: including prealgebra, geometry, statistics, finance ...It takes time and patience to work this way, plus I include positive reinforcement all along the way to build confidence. I am a certified teacher with experience using desktop publishing for student worksheets, managing lists, flyers, professional development, parent letters, tests and quizzes,... 40 Subjects: including prealgebra, reading, English, writing ...I currently teach middle school and am very familiar with this topic. I have a Bachelor's degree (2010) in mathematics from the University of Illinois at Urbana-Champaign. I took MATH 405 Teacher's Course in the Spring of 2009. 12 Subjects: including prealgebra, calculus, algebra 2, algebra 1 I am certified math teacher. Currently, I work as a substitute teacher at Elmwood Park School District and Morton High Schools in Cicero. I have been tutoring students since 2008 and preparing them for ACT. I have BA in Mathematics and Secondary Education from Northeastern Illinois University. 12 Subjects: including prealgebra, calculus, geometry, algebra 1 Related Hinsdale, IL Tutors Hinsdale, IL Accounting Tutors Hinsdale, IL ACT Tutors Hinsdale, IL Algebra Tutors Hinsdale, IL Algebra 2 Tutors Hinsdale, IL Calculus Tutors Hinsdale, IL Geometry Tutors Hinsdale, IL Math Tutors Hinsdale, IL Prealgebra Tutors Hinsdale, IL Precalculus Tutors Hinsdale, IL SAT Tutors Hinsdale, IL SAT Math Tutors Hinsdale, IL Science Tutors Hinsdale, IL Statistics Tutors Hinsdale, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/hinsdale_il_prealgebra_tutors.php","timestamp":"2014-04-19T23:35:38Z","content_type":null,"content_length":"24277","record_id":"<urn:uuid:749a8e28-1d20-4542-9dbb-b1865998d9ab>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Composite Higgs If the Higgs boson and top quark are not elementary particles, they could be made of smaller particles held together by a force stronger than any others in the Standard Model. This Feynman diagram represents an example of part of a calculation of quantum corrections to the Higgs (and therefore W and Z) mass; in the Standard Model alone, these corrections come out infinite. In a composite model in which the Higgs is made of other particles, the diagram is much more complicated when considering high energies and shorter length scales (with the purple loopy lines representing the new force carriers). The more complicated diagrams turn out to be finite, and therefore can complete the Standard Model. (Unit: 2)
{"url":"http://www.learner.org/courses/physics/visual/visual.html?shortname=higgs_mass","timestamp":"2014-04-19T19:43:40Z","content_type":null,"content_length":"3219","record_id":"<urn:uuid:aa0a10ef-40e1-4180-974c-f5f56cda644d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00117-ip-10-147-4-33.ec2.internal.warc.gz"}
Tangram - 6 pieces Tangram - 6 pieces Here is how to make a 6 piece Tangram from the standard matchbox. First squash the matchbox flat. It should make a square. Then draw the two diagonals of the square. Cut out the Triangle ABC. This should open to give a square. The two pieces you have made should now open out to look like this; Draw the lines DF and EF using the existing creases to guide you. Cut along DF and EG. Also cut the square piece in half along JK. Now you have 6 pieces; 4 equal triangles and two “chevrons”, a small one and a large one: These will be the 6 pieces that make up your Tangram Puzzle 1 Now use the 6 pieces to make A Square A Rectangle A Parallelogram An Isosceles Trapezium A Right angled Triangle Puzzle 2 Now use all 6 pieces to make these shapes
{"url":"http://www.cyffredin.co.uk/Matches%20on%20the%20web/Tangram%206%20piece.htm","timestamp":"2014-04-18T18:22:14Z","content_type":null,"content_length":"15760","record_id":"<urn:uuid:98f423cc-4583-4d9c-a830-1bf2595a8e0b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
Write each rational expression in lowest terms. State all the values for which the denominator of the original problem would be undefined.
{"url":"http://openstudy.com/updates/51622aa5e4b0c2e460701c4d","timestamp":"2014-04-17T07:17:00Z","content_type":null,"content_length":"38127","record_id":"<urn:uuid:542040b2-7636-446b-bf58-9768583eb78f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Lattice (order) See lattice for other meanings of the term, both within and without mathematics. In mathematics, a lattice is a partially ordered set in which all nonempty finite subsets have both a supremum (a join) and an infimum (a meet). On the other hand, lattices can also be characterized as algebraic structures that satisfy certain identities. Since both views can be used interchangeably, lattice theory can draw upon applications and methods both from order theory and from universal algebra. Lattices constitute one of the most prominent representatives of a series of "lattice-like" structures which admit order-theoretic as well as algebraic descriptions, such as semilattices, Heyting algebras, or Boolean algebras. The term "lattice" derives from the shape of the Hasse diagrams that result from depicting these orders. This article treats the most basic definitions of lattice theory, including the case of bounded lattices, i.e. lattices that have top and bottom elements. Formal definition As mentioned above, lattices can be characterized both as posets and as algebraic structures. Both approaches and their relationship are explained below. Lattices as posets Consider a partially ordered set (L, ≤). L is a lattice if for all elements x and y of L, the set {x, y} has both a least upper bound (join) and a greatest lower bound (meet). In this situation, the join and meet of x and y are denoted by x ∨ y and x ∧ y, respectively. Clearly, this defines binary operations ∨ and ∧ on lattices. Also note that the above definition is equivalent to requiring L to be both a meet- and a join-semilattice. It will be stated explicitly whenever a lattice is required to have a least or greatest element. If both of these special elements do exist, then the poset is a bounded lattice. Using an easy induction argument, one can also conclude the existence of all suprema and infima of non-empty finite subsets of any lattice. Further conclusions may be possible in the presence of other properties. See the article on completeness in order theory for more discussion on this subject. This article also discusses how one may rephrase the above definition in terms of the existence of suitable Galois connections between related posets -- an approach that is of special interest for category theoretic investigations of the concept. Lattices as algebraic structures Consider an algebraic structure in the sense of universal algebra, given by (L, ∨, ∧), where ∨ and ∧ are two binary operations. L is a lattice if the following identities hold for all elements a, b, and c in L: Idempotency laws: a ∨ a = a and a ∧ a = a. Commutativity laws: a ∨ b = b ∨ a and a ∧ b = b ∧ a. Associativity laws: a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c. Absorption laws: a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a. Note that the laws for idempotency, commutativity, and associativity just state that (L, ∨) and (L, ∧) constitute two semilattices, while the absorption laws guarantee that both of these structures interact appropriately. Furthermore, it turns out that the idempotency laws can be deduced from absorption and thus need not be stated separately. In order to describe bounded lattices, one has to include neutral elements 0 and 1 for the meet and join operations in the above definition. For details compare the article on semilattices. Connection between both definitions Obviously, an order theoretic lattice gives rise to two binary operations ∨ and ∧. It now can be seen very easily that these operations really make (L, ∨, ∧) a lattice in the algebraic sense. Maybe more surprisingly, one can also obtain the converse of this result: consider any algebraically defined lattice (M, ∨, ∧).
Now one can define a partial order ≤ on M by setting x ≤ y iff x = x ∧ y or, equivalently, x ≤ y iff y = x ∨ y, for all elements x and y in M. The above laws for absorption assure that both definitions are indeed equivalent. One can now check that the relation ≤ introduced in this way defines a partial ordering within which binary meets and joins are given through the original operations ∨ and ∧. Conversely, the order induced by the algebraically defined lattice (L, ∨, ∧) that was derived from the order-theoretic formulation above coincides with the original ordering of L. Hence, the two definitions can be used in an entirely interchangeable way, depending on which of them appears to be more convenient for a particular purpose.

Morphisms of lattices
The appropriate notion of a morphism between two lattices can easily be derived from the algebraic definition above: given two lattices (L, ∨, ∧) and (M, ∨, ∧), a homomorphism of lattices is a function f : L → M with the properties that
f(x ∨ y) = f(x) ∨ f(y), and
f(x ∧ y) = f(x) ∧ f(y).
Thus f is a homomorphism of the two underlying semilattices. If the lattices are furthermore equipped with least elements 0 and greatest elements 1, then f should also preserve these special elements:
f(0) = 0, and f(1) = 1.
In the order-theoretical formulation, these conditions just state that a homomorphism of lattices is a function that preserves binary meets and joins. For bounded lattices, preservation of least and greatest elements is just preservation of join and meet of the empty set. Note that any homomorphism of lattices is necessarily monotone with respect to the associated ordering relation. For an explanation see the article on preservation of limits. The converse is of course not true: monotonicity by no means implies the required preservation properties. Using the standard definition of isomorphisms as invertible morphisms, one finds that an isomorphism of lattices is exactly a bijective lattice homomorphism. Lattices and their homomorphisms obviously form a category.

Properties of lattices
The definitions above already introduced the simple condition of being a bounded lattice. A number of other important properties, many of which lead to interesting special classes of lattices, will be introduced below.

A highly relevant class of lattices are the complete lattices. A lattice is complete if any of its subsets has both a join and a meet, which should be contrasted to the above definition of a lattice where one only requires the existence of all (non-empty) finite joins and meets. It turns out that the existence of all joins suffices to conclude the existence of all meets and vice versa. For more details on this basic result and some alternative sufficient conditions for completeness, see the article on completeness properties. Note also that complete lattices are always bounded. Examples of complete lattices include:
• The subsets of a given set, ordered by inclusion. The supremum is given by the union and the infimum by the intersection of subsets.
• The unit interval [0,1] and the extended real number line, with the familiar total order and the ordinary suprema and infima.
• The non-negative integers, ordered by divisibility. The least element of this lattice is the number 1, since it divides any other number. Maybe surprisingly, the greatest element is 0, because it can be divided by any other number. The supremum of finite sets is given by the least common multiple and the infimum by the greatest common divisor.
For infinite sets, the supremum will always be 0 while the infimum can well be greater than 1. For example, the set of all even numbers has 2 as the greatest common divisor. If 0 is removed from this structure it remains a lattice but ceases to be complete.
• The subgroups of a group, ordered by inclusion. The supremum is given by the subgroup generated by the union of the groups and the infimum is given by the intersection.
• The submodules of a module, ordered by inclusion. The supremum is given by the sum of submodules and the infimum by the intersection.
• The ideals of a ring, ordered by inclusion. The supremum is given by the sum of ideals and the infimum by the intersection.
• The open sets of a topological space, ordered by inclusion. The supremum is given by the union of open sets and the infimum by the interior of the intersection.
• The convex subsets of a real or complex vector space, ordered by inclusion. The infimum is given by the intersection of convex sets and the supremum by the convex hull of the union.
• The topologies on a set, ordered by inclusion. The infimum is given by the intersection of topologies, and the supremum by the topology generated by the union of topologies.
• The lattice of all transitive binary relations on a set.
• The lattice of all sub-multisets of a multiset.
• The lattice of all equivalence relations on a set; the equivalence relation ~ is considered to be smaller (or "finer") than ≈ if x~y always implies x≈y.
Many theorems of order theory take especially simple forms when stated for complete lattices. For example, the Knaster-Tarski theorem states that the set of fixed points of a monotone function on a complete lattice is again a complete lattice.

Since any lattice comes with two binary operations, it is natural to consider distributivity laws among them. A lattice (L, ∨, ∧) is distributive if the following condition is satisfied for every three elements x, y and z of L:
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z).
Maybe surprisingly, this condition turns out to be equivalent to its dual statement:
x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z).
Other characterizations exist and can be found in the article on distributive lattices. For complete lattices one can formulate various stronger properties, giving rise to the classes of frames and completely distributive lattices. An overview of these different notions is given in the article on distributivity in order theory.

Often one finds that distributivity is too strong a condition for certain applications. A strictly weaker property is modularity: a lattice (L, ∨, ∧) is modular if, for all elements x, y, and z of L, we have
(x ∧ z) ∨ (y ∧ z) = ((x ∧ z) ∨ y) ∧ z.
Another equivalent statement of this condition is as follows: if x ≤ z then for all y one has
x ∨ (y ∧ z) = (x ∨ y) ∧ z.
For example, the lattice of submodules of a module and the lattice of normal subgroups of a group have this special property. Furthermore, every distributive lattice is indeed modular.

Continuity and Algebraicity
In domain theory, one is often interested in approximating the elements in a partial order by "much simpler" elements. This leads to the class of continuous posets, consisting of posets where any element can be obtained as the supremum of a directed set of elements that are way-below the element. If one can additionally restrict to the compact elements of a poset for obtaining these directed sets, then the poset is even algebraic. Both concepts can be applied to lattices as follows:
• A continuous lattice is a complete lattice that is continuous as a poset.
• An algebraic lattice is a complete lattice that is algebraic as a poset.
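The divisibility example above invites a quick experiment. The following short Python sketch (added here for illustration; it is not part of the original article) models meet and join on positive integers as gcd and lcm and spot-checks the distributive law on a small sample; the function names are arbitrary.

from math import gcd
from itertools import product

def meet(a, b):
    # infimum under divisibility: greatest common divisor
    return gcd(a, b)

def join(a, b):
    # supremum under divisibility: least common multiple
    return a * b // gcd(a, b)

def distributive_on(sample):
    # check x ∧ (y ∨ z) == (x ∧ y) ∨ (x ∧ z) for every triple in the sample
    return all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
               for x, y, z in product(sample, repeat=3))

print(distributive_on(range(1, 40)))   # True: the divisibility lattice is distributive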
Both of these classes have interesting properties. For example, continuous lattices can be characterized as algebraic structures (with infinitary operations) satisfying certain identities. While such a characterization is not known for algebraic lattices, they can be described "syntactically" via Scott information systems.

Complements and Pseudo-complements
The concept of complements introduces the idea of "negation" into lattice theory. Consider a bounded lattice with greatest element 1 and least element 0. One says that an element x is a complement of an element y if the following hold:
x ∧ y = 0 and x ∨ y = 1.
A bounded lattice within which every element has some complement is called a complemented lattice. Note that this complement is neither required to be unique nor to be "special" in any sense among all existing complements. In contrast, a Boolean algebra has a unique complement for each element x which can thus be denoted by ¬x. In contrast, Heyting algebras are more general kinds of lattices, within which complements usually do not exist. However, each element x in a Heyting algebra has a pseudo-complement that is usually also denoted by ¬x. It is characterized as being greatest among all elements y with the property that x ∧ y = 0. If the pseudo-complements of a Heyting algebra are in fact complements, then it is a Boolean algebra.

Examples
• For any set A, the collection of all finite subsets of A (including the empty set) can be ordered via subset inclusion to obtain a lattice.
• The natural numbers in their common order are a lattice.
• None of the above lattices is bounded. However, any complete lattice especially is a bounded lattice.
• The set of compact elements of an arithmetic (complete) lattice is a lattice with a least element.

Free lattices
Using the standard definition of universal algebra, a free lattice over a generating set S is a lattice L together with a function i : S → L, such that any function f from S to the underlying set of some lattice M can be factored uniquely through a lattice homomorphism f′ from L to M. Stated differently, for every element s of S we find that f(s) = f′(i(s)) and that f′ is the only lattice homomorphism with this property. These conditions basically amount to saying that there is a functor from the category of sets and functions to the category of lattices and lattice homomorphisms which is left adjoint to the forgetful functor from lattices to their underlying sets.

We treat the case of bounded lattices, i.e. algebraic structures with the two binary operations ∨ and ∧ and the two constants (nullary operations) 0 and 1. The set of all correct (well-formed) expressions that can be formulated using these operations on elements from a given set of generators S will be called W(S). This set of words contains many expressions that turn out to be equal in any lattice. For example, if a is some element of S, then a ∨ 1 = 1 and a ∧ 1 = a. The word problem for lattices is the question of which of these words have to be identified. The answer to this problem is as follows. Define a relation <~ on W(S) by setting w <~ v iff one of the following holds:
• w = v (this can be restricted to the case where w and v are elements of S),
• w = 0 or v = 1,
• w = w1 ∨ w2 and both w1 <~ v and w2 <~ v hold,
• w = w1 ∧ w2 and either w1 <~ v or w2 <~ v holds,
• v = v1 ∨ v2 and either w <~ v1 or w <~ v2 holds,
• v = v1 ∧ v2 and both w <~ v1 and w <~ v2 hold.
This defines a preorder <~ on W(S). The partially ordered set induced by this preorder (i.e.
the set obtained by identifying all words w and v with w <~ v and v <~ w) is the free lattice on S. The required embedding i is the obvious mapping from a generator a to (the set of words equivalent to) the word a.

One of the consequences of this statement is that the free lattice on a three-element set of generators is already infinite. In fact, one can even show that every free lattice on three generators contains a sublattice which is free for a set of four generators. By induction this eventually yields a sublattice free on countably many generators. The case of lattices that are not bounded is treated similarly, using only the two binary operations in the above construction.

Important lattice-theoretic notions
In the following, let L be a lattice. We define some order-theoretic notions that are of particular importance in lattice theory.

An element x of L is called join-irreducible iff
• x = a ∨ b implies x = a or x = b for any a, b in L,
• if L has a 0, x is sometimes required to be different from 0.
When the first condition is generalized to arbitrary joins ⋁ a_i, x is called completely join-irreducible. The dual notion is called meet-irreducibility. Sometimes one also uses the terms ∨-irreducible and ∧-irreducible, respectively.

An element x of L is called join-prime iff
• x ≤ a ∨ b implies x ≤ a or x ≤ b,
• if L has a 0, x is sometimes required to be different from 0.
Again, this can be generalized to obtain the notion completely join-prime and dualized to yield meet-prime. Any join-prime element is also join-irreducible, and any meet-prime element is also meet-irreducible. If the lattice is distributive the converse is also true.

Other important notions in lattice theory are ideal and its dual notion filter. Both terms describe special subsets of a lattice (or of any partially ordered set in general). Details can be found in the respective articles.

A very good first introduction is given in the popular textbook by Davey and Priestley:
• B. A. Davey, H. A. Priestley: Introduction to Lattices and Order. Cambridge University Press, 2002. (ISBN 0521784514)
A more in-depth treatment can be found in Garrett Birkhoff's classic:
• G. Birkhoff: Lattice Theory. Volume 25 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, Rhode Island, 3rd edition, 1967.
The results on free lattices can also be found in the (otherwise not lattice-theoretical) book:
• P. T. Johnstone: Stone Spaces. Cambridge Studies in Advanced Mathematics 3, Cambridge University Press, 1982.
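As a companion to the word problem sketched above, here is a small illustrative Python transcription of the relation <~ (this code and its encoding of words are additions for illustration, not part of the article). Words are encoded as nested tuples: generators as ('gen', name), the bounds as ('0',) and ('1',), and compound words as ('join', w1, w2) or ('meet', w1, w2).

def leq(w, v):
    # w <~ v holds iff one of the six clauses listed above holds
    return (
        (w == v and w[0] == 'gen')
        or w[0] == '0' or v[0] == '1'
        or (w[0] == 'join' and leq(w[1], v) and leq(w[2], v))
        or (w[0] == 'meet' and (leq(w[1], v) or leq(w[2], v)))
        or (v[0] == 'join' and (leq(w, v[1]) or leq(w, v[2])))
        or (v[0] == 'meet' and leq(w, v[1]) and leq(w, v[2]))
    )

a = ('gen', 'a')
one = ('1',)
print(leq(('meet', a, one), a) and leq(a, ('meet', a, one)))   # True: a ∧ 1 and a are identified

The recursion terminates because each recursive call shrinks one of the two words.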
{"url":"http://july.fixedreference.org/en/20040724/wikipedia/Lattice_(order)","timestamp":"2014-04-19T05:01:42Z","content_type":null,"content_length":"28377","record_id":"<urn:uuid:acbabbb3-85ca-471c-bf2c-a84b6ae01a61>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematician proves there are infinitely many pairs of prime numbers less than 70 million units apart May 15th, 2013 in Other Sciences / Mathematics Prime Numbers Prime Numbers (Phys.org) —Mathematician Yitang Zhang of the University of New Hampshire, appears to have taken a major step in solving the twin prime conjecture. He's come up with a mathematical proof that shows that the number of pairs of prime numbers that exist that are less than 70 million units apart is infinite. His proof is currently under review for publication in the journal Annals of Mathematics. The twin prime conjecture has puzzled mathematicians for nearly as long as they have known of the existence of prime numbers (whole numbers divisible by themselves and one)—going all the way back to Euclid. An interesting aspect of prime numbers is they come farther and farther apart as more are found—except sometimes, they don't—sometimes instead, they come in pairs: 11 and 13 for example, or 41 and 43. The twin prime conjecture states that there are infinitely many pairs, but no one has been able to prove it. The closest anyone has come is when a team of three mathematicians demonstrated back in 2005 that the number of prime pairs that differ by only 16 units is infinite. The problem there was that it was based on another unproven conjecture. In this new work, Zhang has shown, using nothing but standard mathematical techniques that the number of pairs of prime numbers that exist that are 70 million units apart, or less, is infinite—sans unproven conjecture. Mathematicians note that 70 million might seem like a lot to those outside the field, but inside the field, it's a tremendous breakthrough. This is because it proves that the size of the stretches between pairs doesn't keep growing larger forever—a baseline exists—a baseline that could very well be reduced to a smaller number, though no one is yet suggesting it might ever come down to just 2. Zhang has said in interviews that the idea for his proof came to him while he was visiting with a friend last summer. He's been working on it ever since. And now that he's made his proof public, other mathematicians have been reviewing it as well, and thus far, no one has spotted any problems with it. More information: via Nature doi:10.1038/nature.2013.12989 © 2013 Phys.org "Mathematician proves there are infinitely many pairs of prime numbers less than 70 million units apart." May 15th, 2013. http://phys.org/news/
{"url":"http://phys.org/print287828042.html","timestamp":"2014-04-17T14:04:05Z","content_type":null,"content_length":"7237","record_id":"<urn:uuid:b9eef1c3-a20f-4985-870d-43d3c6adaa49>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
FairVote - Algorithm This page describes the variables used in the model, the algorithm used for four different types of races, and how we measure the accuracy of a projected party and winning percentage. This information is common to the methods used for 1996-2000 as well as for 2002. For the 2002 projections, however, we made two modifications to account for changes in districts due to redistricting and the lack of availability of data about the partisan nature of 60 of the 435 new districts. First, we adjusted incumbents' prior winning percentages according to the change in partisanship of the 2000 district compared to the 2002 district. For example, if a candidate received 58% in a district that became 5% stronger for the incumbent party, we would treat the prior race as a 63%. If the district became 5% less favorable to the incumbent, we would treat it as a 53%. Second, in districts that lacked 2002 partisanship data based on the Gore's percentage in the new district, we cautiously estimated a partisanship by subtracting 3% from the partisanship of corresponding 2000 district. Third, because our knowledge of the 2002 districts is limited, we increased the parameter, "Decrease for long term overachievers," from 42% for 1996-2000 to 60%. This means we make slightly more cautious projections for 2002. In all other respects, the model treats races from 1996-2000 identically to 2002 races. This section describes each variable used in the model, along with the default value and the category of seats it applies to. The variables typically refer to an incumbent’s weakest performance out of the last two elections. Variables related to projections Buffer for open seats (11%). This variable is amount by which partisanship is reduced to make an open seat projection. For example, a 70% district is reduced to a 59% projection. Increase for underachievers (33%). If a freshman runs behind her partisanship in her first election, her projection is raised 33% of the way up from her election result to the partisanship. For example, a freshman elected with 55% in 61% district is projected to win with 57% in her second election. Decrease for overachievers (67%). If a freshman or a two-term member runs ahead of their district partisanship, we reduce their projection 67% of the way back to the partisanship. For example, a candidate who wins with 61% in a 55% district is projected to win with 57%. Reduction for underachievers (1%). If a two-term or three-or-more term incumbent’s worst performance is below the district partisanship, the projection is 1% less than their worst performance. Previous race uncontested (35%). If a two-termer’s second election was uncontested, the incumbent is treated like a freshman. This variable represents the minimum winning margin by which a race is considered uncontested race. Decrease for long-term overachievers (60%). If a three-term incumbent’s worst performance is better than the partisanship of the district, the projection is reduced by 60% of the difference of partisanship and worst performance. Adjustment to better 2nd election (33%). If a two-termer’s second election was better than their first, then use this variable to interpolate from the stronger performance to the weaker performance to establish the “weakest performance” number. The purpose of this is to give more weight to the 2nd election. Variable related to national two-party vote Dem 2-party share (50%). 
Our model makes projections based on an assumption that the national two-party vote is evenly split between Democrats and Republicans. To examine the projections in the event the two-party vote is not evenly split, you can enter a different number in this cell.

Variables related to categories
No proj win (3%). If a projected result is between 47% and 53%, it is considered "no projected win."
Win (5%). If a projected win is between 53% and 55%, it is considered a "win" projection.
Comfortable (10%). If a projected result is between 55% and 60%, it is considered "comfortable."
Landslide (>10%). This is not a separate variable, as it encompasses all projections greater than "comfortable."

Algorithms for 4 different types of races

I. Open seats
Use partisanship to project the outcome, but then subtract 11% off the projected margin.
Ex: a 62% partisanship is a 24% margin, minus 11% = 13% margin, or 56.5% (comfortable).

There are 3 separate cases for incumbents. For 2002 projections, we begin by adjusting past performance to reflect the change in district partisanship from 2000 to 2002.
Ex: A 53.5% win in a district that changed from 52.1% to 54.3% picks up 2.2% and is treated as a 55.7% result.

II. Freshman
If past performance was below partisanship, the projection equals the past performance + 1/3 of the difference with partisanship. (Projection stronger than past performance.)
Ex: Past performance 42.5%, partisanship 48.5%, projection = 44.5%
If past performance was better than partisanship, the projection equals past performance – 2/3 of the difference with partisanship. (Projection weaker than past performance.)
Ex: Past performance 48.5%, partisanship 42.5%, projection = 44.5%

III. Two-termers
If the 2nd election was uncontested, treat the incumbent like a freshman, using the first election. If the 2nd election was contested, then consider the weakest of the past 2 performances. If the 2nd election was stronger than the first, adjust the past performance by interpolating from the stronger performance toward the weaker performance based on the variable. Then,
If past performance < partisanship, projection = performance - 2%
If past performance > partisanship, projection = past performance - 2/3 of the difference with partisanship.

IV. Long termers (3 or more)
Take the weakest of the past 2 performances.
If < p-ship, projection = performance - 2%.
If > p-ship, projection = interpolation from past performance back toward p-ship based on the variable.

Projection of party and margin
The projection is a percent (0-100) referring to the likely Dem vote. Based on that, we project a party and a range (landslide, comfortable, competitive or no projection).

Scheme for projection of party (columns give the incumbent party):

  Projection      Incumbent D    Incumbent R
  >= 50 + 3       D              Vulnerable
  <= 50 - 3       Vulnerable     R
  in between      No proj        No proj

This area in yellow shows how the projection number (0-100) and incumbent party combine to make a projection about which party will win the seat.

Scheme for projected winning range (columns give the projected winning party):

  Projection      No proj        Vulnerable     D/R
  >= 50 + 10      No proj        Vulnerable     Landslide
  >= 50 + 5       No proj        Vulnerable     Comfortable
  >= 50 + 0       No proj        Vulnerable     Competitive

For example, a Dem in a 57% projection seat is a D projection. A Dem in a 45% seat is classified as vulnerable. There is one I seat (Bernie Sanders, VT), and he is treated like a Democrat.
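The rules above are compact enough to restate in code. The following Python sketch is one reading of this page's description (the function names, the handling of percentages, and the omission of the redistricting and better-second-election adjustments are choices made here for illustration; this is not FairVote's spreadsheet).

def project_open_seat(partisanship, buffer=11.0):
    # open seat: shrink the partisanship margin by the buffer
    margin = 2.0 * (partisanship - 50.0)           # e.g. 62% partisanship -> 24-point margin
    return 50.0 + (margin - buffer) / 2.0

def project_incumbent(worst, partisanship, terms):
    # `worst` is the weaker of the last two results (percent), `terms` the terms served;
    # note the page quotes the underachiever reduction as 1% in the variable list and 2% here.
    if terms == 1:                                 # freshman
        if worst < partisanship:
            return worst + (partisanship - worst) / 3.0
        return worst - 2.0 * (worst - partisanship) / 3.0
    if terms == 2:                                 # two-termer
        if worst < partisanship:
            return worst - 2.0
        return worst - 2.0 * (worst - partisanship) / 3.0
    # three or more terms; 0.60 is the 2002 "long term overachievers" parameter
    if worst < partisanship:
        return worst - 2.0
    return worst - 0.60 * (worst - partisanship)

print(project_open_seat(62.0))                     # 56.5, as in the open-seat example
print(project_incumbent(42.5, 48.5, terms=1))      # 44.5, as in the freshman example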
For example, if a Dem is projected (i.e., the projected party is D and not No Projection or Vulnerable), the projected range is correct under the following conditions:

  Party    Projection     Correct if
  Dem      Competitive    Dem wins
  Dem      Comfortable    Dem % >= 55%
  Dem      Landslide      Dem % >= 60%

The same pattern applies for Rep seats. For example, a Rep "comfortable" projection is correct if the candidate wins with at least 55% of the vote.
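For completeness, here is an equally informal sketch of the party and range classification described in the two tables above (again an illustrative reading, not FairVote's code; the separate "win" category from the variable list is folded into "competitive", as the second table does).

def classify(projection, incumbent_party):
    # projection is the projected Dem share (0-100); incumbent_party is 'D' or 'R'
    if abs(projection - 50.0) < 3.0:
        return ('No projection', None)
    winner = 'D' if projection >= 53.0 else 'R'
    if winner != incumbent_party:
        return ('Vulnerable', None)
    margin = abs(projection - 50.0)
    if margin >= 10.0:
        return (winner, 'Landslide')
    if margin >= 5.0:
        return (winner, 'Comfortable')
    return (winner, 'Competitive')

print(classify(57.0, 'D'))   # ('D', 'Comfortable')
print(classify(45.0, 'D'))   # ('Vulnerable', None)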
{"url":"http://archive.fairvote.org/global/?page=492","timestamp":"2014-04-19T02:31:32Z","content_type":null,"content_length":"15409","record_id":"<urn:uuid:48ff00ee-0a98-428a-bce2-722a067997d4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmwood Park, NJ SAT Math Tutor Find an Elmwood Park, NJ SAT Math Tutor ...I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between. I also have experience teaching to the SAT and ... 22 Subjects: including SAT math, reading, chemistry, English ...I am confident in my English skills as well because learning other languages forces one to perform well in English. I live in New Hartford with my wife, two kids, and two puppies who are growing like weeds. If I can be of assistance, please do not hesitate to contact me! 45 Subjects: including SAT math, English, Spanish, reading ...Please consider contacting me to discuss your tutoring needs.I have an MBA from Stern School of Business of New York University. Intro to Business subject would cover topics of ethics, basic finance and accounting and budgeting, basic marketing, basic human resources, relationship to government ... 35 Subjects: including SAT math, English, reading, geometry ...Recent tutoring successes, especially in SAT Prep, have resulted in students receiving hefty scholarships (up to $50,000 a year for some) to colleges including Gettysburg College, Boston University, Drexel University, Rutgers University, Stony brook, Drew, Montclair, The College Of New Jersey, H... 33 Subjects: including SAT math, physics, calculus, GRE Even though I struggled with math in the past, today I'm a mechanical engineering major in one of the best engineering programs nationwide. Overcoming my math struggles, gave me special abilities to tutor and help others to overcome theirs. Let me help you discover the math wiz we all have in us. 12 Subjects: including SAT math, chemistry, French, calculus
{"url":"http://www.purplemath.com/Elmwood_Park_NJ_SAT_Math_tutors.php","timestamp":"2014-04-19T09:45:03Z","content_type":null,"content_length":"24279","record_id":"<urn:uuid:edc42391-3379-4d6d-985c-9122aa61f0bb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence Interval Comparison Calculator Features & Benefits • Computes the Adjusted-Wald, Wald, Exact and Score confidence intervals • Easily integrate the formulas into your spreadsheets • Specify any confidence level (e.g. 95%, 99%, 98%) • Supports both 1 and 2 Tailed Confidence Intervals • Provides both methods for computing a confidence interval when there are all successes or all failures. One method uses 1 -tailed critical value and the other uses a 2-tailed critical value. Who Should Buy Those who need a quick way to compute multiple binomial confidence intervals and to have access to the formulas in Excel. Select Sample Screen Shots Confidence Interval Comparsions Easily enter the number of successes and number of failures See the Detailed Calculations If you need to see how the Exact, Wald, Adjusted-Wald or Score intervals are calculated or want to integrate the formulas into your sheets, having the calculations is essential. Shows both calculations where there are all successes or all failures When there are all successes or all failures (for example 10/10) and you want a 95% 2-tailed confidence interval, you have a problem. A 2-tailed confidence interval means there is 1-(alpha/2) chance of the population being below the lower bound or above the upper bound of the interval. Since it is impossible to have anything above 1 (or 100%) you end up with a 97.5% confidence interval. Some calculators compensate for this by adjusting the confidence level to 90% so there is still a 5% chance of the proportion falling below the lower bound (instead of a 97.5% chance). There is some controversy over what the "correct' approach is. Without the adjustment, you will be overstating the width of the interval. With the adjustment you will understate the width under some conditions. This calculator provides both methods. Batch Confidence Intervals (Adjusted-Wald) If you've ever needed to compute a lot of confidence intervals fast, the batch functionality will come in handy. Just enter the number of successes and total number tested and copy formulas in excel. There's even a graph that will automatically update for you. Batch Confidence Intervals (Exact) While the adjusted-wald interval is the best all-around confidence interval, there may be times where you need the exact interval (such as in medical studies when you need to be absolutely certain the confidence interval coverage is at least 95%). Simply enter the number of successes and total tested for as many rows as you need. Binomial Confidence Intervals • Wald • Adjusted-Wald • Exact (Clopper-Pearson) • Score (Wilson Interval) What Customers are Saying: The confidence interval calculator is a great product. The batch feature that allows calculation of many CIs at once is a tremendous time saver. Many statistical packages still focus on statistical tests and do not calculate interval estimates, so this program fills an important gap. Polly Bijur Ph.D MPH Professor of Emergency Medicine, Epidemiology and Population Health Albert Einstein College of Medicine I like the overall calculator functions and found it to be useful. Orbital Science Corp. I needed to know how to perform a statistical test in Excel but could not figure it out. This calculator gave me exactly what I was looking for. Sharon Kaiser
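To make the interval types listed above concrete, here is a small illustrative Python sketch of the Wald, Adjusted-Wald (Agresti-Coull), and Score (Wilson) intervals. It is a standard textbook formulation added here for reference, not the vendor's spreadsheet code, and it uses the two-sided 95% critical value throughout.

from math import sqrt

Z = 1.959964    # two-sided 95% critical value of the standard normal

def wald(x, n, z=Z):
    p = x / n
    h = z * sqrt(p * (1.0 - p) / n)
    return max(0.0, p - h), min(1.0, p + h)

def adjusted_wald(x, n, z=Z):
    # add z^2/2 successes and z^2/2 failures, then apply the Wald formula
    n2 = n + z * z
    p2 = (x + z * z / 2.0) / n2
    h = z * sqrt(p2 * (1.0 - p2) / n2)
    return max(0.0, p2 - h), min(1.0, p2 + h)

def wilson(x, n, z=Z):
    # score interval: invert the normal approximation to the score test
    p = x / n
    d = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / d
    h = (z / d) * sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return max(0.0, center - h), min(1.0, center + h)

print(adjusted_wald(10, 10))   # all successes: the upper bound is clipped at 1, as discussed above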
{"url":"http://www.measuringusability.com/products/ciComparison","timestamp":"2014-04-20T15:51:11Z","content_type":null,"content_length":"19357","record_id":"<urn:uuid:02a5597d-4326-44c2-948f-dcfee998d634>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
We often think of math as being rather dry. Memorizing formulas. Calculating things. A race to a single right answer. But math can also be creative and inspiring — about discovering deeper patterns. Revealing hidden connections. Math as fun. The Julia Robinson Mathematics Festival — a single-day event for middle- and high-school kids — is a terrific way to experience the fun. “We wanted to emphasize fun rather than competition, so we decided to call it a festival,” says founder Nancy Blachman. I’ve been to several events and they are festive — tables filled with games, puzzles, colorful activities and kids having a great time. The next scheduled festival will take place at the University of California, Berkeley on Jan. 27, and our Numberplay puzzles for the next two weeks will feature puzzles from this upcoming event. Festival director Joshua Zucker will be chiming in with suggestions and ideas as we discuss each challenge. And with that, here’s Mr. Zucker with this week’s puzzle. Jump in and give it a try! The Trapezoidal Number Puzzle There are several ways to write 2013 as a sum of consecutive positive integers, such as 2013 = 670 + 671 + 672. How many ways are there in total? These ways are called trapezoidal representations, because you can line up the terms of the sum in rows to make a trapezoidal shape. Sometimes these numbers are also called “staircase numbers” for the same reason. The number 5, for example, has one trapezoidal representation: What numbers have no trapezoidal representations? How many trapezoidal representations does a googol (1 followed by 100 zeros) have? The Trapezoid Number Puzzle was one of the more fiercely attacked puzzles we’ve seen in some time. Here’s Joshua Zucker with the solution: The 13-Link Chain Puzzle Gary Antonick The Mathematical Sciences Research Institute in Berkeley, which is dedicated to the appreciation of the beauty and power of mathematical ideas, was host to a Celebration of Mind event this past Friday. It was held to honor the life and many interests of recreational mathematician Martin Gardner, whose puzzles have appeared so frequently in Numberplay. I was delighted to be a part of the celebration, and came away with several terrific puzzles. My favorite is the following scale and chain challenge, which was suggested by Dr. Ashok Vaish. The 13-Link Chain Puzzle You have a balance scale and a single chain with thirteen links. Each link of the chain weighs one ounce. How many links of the chain do you need to break in order to be able to weigh items from 1 to 13 ounces in 1-ounce increments? That’s it. Before getting to our recap of last week’s puzzle, I want to mention that David Suzuki hosted an episode of the TV series “The Nature of Things” about Martin Gardner’s many interests and countless friends. You’ll find the video at the end of this post. You’ll also find the complete set of Vi Hart’s hexaflexagon videos, including the fresh Hex Mex. Bon appétit! And now — our recap of last week’s puzzle. Recap: Martin Gardner’s The Two-Child Problem The puzzle: □ Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls? □ Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys? The solution: Golden Dragon was in early with this correct answer:
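For readers who want to experiment with the trapezoidal-number puzzle above, here is a small brute-force sketch (mine, not the festival's) that lists every way to write n as a sum of two or more consecutive positive integers. Brute force is hopeless for a googol, of course; the usual shortcut is to relate the count to the number of odd divisors of n.

def trapezoidal_representations(n):
    # try every starting value; keep runs of length >= 2 that sum exactly to n
    reps = []
    start = 1
    while 2 * start + 1 <= n:          # smallest two-term sum from `start` is 2*start + 1
        total, k = 0, start
        while total < n:
            total += k
            k += 1
        if total == n and k - start >= 2:
            reps.append((start, k - 1))   # the run start, start+1, ..., k-1
        start += 1
    return reps

reps = trapezoidal_representations(2013)
print(len(reps))                # how many ways for 2013
print((670, 672) in reps)       # True: 670 + 671 + 672, the run quoted in the puzzle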
{"url":"http://wordplay.blogs.nytimes.com/tag/berkeley-calif/","timestamp":"2014-04-17T07:11:05Z","content_type":null,"content_length":"30755","record_id":"<urn:uuid:d6c40128-94f1-45cc-954a-b464d2904d4d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the between estimator? Title Between estimators Author William Gould, StataCorp Date March 2001 I understand the basic differences between a fixed-effects and a random-effects model for a panel dataset, but what is the “between estimator”? The manual explains the command, but I cannot figure out what would lead one to choose (or not choose) the between estimator. Let’s start off by explaining cross-sectional time-series models. One usually writes cross-sectional time-series models as y_it = X_it*b + (complicated error term) and explores different ways of writing the complicated error term. In this discussion, I want to avoid that, so let’s just write y_it = X_it*b + noise I want to put the issue of noise aside and focus on b because, it turns out, b has some surprising meanings. Let’s focus on one of the X variables, say, x1: y_it = x1_it*b1 + x2_it*b2 + ... + noise b1 says that an increase in x1 of one unit leads me to expect, in all cases, that y will increase by b1. The emphasis here is on “in all cases”: I expect the same difference in y if 1. I observe two different subjects with a one-unit difference in x between them, and 2. I observe one subject whose x value increases by one unit. Some variables might act like that, but there is no reason to expect that all variables will. For example, pretend that y is income and x1 is “lives in the South of the United States”: 1. If I compare two different people, one who lives in the East (x1=0) and another who lives in the South (x1=1), I expect the earnings of the person living in the South to be lower because, on average, all prices and wages are lower in the South. That is, I expect the coefficient on x1, b1, will be less than 0. 2. On the other hand, if I observe a person living in the East (x1=0) who moves to the South (x1=1), I expect that the earnings increased, or why else would that person move? That is, I expect b1 will be greater than 0. There are really two kinds of information in cross-sectional time-series data: 1. The cross-sectional information reflected in the changes between subjects 2. The time-series or within-subject information reflected in the changes within subjects xtreg, be estimates using the cross-sectional information in the data. xtreg, fe estimates using the time-series information in the data. The random-effects estimator, it turns out, is a matrix-weighted average of those two results. Under the assumption that b1 really does have the same effect in the cross-section as in the time-series—and that b2, b3, ... work the same way—we can pool these two sources of information to get a more efficient estimator. Now let’s discuss testing the random-effects assumption. Indeed, it is the expected equality of these two estimators under the equality-of-effects assumption that leads to the application of the Hausman test to test the random-effects assumption. In the Hausman test, one tests that b_estimated_by_RE == b_estimated_by_FE As I just said, it is true that b_estimated_by_RE = Average(b_estimated_by_BE, b_estimated_FE) and so the Hausman test is a test that Average(b_estimated_by_BE, b_estimated_FE) == b_estimated_by_FE or equivalently, that b_estimated_by_BE == b_estimated_by_FE I am being loose with my math here but there is, in fact, literature forming the test in this way and, if you forced me to go back and fill in all the details, we would discover that the Hausman test is asymptotically equivalent to testing by more conventional means, but that is not important right now. 
What is important is that the Hausman test is cast in terms of efficiency, whereas thinking about b_estimated_by_BE == b_estimated_by_FE has recast the problem in terms of something real: 1. b1 from b_estimated_by_BE is what I would use to answer the question “What is the expected difference between Mary and Joe if they differ in x1 by 1?” 2. b1 from b_estimated_by_FE is what I would use to answer the question “What is the expected change in Joe’s value if his x1 increases by 1?” It may turn out that the answers to those two questions are the same (random effects), and it may turn out that they are different. More general tests exist, as well. If they are different, does that really mean the random-effects assumption is invalid? No, it just means the silly random-effects model that constrains all betas between person and within person to be equal is invalid. Within a random-effects model, there is nothing stopping me from saying that, for b1, the effects are different: . egen avgx1 = mean(x1), by(i) . gen deltax1 = x1 - avgx1 . xtreg y avgx1 deltax1 x2 x3 ..., re In the above model, _b[avgx1] measures the effect within the cross-section, and _b[deltax1] the effect within person. The model constrains the other effects to have the same effect within the cross-section and within-person (the random-effects model). In fact, if I make this decomposition for every variable in the model: . egen avgx1 = mean(x1), by(i) . gen deltax1 = x1 - avgx1 . egen avgx2 = mean(x2), by(i) . gen deltax2 = x2 - avgx2 . ... . xtreg y avgx1 deltax1 avgx2 deltax2 ..., re The coefficients I obtain will equal the coefficients that would be estimated separately by xtreg, be and xtreg, fe. Moreover, I now have the equivalent of the Hausman specification test, but recast with different words. My test amounts amounts to testing that the cross-sectional effects equal the within-person . test avgx1 = deltax1 . test avgx2 = deltax2, accum . ... Not only do I like these words better, but this test, I believe, is a better test than the Hausman test for testing random effects, because the Hausman test depends more on asymptotics. In particular, the Hausman test depends on the difference between two separately estimated covariance matrices being positive definite, something they just have to be, asymptotically speaking, under the assumptions of the test. In practice, the difference is sometimes not positive definite, and then we have to discuss how to interpret that result. In the above test, however, that problem simply cannot arise. This test has another advantage, too. I do not have to say that the cross-sectional and within-person effects are the same for all variables. I may very well need to include x1=“lives in the South” in my model—knowing that it is an important confounder and knowing that its effects are not the same in the cross-section as in the time-series—but my real interest is in the other variables. What I want to know is whether the data cast doubt on the assumption that the other variables have the same cross-sectional and time-series effects. So, I can test . test avgx2 = deltax2 . test avgx3 = deltax3, accum . ... and simply omit x1 from the test. Why, then, does Stata include xtreg, be? One answer is that it is a necessary ingredient in calculating random-effects results: the random-effects results are a weighted average of the xtreg, be and the xtreg, fe results. 
Another is that it is important in and of itself if you are willing to think a little differently from most people about cross-sectional time-series models. xtreg, be answers the question about the effect of x when x changes between person. This can usefully be compared with the results of xtreg, fe, which answers the question about the effect of x when x changes within person. Thinking about and discussing the between and within models is an alternative to discussing the structure of the residuals. I must say that I lose interest rapidly when researchers report that they can make important predictions about unobservables. My interest is piqued when researchers report something that I can almost feel, touch, and see.
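The between/within distinction above can also be seen in a few lines outside Stata. The following Python sketch (illustrative only, and not part of the FAQ) generates a tiny panel in which x is built to have a negative effect across units and a positive effect within units, echoing the "lives in the South" example, then recovers both slopes by OLS on group means and on demeaned data.

import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 200, 5

# Panel of x values: units differ in level, and x also moves within each unit.
x = rng.normal(size=(n_units, 1)) + rng.normal(size=(n_units, n_periods))
xbar = x.mean(axis=1, keepdims=True)

# Construct y so the cross-sectional slope is about -2 and the within-unit slope about +1.
y = -2.0 * xbar + 1.0 * (x - xbar) + rng.normal(scale=0.1, size=x.shape)
ybar = y.mean(axis=1, keepdims=True)

def ols_slope(xv, yv):
    # simple one-regressor OLS slope
    xv, yv = xv.ravel(), yv.ravel()
    xc, yc = xv - xv.mean(), yv - yv.mean()
    return float(xc @ yc) / float(xc @ xc)

print("between estimate:", ols_slope(xbar, ybar))          # about -2
print("within  estimate:", ols_slope(x - xbar, y - ybar))  # about +1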
{"url":"http://www.stata.com/support/faqs/statistics/between-estimator/","timestamp":"2014-04-20T18:33:14Z","content_type":null,"content_length":"32669","record_id":"<urn:uuid:08a76e62-bb86-4802-8025-ee2575e1517a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Effects of surface tension

March 5th 2008, 02:25 AM
Effects of surface tension
What I have understood about surface tension is that it lays a pressure on the surface in proportion to the bending of it, the angular change per meter of surface, which has the unit $[rad/m] = [m^{-1}]$. And pressure has the unit $N/m^2$. So surface tension would have the unit $\left[\frac{N/m^2}{m^{-1}}\right]=[N/m]$, which it has. Now, is it so that the pressure applied to the surface is given by
$P=S\cdot\frac{\delta^2 y}{\delta x^2}$
where S is the surface tension, and x and y have the unit $[m]$, so $\frac{\delta^2 y}{\delta x^2}$ has the unit $[m^{-1}]$? $\frac{\delta^2 y}{\delta x^2}$ is the same as $\frac{\delta\alpha}{\delta x}$, where $\alpha$ is the angle ( $\alpha = \frac{\delta y}{\delta x}$ for small $\alpha$).

March 5th 2008, 09:23 AM
Quote: What I have understood about surface tension is that it lays a pressure on the surface in proportion to the bending of it, the angular change per meter of surface, which has the unit $[rad/m] = [m^{-1}]$. And pressure has the unit $N/m^2$. So surface tension would have the unit $\left[\frac{N/m^2}{m^{-1}}\right]=[N/m]$, which it has. Now, is it so that the pressure applied to the surface is given by $P=S\cdot\frac{\delta^2 y}{\delta x^2}$ where S is the surface tension, and x and y have the unit $[m]$, so $\frac{\delta^2 y}{\delta x^2}$ has the unit $[m^{-1}]$? $\frac{\delta^2 y}{\delta x^2}$ is the same as $\frac{\delta\alpha}{\delta x}$, where $\alpha$ is the angle ( $\alpha = \frac{\delta y}{\delta x}$ for small $\alpha$).
If I am understanding you correctly then you are right. There is also another feature of surface tension: the surface formed contains the least possible amount of energy. In this way you can predict the equation for the surface. (There's a partial differential equation for this, but I don't remember it and I'm too lazy right now to look it up. :) )

March 11th 2008, 04:52 PM
I found a picture on the wikipedia article which made it a bit clearer to me. I haven't thought of surface tension this way before, like something is pulling sideways in the surface. This clearly motivates the tension part of the name. ;)
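As a footnote to the thread (added here, not by the original posters): the relation guessed at above is the small-slope limit of the usual Young-Laplace law. In one dimension,

\[
\Delta P = S\,\kappa,
\qquad
\kappa = \frac{y''}{\bigl(1 + (y')^2\bigr)^{3/2}} \approx \frac{d^2 y}{dx^2}
\quad\text{for } |y'| \ll 1,
\qquad\Rightarrow\qquad
\Delta P \approx S\,\frac{d^2 y}{dx^2},
\]

which also reproduces the dimensional check in the first post: $[N/m]\cdot[m^{-1}] = [N/m^2]$.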
{"url":"http://mathhelpforum.com/advanced-applied-math/30035-effects-surface-tension-print.html","timestamp":"2014-04-16T14:47:34Z","content_type":null,"content_length":"10611","record_id":"<urn:uuid:57633178-7676-4a2b-9028-92925ddd7d31>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory Small_Step header "Small-Step Semantics of Commands" theory Small_Step imports Star Big_Step begin subsection "The transition relation" text_raw{*\snip{SmallStepDef}{0}{2}{% *} small_step :: "com * state => com * state => bool" (infix "->" 55) Assign: "(x ::= a, s) -> (SKIP, s(x := aval a s))" | Seq1: "(SKIP;;c⇩[2],s) -> (c⇩[2],s)" | Seq2: "(c⇩[1],s) -> (c⇩[1]',s') ==> (c⇩[1];;c⇩[2],s) -> (c⇩[1]';;c⇩[2],s')" | IfTrue: "bval b s ==> (IF b THEN c⇩[1] ELSE c⇩[2],s) -> (c⇩[1],s)" | IfFalse: "¬bval b s ==> (IF b THEN c⇩[1] ELSE c⇩[2],s) -> (c⇩[2],s)" | While: "(WHILE b DO c,s) -> (IF b THEN c;; WHILE b DO c ELSE SKIP,s)" small_steps :: "com * state => com * state => bool" (infix "->*" 55) where "x ->* y == star small_step x y" subsection{* Executability *} code_pred small_step . values "{(c',map t [''x'',''y'',''z'']) |c' t. (''x'' ::= V ''z'';; ''y'' ::= V ''x'', <''x'' := 3, ''y'' := 7, ''z'' := 5>) ->* (c',t)}" subsection{* Proof infrastructure *} subsubsection{* Induction rules *} text{* The default induction rule @{thm[source] small_step.induct} only works for lemmas of the form @{text"a -> b ==> …"} where @{text a} and @{text b} are not already pairs @{text"(DUMMY,DUMMY)"}. We can generate a suitable variant of @{thm[source] small_step.induct} for pairs by ``splitting'' the arguments @{text"->"} into pairs: *} lemmas small_step_induct = small_step.induct[split_format(complete)] subsubsection{* Proof automation *} declare small_step.intros[simp,intro] text{* Rule inversion: *} inductive_cases SkipE[elim!]: "(SKIP,s) -> ct" thm SkipE inductive_cases AssignE[elim!]: "(x::=a,s) -> ct" thm AssignE inductive_cases SeqE[elim]: "(c1;;c2,s) -> ct" thm SeqE inductive_cases IfE[elim!]: "(IF b THEN c1 ELSE c2,s) -> ct" inductive_cases WhileE[elim]: "(WHILE b DO c, s) -> ct" text{* A simple property: *} lemma deterministic: "cs -> cs' ==> cs -> cs'' ==> cs'' = cs'" apply(induction arbitrary: cs'' rule: small_step.induct) apply blast+ subsection "Equivalence with big-step semantics" lemma star_seq2: "(c1,s) ->* (c1',s') ==> (c1;;c2,s) ->* (c1';;c2,s')" proof(induction rule: star_induct) case refl thus ?case by simp case step thus ?case by (metis Seq2 star.step) lemma seq_comp: "[| (c1,s1) ->* (SKIP,s2); (c2,s2) ->* (SKIP,s3) |] ==> (c1;;c2, s1) ->* (SKIP,s3)" by(blast intro: star.step star_seq2 star_trans) text{* The following proof corresponds to one on the board where one would show chains of @{text "->"} and @{text "->*"} steps. 
*} lemma big_to_small: "cs => t ==> cs ->* (SKIP,t)" proof (induction rule: big_step.induct) fix s show "(SKIP,s) ->* (SKIP,s)" by simp fix x a s show "(x ::= a,s) ->* (SKIP, s(x := aval a s))" by auto fix c1 c2 s1 s2 s3 assume "(c1,s1) ->* (SKIP,s2)" and "(c2,s2) ->* (SKIP,s3)" thus "(c1;;c2, s1) ->* (SKIP,s3)" by (rule seq_comp) fix s::state and b c0 c1 t assume "bval b s" hence "(IF b THEN c0 ELSE c1,s) -> (c0,s)" by simp moreover assume "(c0,s) ->* (SKIP,t)" show "(IF b THEN c0 ELSE c1,s) ->* (SKIP,t)" by (metis star.simps) fix s::state and b c0 c1 t assume "¬bval b s" hence "(IF b THEN c0 ELSE c1,s) -> (c1,s)" by simp moreover assume "(c1,s) ->* (SKIP,t)" show "(IF b THEN c0 ELSE c1,s) ->* (SKIP,t)" by (metis star.simps) fix b c and s::state assume b: "¬bval b s" let ?if = "IF b THEN c;; WHILE b DO c ELSE SKIP" have "(WHILE b DO c,s) -> (?if, s)" by blast moreover have "(?if,s) -> (SKIP, s)" by (simp add: b) ultimately show "(WHILE b DO c,s) ->* (SKIP,s)" by(metis star.refl star.step) fix b c s s' t let ?w = "WHILE b DO c" let ?if = "IF b THEN c;; ?w ELSE SKIP" assume w: "(?w,s') ->* (SKIP,t)" assume c: "(c,s) ->* (SKIP,s')" assume b: "bval b s" have "(?w,s) -> (?if, s)" by blast moreover have "(?if, s) -> (c;; ?w, s)" by (simp add: b) moreover have "(c;; ?w,s) ->* (SKIP,t)" by(rule seq_comp[OF c w]) ultimately show "(WHILE b DO c,s) ->* (SKIP,t)" by (metis star.simps) text{* Each case of the induction can be proved automatically: *} lemma "cs => t ==> cs ->* (SKIP,t)" proof (induction rule: big_step.induct) case Skip show ?case by blast case Assign show ?case by blast case Seq thus ?case by (blast intro: seq_comp) case IfTrue thus ?case by (blast intro: star.step) case IfFalse thus ?case by (blast intro: star.step) case WhileFalse thus ?case by (metis star.step star_step1 small_step.IfFalse small_step.While) case WhileTrue thus ?case by(metis While seq_comp small_step.IfTrue star.step[of small_step]) lemma small1_big_continue: "cs -> cs' ==> cs' => t ==> cs => t" apply (induction arbitrary: t rule: small_step.induct) apply auto lemma small_big_continue: "cs ->* cs' ==> cs' => t ==> cs => t" apply (induction rule: star.induct) apply (auto intro: small1_big_continue) lemma small_to_big: "cs ->* (SKIP,t) ==> cs => t" by (metis small_big_continue Skip) text {* Finally, the equivalence theorem: theorem big_iff_small: "cs => t = cs ->* (SKIP,t)" by(metis big_to_small small_to_big) subsection "Final configurations and infinite reductions" definition "final cs <-> ¬(EX cs'. cs -> cs')" lemma finalD: "final (c,s) ==> c = SKIP" apply(simp add: final_def) apply(induction c) apply blast+ lemma final_iff_SKIP: "final (c,s) = (c = SKIP)" by (metis SkipE finalD final_def) text{* Now we can show that @{text"=>"} yields a final state iff @{text"->"} terminates: *} lemma big_iff_small_termination: "(EX t. cs => t) <-> (EX cs'. cs ->* cs' ∧ final cs')" by(simp add: big_iff_small final_iff_SKIP) text{* This is the same as saying that the absence of a big step result is equivalent with absence of a terminating small step sequence, i.e.\ with nontermination. Since @{text"->"} is determininistic, there is no difference between may and must terminate. *}
{"url":"http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/library/HOL/HOL-IMP/Small_Step.html","timestamp":"2014-04-17T12:34:41Z","content_type":null,"content_length":"21689","record_id":"<urn:uuid:f0a5a249-b050-4807-8102-49c65ce4d3c1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
POOMA Tutorial 2 Red/Black Update Red-Black Update Passing Arrays to Functions Calling the Function A Note on Expressions Using Two-Dimensional Ranges Periodic Boundary Conditions Operations and Their Results This tutorial shows how Range objects can be used to specify more general multi-valued array indices. It also introduces the ConstArray class, and delves a bit more deeply into the similarities between POOMA's arrays and the Standard Template Library's iterators. Jacobi iteration is a good general-purpose relaxation method, but there are several ways to speed up its convergence rate. One of these, called red-black updating, can also reduce the amount of memory that a program requires. Imagine that the array's elements are alternately colored red and black, like the squares on a checkerboard. In even-numbered iterations, the red squares are updated using the values of their black neighbors; on odd-numbered iterations, the black squares are updated using the red squares' values. These updates can clearly be done in place, without any need for temporaries, and yield faster convergence for an equivalent number of calculations than simple Jacobi iteration. A complete program that implements this is shown below (and is included in the release as examples/Solvers/RBJacobi). Its key elements are the declaration and initialization of two Range objects on line 37, the definition of the function that applies Jacobi relaxation on a specified domain on lines 9-17, and the four calls to that function on lines 43-47. The sections following the program source discuss each of these points in turn. 01 #include "Pooma/Arrays.h" 03 #include <iostream> 05 // The size of each side of the domain. 06 const int N = 20; 08 // Apply a Jacobi iteration on the given domain. 09 void 10 ApplyJacobi( 11 const Array<2> & V, // to be relaxed 12 const ConstArray<2> & b, // fixed term 13 const Range<1> & I, // range on first axis 14 const Range<1> & J // range on second axis 15 ){ 16 V(I,J) = 0.25 * (V(I+1,J) + V(I-1,J) + V(I,J+1) + V(I,J-1) - b(I,J)); 17 } 19 int 20 main( 21 int argc, // argument count 22 char* argv[] // argument list 23 ){ 24 // Initialize POOMA. 25 Pooma::initialize(argc, argv); 27 // The array we'll be solving for. 28 Array<2> V(N, N); 29 V = 0.0; 31 // The right hand side of the equation. 32 Array<2> b(N,N); 33 b = 0.0; 34 b(N/2, N/2) = -1.0; 36 // The interior domain, now with stride 2. 37 Range<1> I(1, N-3, 2), J(1, N-3, 2); 39 // Iterate 100 times. 40 for (int iteration=0; iteration<100; ++iteration) 41 { 42 // red 43 ApplyJacobi(V, b, I, J); 44 ApplyJacobi(V, b, I+1, J+1); 45 // black 46 ApplyJacobi(V, b, I+1, J); 47 ApplyJacobi(V, b, I, J+1); 48 } 50 // Print out the result. 51 std::cout << V << std::endl; 53 // Clean up and report success. 54 Pooma::finalize(); 55 return 0; 56 } Our first requirement is a simple, efficient way to specify non-adjacent array elements. POOMA borrows the terminology of Fortran 90 and other data-parallel languages, referring to the spacing between a sequence of index values as the sequence's stride. For example, the sequence of indices {1,3,5,7} has a stride of 2, while the sequence {8,13,18,23,28} has a stride of 5, and the sequence {10,7,4,1} has a stride of -3. POOMA programs represent index sequences with non-unit strides using Range objects. The templated class Range is a generalization of the Interval class seen in the previous tutorial (although for implementation reasons Interval is not derived from Range). 
When a Range is declared, the program must specify its rank (i.e., the number of dimensions it spans). The object's constructor parameters then specify the initial value of the sequence it represents, the upper bound on the sequence's value (or lower bound, if the stride is negative), and the actual stride. For example, the three sequences in the previous paragraph would be declared as: Range<1> first ( 1, 7, 2); Range<1> second( 8, 30, 5); Range<1> third (10, 0, -3); Note that the range's bound does not have to be a value in the sequence: an upward range stops at the greatest sequence element less than or equal to the bound, while a downward range stops at the smallest sequence element greater than or equal to the bound. This conforms to the meaning of the Fortran 90 triplet notation. It may seem redundant to define a separate class for Interval, since it is just a Range with a stride of 1. However, the use of an Interval is a signal that certain optimizations are possible during compilation that take advantage of Interval's unit stride. These optimizations cannot efficiently be deferred until the program is executing, since that would, in effect, require a conditional inside an inner loop. Another reason for making Interval and Range different classes is that Intervals can be used when declaring Array dimensions, but Ranges cannot, since Arrays must always have unit The previous tutorial said that the use of a non-scalar index as an array subscript selected a section from that array. The way this is implemented is tied into POOMA's notion of an engine. Arrays are just handles on engines, which are entities that give the appearance of managing actual data. Engines come in two types: storage engines, which actually contain data, and proxy engines, which can alias storage engines' data areas, calculate data values on demand, or do just about anything else in order to give the appearance there's an actual data area in there somewhere. When an Array is declared, a storage engine is created to store that array's elements. When that array is subscripted with an Interval or a Range, the temporary Array that is created is bound to a view engine, which aliases the memory of the storage engine. Similarly, when an Array or ConstArray is passed by value to a function, the parameter is given a view engine, so that the values in the argument are aliased, rather than being copied. This happens in the calls to ApplyJacobi(), which is discussed below. POOMA's engine-based architecture allows it to implement a number of useful tools efficiently. One of the simplest of these is the ConstantFunction engine, which provides a way to make a scalar behave like an array. For example, the following statements: ConstArray<1, double, ConstantFunction> c(10); produce a full-featured read-only array that returns 3.14 for all elements. This is more efficient and uses less storage than making a Brick array with constant values. Engines that select components from arrays of structured types, or present arrays whose values are calculated on the fly as simple functions of their indices, are discussed in Tutorial 4 and Tutorial 6. Lines 9-17 of this program define a function that applies Jacobi relaxation to a specified subset of the elements of an array. The actual calculation appears identical to that seen in the previous tutorial. However, the function's parameter declarations specify that I and J are Range objects, instead of Intervals. 
This means that the set of elements being read or written is guaranteed to be regularly spaced, although the actual spacing is not known until the program is run. Another new feature in this function declaration is the use of the class ConstArray. Declaring something to be of type ConstArray is not the same as declaring it to be a const Array. As mentioned earlier, POOMA's Array classes are handles on actual data storage areas. If something is declared to be a const Array, it cannot itself be modified, but the data it refers to can be. This is illustrated in line 16, which modifies the elements of V even though it is declared const. Put another way, the following is perfectly legal: Array<1> original(10); const Array<1>& reference = original; reference(4) = 3.14159; If an immutable array is really desired, the program must use the class ConstArray. This class overloads the element access method operator() to return a constant reference to an underlying data element, rather than a mutable reference. As a result, the following code would fail to compile: Array<1> original(10); ConstArray<1>& reference = original; reference(4) = 3.14159; since the assignment on its third line is attempting to overwrite a const reference. In fact, Array is derived from ConstArray by adding assignment and indexing that return mutable references. This allows an Array to be used as a ConstArray, but not vice versa. There is a subtle issue here though. One cannot initialize a ConstArray object with an Array object. The following code would fail to Array<1> a(10); ConstArray<1> ca(a); This problem results from a design decision to allow a ConstArray to be constructed with an arbitrary domain: template<class Sub1> ConstArray(const Sub1 & s1); While an Array is a ConstArray, this function will be chosen by C++ compilers over the copy constructor because an exact match is preferred over a promotion to a base class. To avoid this problem, pass arrays by reference. It is good programming practice to use ConstArray wherever possible, both because it documents the way the particular array is being used, and because it makes it harder (although not impossible) for functions to have inadvertent side effects. It is important to note that the Range arguments to ApplyJacobi() must be defined as const references. The reason for this is that C++ does not allow programs to bind non-const references to temporary variables. For example, the following code is illegal: void fxn(int& i) void caller() int a = 5; fxn(a + 3); Similarly, when the main body of the relaxation program adds offsets to the Range objects I and J on lines 44, 46, and 47, the overloaded addition operator creates a temporary object. ApplyJacobi() must therefore declare its corresponding arguments to be const Range<1>&. The bottom line is that if a routine can get a temporary object, arguments should be passed by value or by const reference. If there is no possibility of the routine getting a temporary, arguments can be declared to be non-const reference. For example: template<int D, class T, class E> void f(const Array<D, T, E>& a); template<int D, class T, class E> void g(Array<D, T, E> a); template<int D, class T, class E> void h(Array<D, T, E>& a); void example() Interval<3> I(...); Array<3> x(...); f(x); // OK g(x); // OK h(x); // OK f(x(I)); // OK g(x(I)); // OK h(x(I)); // Bad, x(I) generates a temporary. Note again that in the functions f(), g(), and h(), the array argument a can appear on the left hand side of an assignment. 
This is because Array is like an STL iterator: a const iterator or const Array can be dereferenced, it just can't be modified itself. If you want to ensure that the array itself can't be changed, use ConstArray. Lines 43-47 bring all of this together by passing the arrays V and b by value to ApplyJacobi(). The program makes four calls to this function; the first pair update the red array elements, while the second pair update the black array elements. To see why two calls are needed to update each pair, consider the fact that each Range object specifies one half of the array's elements. The use of two orthogonal Ranges therefore specifies (1/2)^2= 1/4 of the array's elements. Simple counting rules of this kind are a useful check on the correctness of complicated subscript expressions. As discussed above, each call to ApplyJacobi() constructs one temporary Array and one temporary ConstArray, each of which is bound to a view engine instead of a storage engine. Since these temporary objects are allocated automatically, they are also automatically destroyed when the function returns. POOMA uses reference counting to determine when the last handle on an actual area of array storage has been destroyed, and releases that area's memory at that time. Note that in this case, both arrays are bound to view engines, which do not have data storage areas of their own, so creating and destroying ApplyJacobi()'s arguments is very fast. As you may have guessed from the preceding discussion, POOMA expressions are first-class non-writable Arrays with an expression engine. As a consequence, expressions can be subscripted directly, as Array<1> a(Interval<1>(-4, 0)), b(5), c(5); for (int i = 0; i < 5; i++) c(i) = (a + 2.0 * b)(i); This is equivalent, both semantically and in performance, to the loop: for (int i = 0; i < 5; i++) c(i) = a(i - 4) + 2.0 * b(i); Note that the offsetting of the non-zero-based arrays in expressions is handled automatically by POOMA. POOMA also now includes a function called iota(), which allows applications to initialize array elements in parallel using expressions that depend on elements' indices. Instead of writing a sequential loop, such as: for (i = 0; i < n1; ++i) for (j = 0; j < n2; ++j) a(i,j) = sin(i)+j*5; a program could simply use: a = sin(iota(n1,n2).comp(0)) + iota(n1,n2).comp(1)*5; In general, iota(domain) returns an Array whose elements are vectors, such that iota(domain)(i,j) is Vector<2,int>(i,j). These values can be used in expressions, or stored in objects, as in: Iota<2>::Index_t I(iota(n1,n2).comp(0)); Iota<2>::Index_t J(iota(n1,n2).comp(1)); a = sin(I*0.2) + J*5; As a general rule, whenever a set of objects are always used together, they should be combined into a single larger structure. If we examine the example program shown at the start of this tutorial, we can see that the two Range objects used to subscript arrays along their first and second axes are created in the same place, passed as parameters to the same function, and always used as a pair. We could therefore improve this program by combining these two objects in some way. In POOMA, that way is to use a 2-dimensional Interval or Range instead of a pair of 1-dimensional Intervals or Ranges. A 2-dimensional Interval is just the cross-product of its 1-dimensional constituents: it specifies a dense rectangular patch of an array. 
Similarly, a 2-dimensional Range is a generalization of the red or black squares on a checkerboard: the elements it specifies are regularly spaced, but need not have the same spacing along different axes. An N-dimensionalInterval is declared in the same way as its 1-dimensional cousin. An N-dimensional Interval is usually initialized by giving its constructor N 1-dimensional Intervals as arguments, as shown in the following example: Interval<2> calc( Interval<1>(1, N), Interval<1>(1, N) ); Multi-dimensional POOMA arrays can be subscripted with any combination of 1-, 2-, and higher-dimensional indices, so long as the total dimensionality of those indices equals the dimension of the array. Thus, a 4-dimensional array can be subscripted using: • four 1-dimensional indices • a 2-dimensional index and a pair of 1-dimensional indices (in any order) • a pair of 2-dimensional indices • one 3-dimensional index and one 1-dimensional index (in any order); or • a single 4-dimensional index. If only a single array element is required, a new templated index class called Loc can be used as an index. Like other domain classes, this class can specify up to seven dimensions; unlike other domain classes, it only specifies a single location along each axis. Thus, the declaration: Loc<2> origin(0, 0); specifies the origin of a grid, while the declaration: Loc<3> centerBottom(N/2, N/2, 0); specifies the center of the bottom face of an N×N×N rectangular block. Loc objects are typically used to specify key points in an array, or as offsets for specifying shifted domains. The latter of these uses is shown in the function ApplyJacobi() in the program below (which is included in the release as examples/Solvers/RBJacobi). This program re-implements the red/black relaxation scheme introduced at the start of this tutorial using 2-dimensional subscripting: 01 #include "Pooma/Arrays.h" 03 #include <iostream> 05 // The size of each side of the domain. Must be even. 06 const int N = 20; 08 // Apply a Jacobi iteration on the given domain. 09 void 10 ApplyJacobi( 11 const Array<2> & V, // to be relaxed 12 const ConstArray<2> & b, // fixed term 13 const Range<2> & IJ // region of calculation 14 ){ 15 V(IJ) = 0.25 * (V(IJ+Loc<2>(1, 0)) + V(IJ+Loc<2>(-1, 0)) + 16 V(IJ+Loc<2>(0, 1)) + V(IJ+Loc<2>( 0, -1)) - b(IJ)); 17 } 19 int 20 main( 21 int argc, // argument count 22 char* argv[] // argument vector 23 ){ 24 // Initialize POOMA. 25 Pooma::initialize(argc, argv); 27 // The calculation domain. 28 Interval<2> calc( Interval<1>(1, N-2), Interval<1>(1, N-2) ); 30 // The domain with guard elements on the boundary. 31 Interval<2> guarded( Interval<1>(0, N-1) , Interval<1>(0, N-1) ); 33 // The array we'll be solving for. 34 Array<2> V(guarded); 35 V = 0.0; 37 // The right hand side of the equation. 38 Array<2> b(calc); 39 b = 0.0; 40 b(N/2, N/2) = -1.0; 42 // The interior domain, now with stride 2. 43 Range<2> IJ( Range<1>(1, N-3, 2), Range<1>(1, N-3, 2) ); 45 // Iterate 100 times. 46 for (int i=0; i<100; ++i) 47 { 48 ApplyJacobi(V, b, IJ); 49 ApplyJacobi(V, b, IJ+Loc<2>(1, 1)); 50 ApplyJacobi(V, b, IJ+Loc<2>(1, 0)); 51 ApplyJacobi(V, b, IJ+Loc<2>(0, 1)); 52 } 54 // Print out the result. 55 std::cout << V << std::endl; 57 // Clean up and report success 58 Pooma::finalize(); 59 return 0; 60 } The keys to this version of red/black relaxation are the Interval declarations on lines 28 and 31, and the array declarations on lines 34 and 38. 
The first Interval declaration defines the N-2 × N-2 region on which the calculation is actually done; the region defined by the second declaration pads the first with an extra column on each side, and an extra row on the top and the bottom. These extra elements are not part of the problem domain proper, but instead are used to ensure zero boundary conditions. Any other arbitrary boundary condition could be represented equally well by assigning values to these padding elements. Using Interval objects that run from 1 to N-2 to specify the dimensions of the Interval object calc defined on line 28 means that when the array b is defined (line 38), its legal indices also run from 1 to N-2 along each axis. While POOMA uses 0..N-1 indexing by default, any array can have arbitrary lower and upper bounds along any axis, as this example shows. This is particularly useful when the natural representation for a problem uses a domain whose indices are in -N..N. Note that line 31 could equally well have been written: Interval<2> guarded(N, N); In other words, integers work inside of Domain declarations the same way they do in Array declarations. If a program needs to declare a point, it can use: Interval<2> x(Interval<1>(2, 2), Interval<1>(3, 2)); The declaration of calc on line 28 does need to be written as it is because the axes start at 1. Examination of the update loop on lines 48-51, and the update assignment statement on lines 15-16, shows that the padding elements are never assigned to. Instead, the assignment on lines 15-16 only overwrites the interior of the array V. Note also that the domain used for the array b, which represents the fixed term in the Laplace equation, is only defined on the inner N-2 × N-2 domain. While the memory this saves is inconsequential in this 20×20 case, the savings grow quickly as the size and dimension of the problems being tackled increase. Our last look at red/black updating replaces the zero boundary condition of the previous examples with periodic boundaries in both directions. As is usual in programs of this kind, this is implemented by copying the values on one edge of the array into the padding elements next to the array's opposite edge after each relaxation iteration. For example, the padding elements to the right of the last column of the array are filled with the values from the first actual column of the array, and so on. In the program shown below (included in the release as examples/Solvers/ PeriodicJacobi), the "actual" values of the array V are stored in the region [1..N]×[1..N]. Elements with an index of either 0 or N+1 on either axis are padding, and are to be overwritten during each The function that actually updates the periodic boundary conditions is called ApplyPeriodic(), and is shown on lines 20-33 below. The key to understanding this code is that when a "naked" integer is used to subscript a POOMA array, the result of that subscripting operation is reduced by one dimension in relation to that of the subscripted array. Thus, if a 2-dimensional array is subscripted using two specific integers, the result is a scalar value; if that same array is subscripted using an integer and a Interval or Range, the result is a 1-dimensional array. Note that subscripting an Array with a Loc<2> yields a single scalar value, just as subscripting with two integers does, while subscripting with an Interval or Range that happens to refer to just one point yields an Array with just one element. 
There isn't a zero-dimensional Array (at least not in this release of POOMA), which is what the Loc<2> would have returned. The reduction in rank has to come from compile-time information, so Loc and integers reduce dimensionality, but Interval and Range do not. 01 #include "Pooma/Arrays.h" 03 #include <iostream> 05 // The size of each side of the domain. Must be even. 06 const int N = 18; 08 // Apply a Jacobi iteration on the given domain. 09 void 10 ApplyJacobi( 11 const Array<2> & V, // to be relaxed 12 const ConstArray<2> & b, // fixed term 13 const Range<2> & IJ // region of calculation 14 ){ 15 V(IJ) = 0.25 * (V(IJ+Loc<2>(1,0)) + V(IJ+Loc<2>(-1,0)) + 16 V(IJ+Loc<2>(0,1)) + V(IJ+Loc<2>(0,-1)) - b(IJ)); 17 } 19 // Apply periodic boundary conditions by copying each slice in turn. 20 void 21 ApplyPeriodic( 22 const Array<2> & V // to be wrapped 23 ){ 24 // Get the horizontal and vertical extents of the domain. 25 Interval<1> I = V.domain()[0], 26 J = V.domain()[1]; 28 // Copy each of the four slices in turn. 29 V(0, J) = V(N, J); 30 V(N+1, J) = V(1, J); 31 V(I, 0) = V(I, N); 32 V(I, N+1) = V(I, 1); 33 } 35 int main( 36 int argc, // argument count 37 char* argv[] // argument vector 38 ){ 39 // Initialize POOMA. 40 Pooma::initialize(argc, argv); 42 // The calculation domain. 43 Interval<2> calc( Interval<1>(1, N), Interval<1>(1, N) ); 45 // The domain with guard elements on the boundary. 46 Interval<2> guarded( Interval<1>(0, N+1), Interval<1>(0, N+1) ); 48 // The array we'll be solving for. 49 Array<2> V(guarded); 50 V = 0.0; 52 // The right hand side of the equation. 53 Array<2> b(calc); 54 b = 0.0; 55 b(3*N/4, N/4) = -1.0; 56 b( N/4, 3*N/4) = 1.0; 58 // The interior domain, now with stride 2. 59 Range<2> IJ( Range<1>(1, N-1, 2), Range<1>(1, N-1, 2) ); 61 // Iterate 200 times. 62 for (int i=0; i<200; ++i) 63 { 64 ApplyJacobi(V, b, IJ); 65 ApplyJacobi(V, b, IJ+Loc<2>(1,0)); 66 ApplyJacobi(V, b, IJ+Loc<2>(0,1)); 67 ApplyJacobi(V, b, IJ+Loc<2>(1,1)); 68 ApplyPeriodic(V); 69 } 71 // Print out the result. 72 std::cout << V << std::endl; 74 // Clean up and report success. 75 Pooma::finalize(); 76 return 0; 77 } Note that, as we shall see in the next tutorial, the body of ApplyPeriodic() could more generally be written: 29 V(I.first(), J) = V(I.last()-1, J); 30 V(I.last(), J) = V(I.first()+1, J); 31 V(I, J.first()) = V(I, J.last()-1); 32 V(I, J.last()) = V(I, J.first()+1); One of the primary features of the POOMA array concept is the notion that "everything is an Array". For example, if you take a view of an Array, the result is a full-featured array. If you add two Arrays together, the result is an Array. 
The table below illustrates this, using the declarations: Array<2,Vector<2>> a Array<2> b Interval<2> I Interval<1> J Range<2> R Operations Involving Arrays │ Operation │ Example │ Output Type │ │Taking a view of the array's domain │ a() │ Array<2,Vector<2>,BrickView<2,true>> │ │Taking a view using an Interval │ a(I) │ Array<2,Vector<2>,BrickView<2,true>> │ │Taking a view using a Range │ a(R) │ Array<2,Vector<2>,BrickView<2,false>> │ │Taking a slice │ a(2,J) │ Array<1,Vector<2>,BrickView<2,true>> │ │Indexing │ a(2,3) │ Vector<2>& │ │Taking a read-only view of the array's domain│ a.read() │ ConstArray<2,Vector<2>,BrickView<2,true>> │ │Taking a read-only view using an Interval │ a.read(I) │ ConstArray<2,Vector<2>,BrickView<2,true>> │ │Taking a read-only view using a Range │ a.read(R) │ ConstArray<2,Vector<2>,BrickView<2,false>> │ │Taking a read-only slice │ a.read(2,J) │ ConstArray<1,Vector<2>,BrickView<2,true>> │ │Reading an element │ a.read(2,3) │ Vector<2> │ │Taking a component view │ a.comp(1) │ Array<2,double,CompFwd<Engine<2,Vector<2>,Brick>,1>> │ │Taking a read-only component view │a.readComp(1)│ConstArray<2,double,CompFwd<Engine<2,Vector<2>,Brick>,1>> │ │Applying a unary operator or function │ sin(a) │ ConstArray<2,Vector<2>,ExpressionTag< │ │ │ │ UnaryNode<FnSin,ConstArray<2,Vector<2>,Brick>>>> │ │ │ │ ConstArray<2,Vector<2>,ExpressionTag< │ │Applying a binary operator or function │ a + b │ BinaryNode<OpAdd,ConstArray<2,Vector<2>,Brick>, │ │ │ │ ConstArray<2,double,Brick>>>> │ Indexing is the only operation that does not generate an Array. All other operations generate an Array or ConstArray with a different engine, perhaps a different element type, and, in the case of a slice, a different dimensionality. ConstArrays result when the operation is read-only. This tutorial has shown that POOMA arrays can be subscripted using objects that represent index sequences with regular strides. Subscripting an array with a non-scalar index, or passing an array by value as a function parameter, creates a temporary array. While explicitly-declared arrays are bound to storage engines that encapsulate actual data storage, each temporary array is bound to a view engine, which aliases a storage engine's data area. Programs should use the templated class ConstArray to create immutable arrays, since the object created by a const Array declaration is actually an immutable handle on a mutable storage region. Finally, multi-dimensional and integer subscripts can be used to select subsections of arrays, and they yield results of differing dimensions. Copyright © Los Alamos National Laboratory 1998-2000
{"url":"http://www.nongnu.org/freepooma/tutorial/tut-02.html","timestamp":"2014-04-17T03:51:27Z","content_type":null,"content_length":"36718","record_id":"<urn:uuid:475d9543-3204-4749-b4e0-42116b800ba3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Separation of Variables January 5th 2011, 01:41 PM #1 MHF Contributor Mar 2010 Separation of Variables $\displaystyle\frac{\varphi'(x)}{\varphi(x)}=\frac{ xyz\psi(y)\omega(z)}{\psi'(y)\omega'(z)}$ $\varphi'(x)-\lambda\varphi(x)=0\Rightarrow m=\lambda$ $\displaystyle\frac{\psi'(y)}{\psi(y)}=\frac{xyz\om ega(z)}{\lambda\omega'(z)}$ $\psi'(y)-\mu\psi(y)=0\Rightarrow n=\mu$ $\displaystyle\int\frac{\omega'(z)}{\omega(z)}=\int \frac{xyz}{\lambda\mu}dz\Rightarrow \ln{|\omega(z)|}=\frac{xyz^2}{2\lambda\mu}+c$ $\displaystyle\omega(z)=C_2\exp{\left(\frac{xyz^2}{ 2\lambda\mu}\right)}$ $\displaystyle u(x,y,z)=\varphi(x)\psi(y)\omega(z)=C\exp{\left(x\ lambda+y\mu+\frac{xyz^2}{2\lambda\mu}\right)}$ However, this solution doesn't check out. $\displaystyle\frac{\varphi'(x)}{\varphi(x)}=\frac{ xyz\psi(y)\omega(z)}{\psi'(y)\omega'(z)}$ $\varphi'(x)-\lambda\varphi(x)=0\Rightarrow m=\lambda$ $\displaystyle\frac{\psi'(y)}{\psi(y)}=\frac{xyz\om ega(z)}{\lambda\omega'(z)}$ $\psi'(y)-\mu\psi(y)=0\Rightarrow n=\mu$ $\displaystyle\int\frac{\omega'(z)}{\omega(z)}=\int \frac{xyz}{\lambda\mu}dz\Rightarrow \ln{|\omega(z)|}=\frac{xyz^2}{2\lambda\mu}+c$ $\displaystyle\omega(z)=C_2\exp{\left(\frac{xyz^2}{ 2\lambda\mu}\right)}$ $\displaystyle u(x,y,z)=\varphi(x)\psi(y)\omega(z)=C\exp{\left(x\ lambda+y\mu+\frac{xyz^2}{2\lambda\mu}\right)}$ However, this solution doesn't check out. There's a mistake in this line $\displaystyle\frac{\varphi'(x)}{\varphi(x)}=\frac{ xyz\psi(y)\omega(z)}{\psi'(y)\omega'(z)}$ It should be $\displaystyle\frac{\varphi'(x)}{x\varphi(x)}=\frac {yz\psi(y)\omega(z)}{\psi'(y)\omega'(z)}$ Notice the location of the $x$. So for psi, the y should be with it as well then? Yep you got it! BTW - what book are you going through and are you doing it independently? I don't mind the book but it lacks in examples. Last edited by dwsmith; January 5th 2011 at 04:21 PM. January 5th 2011, 02:50 PM #2 January 5th 2011, 02:51 PM #3 MHF Contributor Mar 2010 January 5th 2011, 02:58 PM #4 January 5th 2011, 02:59 PM #5 MHF Contributor Mar 2010 January 5th 2011, 03:05 PM #6 January 5th 2011, 03:07 PM #7 MHF Contributor Mar 2010 January 5th 2011, 03:12 PM #8 MHF Contributor Mar 2010
{"url":"http://mathhelpforum.com/differential-equations/167547-separation-variables.html","timestamp":"2014-04-19T07:18:03Z","content_type":null,"content_length":"60634","record_id":"<urn:uuid:b2815caf-ac1d-4899-9f70-9d6f26b84f32>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Independent Prob Proof February 3rd 2011, 04:49 PM #1 MHF Contributor Mar 2010 Independent Prob Proof If A and B are independent events each with positive probability, prove that they cannot be mutually exclusive. Not to sure about this one but this is what I have: $\displaystyle P(A\cap B)=P(A)P(B)\Rightarrow P(A|B)=P(A) \ \ \text{and} \ \ P(B|A)=P(B)$ This is all I have. Okay so, The events are mutually exclusive if $P(A \cap B)=0$ . Here you have $P(A) \;and\; P(B)$ are both greater than 0, so $P(A \cap B) eq 0$. Proof by contradiction: $P\land\sim Q$ $P(A\cap B)=P(A)P(B)>0$ We have reached a contradiction since $P(A\cap B)=P(A)P(B)eq 0$. Therefore, A and B cannot be mutually exclusive. February 3rd 2011, 05:03 PM #2 February 3rd 2011, 05:09 PM #3 MHF Contributor Mar 2010
{"url":"http://mathhelpforum.com/advanced-statistics/170148-independent-prob-proof.html","timestamp":"2014-04-17T23:15:50Z","content_type":null,"content_length":"38323","record_id":"<urn:uuid:6d265766-62f1-42d3-924a-d242af9e5ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] Request for usage examples on scipy.stats.rv_continuous and scipy.ndimage.grey_dilate(structure) [SciPy-User] Request for usage examples on scipy.stats.rv_continuous and scipy.ndimage.grey_dilate(structure) Robert Kern robert.kern@gmail.... Mon Mar 22 09:50:23 CDT 2010 On Mon, Mar 22, 2010 at 09:44, Christoph Deil <Deil.Christoph@googlemail.com> wrote: > Hi, > I am having some trouble figuring out how to use the following two aspects of scipy: > 1) How can I create a random distribution given by some python formula, e.g. pdf(x) = x**2 in the range x = 1 to 2. > The example given in the rv_continuous docstring doesn't work for me: > In [1]: import matplotlib.pyplot as plt > In [2]: numargs = generic.numargs > AttributeError: type object 'numpy.generic' has no attribute 'numargs' > Is it also possible to use rv_continuous for multidimensional distributions, say pdf(x,y)=x*y in the range x = 1 to 2 and y = 0 to 3? > Or for pdfs that are just given numerically on a grid (say as the result of a gaussian_kde() ), not by a python function? > It would be nice if someone could add a few examples that show rv_continuous usage to the docstring or tutorial. rv_continuous is a base class. You do not use it directly. The reason the docstring doesn't work is because it is used as a template to build the docstrings for the subclasses. If you want to subclass from it to make your own distribution class, read the source of the other distribution classes for examples. Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-March/024777.html","timestamp":"2014-04-17T01:54:38Z","content_type":null,"content_length":"4816","record_id":"<urn:uuid:00de1409-6999-45cd-96ea-0f0edb27deb7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Work to Compress Spring 1. The problem statement, all variables and given/known data A spring, with a spring constant of 87 N/m, was compressed a distance of 8.6cm (0.086m). How much work was done in order to compress the spring? 2. Relevant equations 3. The attempt at a solution I tried to plug in the numbers as follows in the above equation - but I can't get the answer out. I have been doing this for ages am like a broken record - pse snap me out of this programming! WC= 2*87*.086^2 = 1.286904J Apparently, the correct answer is .32J
{"url":"http://www.physicsforums.com/showthread.php?t=435695","timestamp":"2014-04-21T09:54:48Z","content_type":null,"content_length":"28623","record_id":"<urn:uuid:1df4a6e1-f162-4869-a367-d8394c6473d6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Are there complexity classes with provably no complete problems? up vote 15 down vote favorite A problem is said to be complete for a complexity class $\mathcal{C}$ if a) it is in $\mathcal{C}$ and b) every problem in $\mathcal{C}$ is log-space reducible to it. There are natural examples of NP-complete problems (SAT), P-complete problems (circuit-value), NL-complete problems (reachability), and so on. Papadimitrou states that "semantic" complexity classes like (I think) BPP are less likely to admit complete problems. Then again, we do not actually know whether P = BPP or not, and if so, there would be BPP-complete problems. (There exist IP-complete problems, since IP=PSPACE, and determining whether a quantified boolean formula is satisfiable is PSPACE-complete.) Question: Are there any natural complexity classes that can be shown not to have complete problems? I think this question has to be modified slightly, because I would imagine $Time(n)$ has no complete problems because log-space reductions can introduce a polynomial factor. So, in the word "natural," I include the assumption that the complexity class should be invariant under polynomial transformations (I don't know how to make this precise, but hopefully it is clear). (Also, time or space bounds should be at least $\log n$, of course.) Edit: A commenter has pointed me to an interesting result of Sipser that $\mathrm{BPP}^M$ for $M$ a suitable oracle does not have complete problems. Is the same true for a (less fancy) class of the form $\bigcup_f TIME(f)$, where the union is over a class of recursive functions $f$ that are all polynomially related to each other? (Same for $\bigcup_f NTIME(f)$, and ditto for space. 2 Have you looked at Sipser's results (mentioned also in en.wikipedia.org/wiki/Complete_%28complexity%29 ) – Gjergji Zaimi Jun 9 '10 at 11:08 Hm, interesting. I'll have to look at that. – Akhil Mathew Jun 9 '10 at 11:21 1 Also in this article the author says that POLYLOGSPACE doesn't have complete problems www2.research.att.com/~dsj/columns/col7.pdf and gives a reference to R. V. Book, "Translational lemmas, polynomial time, and $(logn)^j$-space". – Gjergji Zaimi Jun 9 '10 at 12:04 @Gjergji: Fair enough - I think that's because Space(log n) is strictly contained in Space(log^2 n), and so on, by the hierarchy theorem, and a complete problem in POLYLOGSPACE would imply that this hierarchy would collapse to some finite level. (The same should be true for time because of the time hierarchy theorem.) But log(n) and log^2(n) are not polynomially related in the input n (which is what I had meant). – Akhil Mathew Jun 9 '10 at 12:48 Akhil, your profile says that you are a senior in high school. In this case, I am extremely expressed by your high level of mathematical sophistication! – Joel David Hamkins Jun 9 '10 at 12:54 add comment 5 Answers active oldest votes The zoo of complexity classes extends naturally into the realm of computability theory and beyond, to descriptive set theory, and in these higher realms there are numerous natural classes which have no members that are complete, even with respect to far more generous notions of reduction than the one you mention. • For example, consider the class Dec of all decidable sets of natural numbers. There can be no decidable set $U$ such that every other decidable set $A$ reduces to it in time uniformly bounded by any computable function $f$ (even iterated exponentials, or the Ackerman function, etc.). In particular, it has no member that is complete in your sense. 
If there were such a member, then we would be able to consruct a computable enumeration $A_0$, $A_1$, $\ldots$ containing all decidable sets, which is impossible since then we could diagonalize against it: the set of $n$ such that $n\notin A_n$ would be computable, but it can't be on the list. • The class of all arithmetically definable sets is obtained by closing the decidable sets (or much less) under projection from $\mathbb{N}^{n+1}\to \mathbb{N}^n$ and under Boolean up vote 8 combinations. The members of this class are exactly the sets that are defined by a first order formula over the structure $\langle\mathbb{N},+,\cdot,\lt\rangle$, and the hierarchy is down vote stratified by the complexity of these definitions. This class has no member that is complete with respect to any computable reduction, and even with respect to any arithmetically accepted definable reduction of bounded complexity, for in this case the hierarchy would collapse to some level $\Sigma_n$, which is known not to occur. • A similar argument works for the hyperarithmetic hiearchy, which can have no universal member hyperarithmetic reductions of any fixed complexity. • And similarly for the projective hiearchy on sets of reals. The general phenomenon is that there are numerous hierarchies growing from computability theory into descriptive set theory which are all known to exhibit strictly proper growth in such a way that prevents them from having universal members. Thanks! How do semidecidable sets fit into this hierarchy? If I'm not mistaken, there is a "universal" semidecidable set from the universal Turing machine, but presumably it cannot belong to the arithmetic hierarchy then, right? – Akhil Mathew Jun 9 '10 at 12:51 There is a universal c.e. set, and all other c.e. sets reduce to it in linear time. Every c.e. set is arithmetic, at the level of $\Sigma_1$, and in fact this characterizes the c.e. 1 sets. But this set is not universal for all arithmetic sets, only for the bottom level $\Sigma_1$. Similarly, there is a universal $\Sigma_n$ set for any fixed $n$, and this is how one can see the arithmetic hierarchy does not collapse. – Joel David Hamkins Jun 9 '10 at 12:59 add comment Here's a really simple class that is very natural and has no complete problems: ALL, the class of all languages. The reason is that there are uncountably many problems in ALL, but only countably many Turing machines to go around (for reductions), so every problem in ALL cannot be reduced to a single problem in ALL. up vote 9 down vote Similarly, any class with advice, like P/poly, L/poly, BQP/qpoly, or even P/1 does not have complete problems (using the same argument). 6 Also, the class NONE = ∅. It contains no language, so in particular it has no complete language. :-) – Antonio E. Porreca Jun 9 '10 at 18:52 add comment Another class without complete problems w.r.t. logspace or polytime reductions (not an union of classes TIME(f) for some family of polynomially related functions f, but still relatively natural in my opinion) is ELEMENTARY = TIME(2^n) ∪ TIME(2^2^n) ∪ TIME(2^2^2^n) ∪ TIME(2^2^2^2^n) ∪ ⋯ up vote 8 down vote If L were ELEMENTARY-complete, then it would belong to some level of this hierarchy, and all problems above could be reduced to it. But this hierarchy is known to be proper (time hierarchy theorem), contradiction. 1 Moreover, the same argument applies to the polynomial hierarchy if it does not collapse (though this is unknown and does not follow from the hierarchy theorem). 
– Akhil Mathew Jun 9 '10 at 19:52 2 The same argument works for any hierarchy that is known to be infinite. I guess this general method captures most of the examples given in this thread till now, like POLYLOGSPACE, ELEMENTARY, etc. – Rune Jun 9 '10 at 20:03 Oh, I somehow missed the comments mentioning POLYLOGSPACE; indeed, it’s the same argument. – Antonio E. Porreca Jun 9 '10 at 20:08 add comment This is more of a comment than an answer, but the comment go too long. From this thread, there seem to be two different themes to coming up with classes without complete problems. Completeness is defined using two properties. L is X-complete if (1) L is in X (2) L is X-hard (under some suitable notion of efficient reductions) The first theme involves classes which have hard problems, but if the hard problem were a member of the class itself, it would cause problems. The examples of POLYLOGSPACE and ELEMENTARY up vote 3 fall in this category. Both have hard problem, of course, but if the hard problem were a member of the class, some hierarchy theorem would be violated. (Space hierarchy and time hierarchy down vote theorems, respectively.) Similarly one could come up with more examples of this kind. The second theme involves classes which have no hard problems, such as ALL or P/poly. These classes don't have a complete problem for a fundamentally different reason than the previous It would be interesting to see if there are other classes which fail to have complete problems for completely different reasons. add comment I'm not sure that it's entirely correct, but here goes. Let $f(n)$ be a proper complexity function (a.k.a. space and time constructible, etc.) and consider the class $\\mathcal{C} = \bigcup_ {P,Q} DTIME Q(f(P(n)))$, where $P,Q$ rangs over polynomials with natural number coefficients. Suppose $f(n) \geq n$. Then the language $L=(M,x,P,Q)$: $M$ is a deterministic Turing machine that accepts string $x$ in at most $Q(f(P(n))$ steps should be $\mathcal{C}$-complete. Indeed, verification is just a mechanical process of simulating the Turing machine (which can be done in polynomial time on the length of $M$ and $x$), and every language decided by a Turing machine $M$ in $DTIME(Q(f(P(n)))$ should reduce to $L$ based on the machine $M$. The same should hold up vote for nondeterministic complexity classes. 0 down vote I've seen something like this for $NP$ (this is how the Cook-Levin theorem is proved, if I understand correctly), and I think it should generalize, and that natural complexity classes based solely on a time constraint (which is sufficiently large) should admit complete problems. add comment Not the answer you're looking for? Browse other questions tagged computational-complexity or ask your own question.
{"url":"http://mathoverflow.net/questions/27572/are-there-complexity-classes-with-provably-no-complete-problems/28086","timestamp":"2014-04-16T22:02:23Z","content_type":null,"content_length":"86241","record_id":"<urn:uuid:ef350d60-645b-4ce9-a9ad-2093387e4154>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Redondo Beach Math Tutor Find a Redondo Beach Math Tutor ...I graduated from CSUN with a BA in mathematics in 2009. My passion and enthusiasm for math is reflected in my teaching, and I try to make each session interesting and fun! I have experience working as a private math tutor and at an established math tutoring company. 9 Subjects: including algebra 1, algebra 2, calculus, geometry ...Physics was my undergraduate minor and the basis of my PhD. It is the most fundamental science; all other natural and life sciences are based on Physics. I've been working as a physicist for more years than I want to count, and still use physics every day, both in my job and in understanding everyday life. 13 Subjects: including algebra 1, algebra 2, calculus, geometry ...I possess a current Clear Multiple Subject Teaching Credential and a Master's degree in Special Education. I have previously been certified in English, Social Studies, and Special Education, and I have experience teaching both general and special education students. I also have been trained by ... 29 Subjects: including geometry, SAT math, prealgebra, algebra 1 ...It may be a challenging instrument, but that's what makes it so rewarding to learn. With my expertise, I can help you learn the techniques and musical possibilities of that wonderful four-stringed instrument, the violin! So, you want to be the next Diane Warren, Bruno Mars or Harold Arlen? 42 Subjects: including algebra 1, grammar, piano, elementary (k-6th) ...I started playing clarinet when I was ten years old. That was over forty years ago. I played in elementary school, junior high, high school, college, and graduate school. 47 Subjects: including calculus, ESL/ESOL, French, English
{"url":"http://www.purplemath.com/Redondo_Beach_Math_tutors.php","timestamp":"2014-04-18T15:42:44Z","content_type":null,"content_length":"23927","record_id":"<urn:uuid:2fac8842-9451-402e-a3f6-8e5bac6b3517>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
North Plainfield, NJ ACT Tutor Find a North Plainfield, NJ ACT Tutor ...Use Jamie if you want an excellent tutor. I love tutoring! Working with a student and creating a customized learning curriculum makes every new student a challenge and opportunity. 16 Subjects: including ACT Math, geometry, algebra 1, GED ...I've done official tutoring at one of the colleges I attended as well as outside tutoring work. As a high school Math teacher, I am aware of the phobia that many students have when they encounter Math. So, one of the main things I focus on is developing students' confidence by relating abstract math concepts to what they already know and making it easy to understand. 9 Subjects: including ACT Math, geometry, algebra 1, algebra 2 Hello, my name is Doug. Magic is my hobby, but I've been a teacher and student. I taught English to junior high students in Japan for five years and did private tutoring there as well. 20 Subjects: including ACT Math, reading, English, writing ...I hold a Bachelor's Degree in Elementary Education, have passed the Praxis Exam for Elementary Ed, and have my NJ teaching certificate of eligibility for K-5 Elementary Education. I am currently teaching part-time and especially enjoy working with students in the K-5 age group. I started sewing at the age of 12, inspired by my older sister's Home Economics project. 58 Subjects: including ACT Math, English, reading, physics ...The SAT Writing section is actually the easiest of the three sections in terms of achieving score increases. It consists of a holistically graded essay + a multiple choice grammar test. Unlike the flat formulaic essays that result from forcing students to merely follow a formula, the essay I teach a student to write develops a mature voice and distinct style. 40 Subjects: including ACT Math, chemistry, writing, English Related North Plainfield, NJ Tutors North Plainfield, NJ Accounting Tutors North Plainfield, NJ ACT Tutors North Plainfield, NJ Algebra Tutors North Plainfield, NJ Algebra 2 Tutors North Plainfield, NJ Calculus Tutors North Plainfield, NJ Geometry Tutors North Plainfield, NJ Math Tutors North Plainfield, NJ Prealgebra Tutors North Plainfield, NJ Precalculus Tutors North Plainfield, NJ SAT Tutors North Plainfield, NJ SAT Math Tutors North Plainfield, NJ Science Tutors North Plainfield, NJ Statistics Tutors North Plainfield, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/North_Plainfield_NJ_ACT_tutors.php","timestamp":"2014-04-20T01:56:48Z","content_type":null,"content_length":"24116","record_id":"<urn:uuid:67d9cef8-9e52-4514-8b45-f049df7aedab>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
General decay for a system of nonlinear viscoelastic wave equations with weak damping In this paper, we are concerned with a system of nonlinear viscoelastic wave equations with initial and Dirichlet boundary conditions in ( ). Under suitable assumptions, we establish a general decay result by multiplier techniques, which extends some existing results for a single equation to the case of a coupled system. MSC: 35L05, 35L55, 35L70. viscoelastic system; general decay; weak damping 1 Introduction In this paper, we are concerned with a coupled system of nonlinear viscoelastic wave equations with weak damping where ( ) is a bounded domain with smooth boundary ∂Ω, u and v represent the transverse displacements of waves. The functions and denote the kernel of a memory, and are the nonlinearities. In recent years, many mathematicians have paid their attention to the energy decay and dynamic systems of the nonlinear wave equations, hyperbolic systems and viscoelastic equations. Firstly, we recall some results concerning single viscoelastic wave equation. Kafini and Tatar [1] considered the following Cauchy problem: They established the polynomial decay of the first-order energy of solutions for compactly supported initial data and for a not necessarily decreasing relaxation function. Later Tatar [2] studied the problem (1.2) with the Dirichlet boundary condition and showed that the decay of solutions was an arbitrary decay not necessarily at exponential or polynomial rate. Cavalcanti et al.[3] studied the following equation with Dirichlet boundary condition: The authors established a global existence result for and an exponential decay of energy for . They studied the interaction within the and the memory term . Later on, several other results were published based on [4-6]. For more results on a single viscoelastic equation, we can refer to [7-14]. For a coupled system, Agre and Rammaha [15] investigated the following system: where ( ) is a bounded domain with smooth boundary. They considered the following assumptions on ( ): (A[1]) Let (A[2]) There exist two positive constants , such that for all , satisfies Under the assumptions (A[1])-(A[2]), they established the global existence of weak solutions and the global existence of small weak solutions with initial and Dirichlet boundary conditions. Moreover, they also obtained the blow up of weak solutions. Mustafa [16] studied the following system: in with initial and Dirichlet boundary conditions, proved the existence and uniqueness to the system by using the classical Faedo-Galerkin method and established a stability result by multiplier techniques. But the author considered the following different assumptions on ( ) from (A[1])-(A[2]): ( ) ( ) are functions and there exists a function F such that for all , where the constant and , for . Han and Wang [17] considered the following coupled nonlinear viscoelastic wave equations with weak damping: where is a bounded domain with smooth boundary ∂Ω. Under the assumptions (A[1])-(A[2]) on ( ), the initial data and the parameters in the equations, they established the local existence, global existence uniqueness and finite time blow up properties. When the weak damping terms , were replaced by the strong damping terms , , Liang and Gao [18] showed that under certain assumption on initial data in the stable set, the decay rate of the solution energy is exponential when they take and if , if . 
Moreover, they obtained that the solutions with positive initial energy blow up in a finite time for certain initial data in the unstable set. For more results on coupled viscoelastic equations, we can refer to [19-21]. If we take in (1.4), the system will be transformed into (1.1). To the best of our knowledge, there is no result on general energy decay for the viscoelastic problem (1.1). Motivated by [16,17], in this paper, we shall establish the general energy decay for the problem (1.1) by multiplier techniques, which extends some existing results for a single equation to the case of a coupled system. The rest of our paper is organized as follows. In Section 2, we give some preparations for our consideration and our main result. The statement and the proof of our main result will be given in Section For the reader’s convenience, we denote the norm and the scalar product in by and , respectively. denotes a general constant, which may be different in different estimates. 2 Preliminaries and main result To state our main result, in addition to (A[1])-(A[2]), we need the following assumption. (A[3]) , , are differentiable functions such that and there exist nonincreasing functions satisfying Now, we define the energy functional and the functional The existence of a global solution to the system (1.1) is established in [17] as follows. Let (A[1])-(A[3]) hold. Assume that , and that , , where is a computable constant and . Then the problem (1.1) has a unique global solution satisfying We are now ready to state our main result. Theorem 2.1Let (A[1])-(A[3]) hold. Assume that , and that , , where is a computable constant and . Then there exist constants such that, fortlarge, the solution of (1.1) satisfies 3 Proof of Theorem 2.1 In this section, we carry out the proof of Theorem 2.1. Firstly, we will estimate several lemmas. Lemma 3.1Let , be the solution of (1.1). Then the following energy estimate holds for any : Proof Multiplying the first equation of (1.1) by and the second equation by , respectively, integrating the results over Ω, performing integration by parts and noting that , we can easily get (3.1). The proof is complete.□ Lemma 3.2Under the assumption (A[3]), the following hold: Proof Using Hölder’s inequality, we get On the other hand, we repeat the above proof with , instead of g, we can get (3.3). The proof is now complete.□ Lemma 3.3Let (A[1])-(A[3]) hold and , be the solution of (1.1). Then the functional defined by Proof By (1.1), a direct differentiation gives From the assumptions (A[1])-(A[2]), we derive By Young’s inequality and (3.2), we deduce for any Similarly, we have Using Young’s inequality and Poincaré’s inequality, we obtain for any where λ is the first eigenvalue of −Δ with the Dirichlet boundary condition. Similarly, which together with (3.5)-(3.9) gives which together with (3.10) gives (3.4). The proof is complete.□ Lemma 3.4Let (A[1])-(A[3]) hold and , be the solution of (1.1). Then the functional defined by Proof A direct differentiation for yields Using the first equation of (1.1) and integrating by parts, we obtain From Young’s inequality, Poincaré’s inequality and Lemma 3.2, we derive Now, we estimate the first term on the right-hand side of (3.17). Using the assumptions (A[1])-(A[2]) and Young’s inequality, we arrive at where we used the embedding for if or if and the fact proved in Lemma 5.1 in [17]. Combining (3.13)-(3.18), we get The same estimate to , we can derive which together with (3.19) gives (3.11). 
The proof is now complete.□ Proof of Theorem 2.1 For , we define the functional by and let Using Lemma 3.1 and Lemmas 3.3-3.4, a direct differentiation gives Now, we choose and , large enough so that Inserting (3.21)-(3.23) into (3.20), we have Therefore, for two positive constants ω and C, we obtain On the other hand, we choose even larger so that is equivalent to , i.e., Multiplying (3.25) by and using (A[3]), we get By virtue of (A[3]) and , we have Using (3.26), we can easily get which together with (3.28) yields, for some positive constant η, Integrating (3.30) over , we arrive at which together with (3.29) and the boundedness of E and ξ yields (2.3). The proof is now complete.□ Authors’ contributions The paper is a joint work of all authors who contributed equally to the final version of the paper. All authors read and approved the final manuscript. Baowei Feng was supported by the Doctoral Innovational Fund of Donghua University with contract number BC201138, and Yuming Qin was supported by NNSF of China with contract numbers 11031003 and 11271066 and the grant of Shanghai Education Commission (No. 13ZZ048). Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/146/","timestamp":"2014-04-20T08:20:26Z","content_type":null,"content_length":"176370","record_id":"<urn:uuid:2b8a6dc1-e61c-448a-826e-bb3dae38ab38>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximum Grilled Steelers Forum We're down to the last five weeks of the regular season, and the Steelers find themselves in a hell of a pickle. A mediocre overall record and a poor divisional record demands a strong finish. CBS Sportsline has the current playoff standings, and Football Outsiders has a weekly update of its calculated playoff odds. I'll use the actual standings, notated with Football Outsiders odds and other notes. AFC 1 Indianapolis (y) South 11-0-0 . Indy becomes the first team to clinch a playoff berth. FO places their odds of being the 1 seed at 98.5%. 2 Cincinnati North 8-3-0 . An identical record to San Diego, FO sees the Chargers as the eventual 2 seed and the Bengals as the 3 seed, with a 33.0% chance, and assigns them a 97.1% chance of making the playoffs. 3 San Diego West 8-3-0. Chargers are listed as having a 34.9% chance of taking the 2 seed, and a 92.2% chance of making the playoffs. 4 New England East 7-4-0 . FO also agrees that the Patriots look ready to be the weakest division winner by record, and give them a 39.7% chance of taking the 4 seed, and a 93.5% chance of making the playoffs. 5 Denver West 7-4-0. Denver hangs on to the 5 seed at FO also, with a 29.4% chance of winning the 5, and a 73.3% chance of making the playoffs. 6 Jacksonville South 6-5-0 . Jax isn't getting the love from FO, who have them behind Baltimore and Pittsburgh at the 8 spot, with a 19.4% chance of making the playoffs. Not dead just yet, but little wiggle room. Still alive 7 Baltimore North 6-5-0 . FO has the 6 seed going to Baltimore, at 30.3%, and a 60.7% chance overall of making the cut. 8 Pittsburgh North 6-5-0 . On the outside looking in is Pittsburgh, the 7 team according to FO, with a 40.3% chance of making the postseason. 9 Miami East 5-6-0. FO rank is 9, 10.4% chance of making the dance. 10 N.Y. Jets East 5-6-0 . FO rank is 11, 4.4% chance of making it. 11 Tennessee South 5-6-0 . Only 12th according to FO, the comeback kids only have a 1.4% shot. 12 Houston South 5-6-0 . The 10 team in FO's list, and a 7.0% shot. 13 Buffalo East 4-7-0 . 0.4% chance of making it; not fucking likely. 14 Kansas City West 3-8-0 . 0.0% chance; close to the official cut. 15 Oakland West 3-8-0 . 0.0% chance; almost out. Eliminated 16 Cleveland North 1-10-0. Too bad, pukes. NFC 1 New Orleans South 11-0-0 . FO has them at a 74.3% chance of being the 1 seed, and 100.0% chance of making the playoffs. 2 Minnesota North 10-1-0 . 65.5% chance of being the 2 seed and 100.0% chance of making it. 3 Dallas East 8-3-0 . Cowboys in the playoffs? FO agrees, 36.2% chance of being the 3 seed and 85.8% chance of making it at all. 4 Arizona West 7-4-0. FO gives the Cards a 51.8% chance of being the 4, and with their weak division a 91.8% chance of making the post. 5 Philadelphia East 7-4-0. FO flip-flops the Philly and GB order, with Philly the 20.2% chance of making the 6 seed and a 74.1% chance of getting in. 6 Green Bay North 7-4-0. GB has a 53.5% chance of being the 5 seed, and an 86.1% chance of getting in. Still alive 7 N.Y. Giants East 6-5-0. Giants are also the 7, outside looking in, for FO, with a 16.5% chance of making the 6 seed (not far off Philly's odds), but only a 31.2% chance of making the show. Compare to the AFC 7 seed Pitt, with a 40.3% chance of making it, and you see the NFC has fewer WC contenders with a serious shot. 8 Atlanta South 6-5-0 . Also 8 in FO, 15.3% chance of the playoffs. 9 San Francisco West 5-6-0 Also 9 in FO, and a 15.3% shot. 
Note: I made SF the 9 off an identical 15.3% chance of making the post, since Atlanta also had a 15.3% chance of a wildcard spot and SF on a 4.6% chance. 10 Carolina South 4-7-0 . 0.3% chance; forget it. 11 Chicago North 4-7-0. 0.1% chance; nope. 12 Seattle West 4-7-0 . 0.1% chance; nada. 13 Washington East 3-8-0 0.0% -- that's not good. Eliminated 14 Detroit North 2-9-0 15 Tampa Bay South 1-10-0 16 St. Louis West 1-10-0 Too effin bad. So the AFC WC hunt looks like it has 4 contenders (Balty, Denver, Pitt, Jax) for 2 spots. Miami would need a miracle. In the NFC, it's really GB, Philly, NYG, and maybe Atlanta for the last 2 spots.
{"url":"http://maximumgrilledsteelers.com/index.php?topic=18200.msg10616590","timestamp":"2014-04-19T03:03:07Z","content_type":null,"content_length":"67981","record_id":"<urn:uuid:696a67f8-4351-460d-9096-38f83dc157f5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00162-ip-10-147-4-33.ec2.internal.warc.gz"}
Cipher modes operate on the next level up from the underlying block cipher. They transform the blocks going in and out of the cipher in ways to give them desirable properties in certain circumstances. The cipher modes implemented by GNU Crypto, which is contained in the gnu.crypto.mode package and are referenced herein by their three-letter abbreviations described below, are: • Cipher block chaining mode. The "CBC" mode makes every block of the ciphertext depend upon all previous blocks by adding feedback to the transformation. This is done by XORing the plaintext with the previous ciphertext (or, with the first block, an initialization vector) before it is transformed. That is, encryption looks like: C[i] = ENCRYPT(k, P_i ^ C[i-1]); and decryption is P[i] = C [i-1] ^ DECRYPT(C[i]). • Counter mode. Counter mode, referred to as "CTR" mode, is one of a class of sequenced cipher modes that turn the underlying cipher into a keystream. Counter mode relys on a simple counter register that is updated for every block processed. For plaintexts P[1] ... P[n], ciphertexts C[1] ... C[n], counter elements T[1] ... T[n], and an encryption function ENCRYPT(k, ...), encryption is defined as C[i] = P[i] ^ ENCRYPT(k, T[i]) and decryption as P[i] = C[i] ^ ENCRYPT(k, T[i]). • Electronic codebook mode. Or "ECB" mode, is the most obvious cipher mode: the cipher block is the direct output of the forward function, and the plain block is the direct output of the inverse function. That is, encryption is C_i = E_k(P_i) and decryption is P_i = E_k^\bgroup-1\egroup(C_i). • Integer counter mode. "ICM" mode has features in common with counter mode described above. The counter, T_i, is computed by T_i = (T_0 + i) \bmod 256^b, where b is the cipher's block size. T_0 is initialized to the integer representation of some initialization vector. The keystream bytes are then E_k(T_i). Encryption and decryption are then C_i = P_i \oplus E_k(T_i) and P_i = C_i \oplus E_k(T_i), respectively. • Output feeback mode. "OFB" mode creates a keystream by repeatedly iterating the underlying block cipher over an initialization vector. That is, the ith keystream block is X_i = E(X_\bgroup i-1\ egroup) for 1 < i \leq n, and X_1 = IV. Like the other stream modes, the input block i is transformed by the exclusive-or of the block with X_i.
{"url":"http://www.gnu.org/software/gnu-crypto/manual/Modes.html","timestamp":"2014-04-19T03:12:09Z","content_type":null,"content_length":"4406","record_id":"<urn:uuid:ea3070f4-efb8-4013-9b6f-ad40ed779fd6>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Fixed effects models Arena posted on Thursday, August 16, 2012 - 9:33 pm I'm currently analyzing data to examine how classroom characteristics relate to students' standardized test scores. Because students are nested within classrooms, I used CLUSTER to adjust the standard errors within each classroom. However, I also want to employ a school fixed effects model to account for the unobserved characteristics of schools that may bias observed relationships. How can I do this in Mplus? Thank you in advance. Linda K. Muthen posted on Friday, August 17, 2012 - 1:03 pm I'm not sure what you mean. Arena posted on Friday, August 17, 2012 - 1:43 pm I am wondering if there's a procedure in Mplus that is equivalent to the xtreg-fe command in Stata. Linda K. Muthen posted on Saturday, August 18, 2012 - 10:27 am It sounds like this is a multilevel model where students are nested in classrooms and dummy variables are used as covariates to represent schools. Yes, this can be done in Mplus. Byungbae Kim posted on Saturday, March 30, 2013 - 11:40 pm I have the same question above. Would you please be more specific on the ways in which school dummies could be specified in the model.I have students (n=20,000) nested within schools (n=89). Since I was not interested in estimating group level variations, I had used the "cluster" option to take into account a clustering issue. Now reviewers want me to use fixed effects dummies for schoolto make sure that any variance resulting from the group id could be adequately partialled out, instead of the cluster option. Thank you. Linda K. Muthen posted on Sunday, March 31, 2013 - 9:03 am School dummy variables are used as covariates. I would not recommend this in your case with 88 dummy variables. I would instead use TYPE=TWOLEVEL. The results you want would be on the within level. You could have a model of only random intercepts on the between level. See Example 9.1. Markus Riek posted on Thursday, January 16, 2014 - 12:48 pm Fixed effects in a complex (cross-national) sample. I’m doing a SEM analysis based on a sample of multiple countries. I want to analyze the effects for the whole sample (not between the countries). In order to account for fixed effects I specified the following parameters in the analysis configuration. VARIABLE: WEIGHT = w; CLUSTER = country; ANALYSIS: TYPE = COMPLEX; Is that the correct specification to include country fixed effects? Does someone know about an example of a single-level analysis including fixed effects? Bengt O. Muthen posted on Thursday, January 16, 2014 - 6:17 pm "Country fixed effects" sounds to me like a multiple-group analysis where group is country (a single-level analysis). The setup you give would not estimate any such effects - it would just correct the SEs for country clustering. A related paper discussing fixed versus random effects is on our website: Muthén and Asparouhov (2013). New methods for the study of measurement invariance with many groups. Mplus scripts are available here. Back to top
{"url":"http://www.statmodel.com/discussion/messages/12/10284.html?1364745791","timestamp":"2014-04-18T08:05:08Z","content_type":null,"content_length":"26060","record_id":"<urn:uuid:44be1534-55ae-48f1-9adf-f181872ca560>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
□ org.apache.commons.math3.stat.inference.BinomialTest □ Method Summary Modifier and Type Method and Description double binomialTest(int numberOfTrials, int numberOfSuccesses, double probability, AlternativeHypothesis alternativeHypothesis) binomialTest(int numberOfTrials, int numberOfSuccesses, double probability, AlternativeHypothesis alternativeHypothesis, double alpha) Returns whether the null hypothesis can be rejected with the given confidence level. ☆ Methods inherited from class java.lang.Object clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait □ Method Detail ☆ binomialTest public boolean binomialTest(int numberOfTrials, int numberOfSuccesses, double probability, AlternativeHypothesis alternativeHypothesis, double alpha) Returns whether the null hypothesis can be rejected with the given confidence level. ○ Number of trials must be ≥ 0. ○ Number of successes must be ≥ 0. ○ Number of successes must be ≤ number of trials. ○ Probability must be ≥ 0 and ≤ 1. numberOfTrials - number of trials performed numberOfSuccesses - number of successes observed probability - assumed probability of a single trial under the null hypothesis alternativeHypothesis - type of hypothesis being evaluated (one- or two-sided) alpha - significance level of the test true if the null hypothesis can be rejected with confidence 1 - alpha NotPositiveException - if numberOfTrials or numberOfSuccesses is negative OutOfRangeException - if probability is not between 0 and 1 MathIllegalArgumentException - if numberOfTrials < numberOfSuccesses or if alternateHypothesis is null. See Also: ☆ binomialTest public double binomialTest(int numberOfTrials, int numberOfSuccesses, double probability, AlternativeHypothesis alternativeHypothesis) Returns the observed significance level , or , associated with a Binomial test The number returned is the smallest significance level at which one can reject the null hypothesis. The form of the hypothesis depends on alternativeHypothesis. The p-Value represents the likelihood of getting a result at least as extreme as the sample, given the provided probability of success on a single trial. For single-sided tests, this value can be directly derived from the Binomial distribution. For the two-sided test, the implementation works as follows: we start by looking at the most extreme cases (0 success and n success where n is the number of trials from the sample) and determine their likelihood. The lower value is added to the p-Value (if both values are equal, both are added). Then we continue with the next extreme value, until we added the value for the actual observed sample. ○ Number of trials must be ≥ 0. ○ Number of successes must be ≥ 0. ○ Number of successes must be ≤ number of trials. ○ Probability must be ≥ 0 and ≤ 1. numberOfTrials - number of trials performed numberOfSuccesses - number of successes observed probability - assumed probability of a single trial under the null hypothesis alternativeHypothesis - type of hypothesis being evaluated (one- or two-sided) NotPositiveException - if numberOfTrials or numberOfSuccesses is negative OutOfRangeException - if probability is not between 0 and 1 MathIllegalArgumentException - if numberOfTrials < numberOfSuccesses or if alternateHypothesis is null. See Also: Copyright © 2003–2013 The Apache Software Foundation. All rights reserved.
{"url":"http://commons.apache.org/proper/commons-math/apidocs/org/apache/commons/math3/stat/inference/BinomialTest.html","timestamp":"2014-04-18T16:04:08Z","content_type":null,"content_length":"19988","record_id":"<urn:uuid:fae80d7d-e60b-42f0-bbc0-79491a618784>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Bijection between tangent spaces December 12th 2010, 08:44 AM #1 Oct 2010 Bijection between tangent spaces We have defined the tangent space at a point p of a manifold as the set of all derivatives. Now i want to show, that there exist a bijection between all derivatives and the "Tangentvectors" defined by (equivalent classes of) paths. More precisely: We call two paths equivalent, iff they define the same derivation at a point p. Now i want to show that there is a bijection between the set of all equivalent classes and the Tangentspace (defined as the set of derivatives). My Approach was the following. Every path c:I->M into the manifold M defines a derivation by $D(f):=\frac{d}{dt}_{|t=0}(f\circ c)(t)$ so i try to show that $\phi([c])=D$ with $D(f)=\frac{d}{dt}_{|t=0}(f\circ c)(t)$ is a bijection. Of course $\phi$ is well defined and injective, but i couldn't show that $\phi$ is surjective! If D is some arbitrary derivative at the point p. How can i find a path c, s.t. $\phi(c)=D$? If you fix a point $p$ and express everything in a local chart, then the required path $c(s), \ -\epsilon<s<\epsilon$ is just the unique solution of the initial value problem $c'(0)=D, \ c(0)=p$. Hello Rebesques, thank you for your help! But i don't see what you mean. Ok lets choose a chart (U, $\phi$) around the fixed point p. I think by your notation you mean $c'(0)=\frac{d}{dt}_{|t=0}(f\circ c)(t)$, is it right? But How can i solve this differential equation D=c'(t)? But i know by a thm., that every derivation D is of the form $D=\sum_{i=1}^d D(x_i) \frac{d}{dx_i}_{|p}$ whereas $x_i$ are the coordinate functions of our selected chart $\phi$ Perhaps this equation can help? But i don't know how. Not quite. We are looking for a curve $c<img src=$-\epsilon,\epsilon)\rightarrow M" alt="c$c(0)=p, \ c'(0)=(\frac{\partial}{\partial t}\vert_{0})c=D\in T_pM$. There are many curves with this property; Let's construct one. Choose a chart $(U,\phi), \ \phi=(x^1,\ldots,x^n)$, and w.l.o.g. assume $\phi(p)=0, \ D(\phi)\vert_{p}=I$, where $D(\phi)\vert_{p}=(\frac{\partial}{\partial x^i}\vert_p(x^j))$ and I is the identity matrix. Then, by letting $D=a^i\frac{\partial}{\partial x^i}$ and identifying $D=(a^1,\ldots,a^n)\in R^n$, the curve $c(t)=\phi^{-1}(tD), \ -\epsilon<t<\epsilon$ has the required properties. Ps. And $c$ is unique in the sence it belongs to the equivalence class of curves passing through $p$ and having tangent $D\in T_pM$. Last edited by Rebesques; December 17th 2010 at 02:50 AM. Reason: notation December 12th 2010, 01:05 PM #2 December 13th 2010, 07:39 AM #3 Oct 2010 December 16th 2010, 10:48 AM #4
{"url":"http://mathhelpforum.com/differential-geometry/166018-bijection-between-tangent-spaces.html","timestamp":"2014-04-19T01:03:28Z","content_type":null,"content_length":"43883","record_id":"<urn:uuid:81c4e2f4-cee4-486a-8ff8-d363e2214401>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Golden Spiral This Demonstration draws an approximation to a golden spiral using a golden rectangle. By successively drawing an arc between vertices of each square in a golden rectangle, you can approximate a golden spiral. A golden spiral is a logarithmic spiral that goes through successive points dividing a golden rectangle into squares.
{"url":"http://demonstrations.wolfram.com/GoldenSpiral/","timestamp":"2014-04-19T12:04:19Z","content_type":null,"content_length":"44406","record_id":"<urn:uuid:8f897d2f-af59-4156-8d76-76740bd0ea1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Clifford Hwang Associate Faculty, Mathematics Clifford Hwang has been teaching mathematics at Mission College since 1999. He also teaches part-time at Evergreen Valley College and Santa Clara University. Ph.D., Electrical Engineering, University of California, Los Angeles M.S., Electrical Engineering, University of California, Los Angeles B.S., Electrical Engineering, University of California, San Diego Courses Taught Math 900 (Arithmetic Functions) Math 902 (Pre-Algebra) Math 903 (Elementary Algebra) Math B (Geometry) Math C (Intermediate Algebra) Math D (Trigonometry) Math 1 (Pre-Calculus Algebra) Math 2 (Pre-Calculus Algebra and Trigonometry) Math 12 (Calculus – Business and Social Science) Math 3A (Analytic Geometry and Calculus I) Math 3B (Analytic Geometry and Calculus II) Fast Facts When not teaching, Clifford enjoys running, watching baseball, and playing on-line games. Favorite Quote "Failure is not fatal but failure to change might be." ~ John Wooden
{"url":"http://www.missioncollege.org/-profiles/hwang_clifford.html","timestamp":"2014-04-20T20:56:35Z","content_type":null,"content_length":"11250","record_id":"<urn:uuid:39e98d39-d445-4039-a66a-46f0720cd7be>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/samnatha/asked","timestamp":"2014-04-19T20:01:10Z","content_type":null,"content_length":"92553","record_id":"<urn:uuid:94433382-16d3-450d-afac-9d8cd1482614>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
CS 223 CS 223 -- Random Processes and Algorithms Preliminary Syllabus Instructor: Michael Mitzenmacher E-mail: michaelm@eecs.harvard.edu Office: Maxwell Dworkin 331 Phone: 496-7172 Office Hours: By appointment. Syllabus: www.eecs.harvard.edu/~michaelm/CS223/syllabus.html Handouts: www.eecs.harvard.edu/~michaelm/CS223/indexb.html The goal of this course is to provide you with a solid foundation in the basic techniques used to analyze randomized algorithms and probabilistic processes. The course is designed for advanced undergraduates with an appropriate theory background (such as CS 124) and first year graduate students. Graduate students in disciplines outside theory are welcome and encouraged to take the course. The course will be a mix of textbook reading, lectures, and reading and discussion of research papers. The goal will be to move as quickly as possible from talking about topics at the textbook level to seeing how they are applied in research problems. We aim to emphasize applications outside of theory, looking at how the techinques come up in networking, Internet algorithms, CS-economics settings, and so on. Course content The course emphasizes theoretical foundations. Topics to be covered are expected to include the following: • Expectation, Variance. • Tail Bounds: Markov, Chevyshev, Chernoff. • Balls and Bins Problems; the Poisson Distribution. • Markov Chains: Uses and Examples. • Random Graphs. Average-Case Analysis of Algorithms. • The Probabilistic Method: Existence of Combinatorial Objects. • Continuous Random Variables. Queues, Exponential Distributions. • Entropy. Shannon's Theorem. • Markov Chain Monte Carlo Simulation. • Limited (Pairwise) Independence. • Coupling. Students should have taken at least CS 124 or its equivalent. Students should be able to program in a standard programming language; C or C++ is preferred. Knowledge of probability will be extremely helpful; however, the necessary probability will be covered in class. Students with less probability background will find it necessary to undertake some extra reading and preparation on their own outside of class. The course will have homework assignments due roughly every week. The assignments will be of two types: some assignments will consist of theoretical problems and programming exercises based on the material. Other assignments will involve reading a paper and being prepare to derive results from the paper in class for discussion. The homework will be worth roughly 2/3 of your grade. The remainder will be based on a take-home final exam. All assignments will be due at the beginning of class on the appropriate day. Late assignments are not acceptable without the prior consent of the instructor. Consent will be given for reasonable extenuating circumstances, including medical crises, job interviews, attending conferences, family situations, visiting potential graduate schools, etc. Required Text The class will be based on a book written by the instructor. The book is Probability and Computing: Randomized Algorithms and Probabilistic Analysis. There is an Amazon link available on the instructor's home page. For students who want more background in probability, there are many basic standard texts in the library. Sheldon Ross has written several introductory books; my personal favorite is "Introduction to Probability Models." Class Information/Notes Class notes, homework assignments, and other information will be made available on the Web when possible. For access go to the class web site. 
Generally this information will be available in Postscript and/or PDF. In many cases, the class web site may be the only location where information is posted or available, so look in from time to time! Student Lunches In order to ensure that all students, and especially undergraduates, have a chance to interact with me, I plan to arrange at least one day every other week where I will be available to go to lunch with students at the dining hall. Feel free to invite me! I'd be happy to talk about applying to graduate school, thesis topics, or whatever you want to talk about.
{"url":"http://www.eecs.harvard.edu/~michaelm/CS223/syllabus.html","timestamp":"2014-04-19T01:48:00Z","content_type":null,"content_length":"5045","record_id":"<urn:uuid:f0e43d49-b250-4dab-900d-022ba59de6b8>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
maths - geometry Number of results: 28,169 please help im maths sums of geometry Wednesday, August 22, 2007 at 7:18pm by neha Geometry : It is derived from an Arabic word genest which means figures. Thus geometry means problems based on figures. Saturday, May 10, 2008 at 3:09am by PULKIT Applied Geometry maths A Maths Applied student holidaying in Sydney notices that the Centrepoint tower has an angle of elevation of 15deg20'. After walking 400m directly towards the tower she now observes the elevation to be 24deg33' Wednesday, June 19, 2013 at 12:15am by Mandy do to you know geometry Saturday, May 10, 2008 at 3:09am by skye What are the uses of Geometry? Tuesday, February 26, 2008 at 9:28am by Susan Maths (Geometry) Wednesday, May 29, 2013 at 4:24am by Mathslover It is the definition of a parallelogram Tuesday, December 4, 2012 at 11:10pm by Mac Sunday, July 22, 2012 at 11:02pm by maths maths (geometry) False - only length Tuesday, June 18, 2013 at 12:49pm by Steve maths..... geometry word problems You're welcome. Sunday, September 8, 2013 at 9:30am by Ms. Sue how can we prove that parallelogram has parallel sides?? Tuesday, December 4, 2012 at 11:10pm by Anamika maths geometry (x-2)+x+(x+2) = 153 49,51,53 Thursday, December 27, 2012 at 10:36am by Steve maths..... geometry word problems thank u Miss. Sue Sunday, September 8, 2013 at 9:30am by leena http://www.khanacademy.org/math/geometry/basic-geometry/circum_area_circles/v/circles--radius--diameter-and-circumference Watch this teaching video. Take notes; replay if needed; work it out. Then let the math tutors know the answers you come up with. Someone will then be able... Thursday, July 11, 2013 at 9:28am by Writeacher maths geometry 1st --- x 2nd --- x+2 3rd ---- x+4 solve for x: x + x+2 + x+4 = 153 Thursday, December 27, 2012 at 10:36am by Reiny how do we prove that opposite angles and sides of a parallelogram are equal in measure? Sunday, August 17, 2008 at 10:44am by fsds maths - geometry i had fill out a chart stating the prperties of certine shapes Wednesday, April 8, 2009 at 3:36pm by kenneth maths (geometry) decide whether the statement is true or false, A line has length and breadth Tuesday, June 18, 2013 at 12:49pm by osa UET Peshawar P[C.S]=270/500 P[Maths]=345/500 P[C.S ∩ Maths]=175/500 ( P[C.S U Maths] )' = ? ANS. ( P[C.S U Maths] )' = 1-P[C.S U Maths] =1-{P[C.S] + P[Maths] - P[C.S ∩ Maths]} =1-{270/500 + 345/500 -175/500 } =1- {440/500} =1-0.88 =0.12 Friday, June 3, 2011 at 2:53pm by Sheraz 1.in which geometry do lines no contain an infinite number of points? (a) plane coordinate geometry (b) discrete geometry (c) graph theory 2. in which geometry do points have thickness? (a) plane coordinate geometry (b) discrete geometry (c) graph theory i think the answer is... Tuesday, August 30, 2011 at 4:39pm by SoccerStar maths geometry the sum of three consecutive odd natural number is 153 find the numbers Thursday, December 27, 2012 at 10:36am by adis Make a complete re search project on maths is interconnected with other subject i.e. Maths Vs computer using art with maths. Tuesday, June 5, 2012 at 4:01pm by Shivansh Tomar maths - geometry if the points A(1,-2) B(2,3) C(-3,2) and D(-4,-3) are the vertices of the paralleogram ABCD, then taking AB as the base, find the height of parallelogram. Sunday, March 3, 2013 at 10:04am by Anonymous 120/4 = 30 cm http://www.mathsisfun.com/geometry/rhombus.html Wednesday, April 24, 2013 at 1:37pm by Ms. 
Sue Applied Geometry maths How would I find out how far away from the base of the tower she was when she made her first measurement of 15deg20' Wednesday, June 19, 2013 at 12:15am by Amanda I need an answer--molecular/electron geometry Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry. Thanks :) *My earlier post wasn't answered, so I'd appreciate some help please =] Tuesday, February 8, 2011 at 8:03pm by Kate Molecular and Electron geometry Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry. Thanks :) *My earlier post wasn't answered, so I'd appreciate some help please =] Tuesday, February 8, 2011 at 6:47pm by Kate Maths B Charlotte received a score of 68 on both her English and Maths tests. The mean for English was 52 and the mean for Maths was 55. The standard deviations for English and Maths were 10 and 8 respectively. In which subject did Charlotte perform better? Explain your decision. Thursday, May 24, 2012 at 5:49am by Anna Maths B Charlotte received a score of 68 on both her English and Maths tests. The mean for English was 52 and the mean for Maths was 55. The standard deviations for English and Maths were 10 and 8 respectively. In which subject did Charlotte perform better? Explain your decision. Thursday, May 24, 2012 at 9:02pm by Anna geometry (maths) ABCD is a rectangle, in which BC=2AB. A point E lies on ray CD, such that CE=2BC. Prove that BE perpendicular AC. Monday, June 6, 2011 at 4:25am by math... maths letaracy,history,geography and life science im going to grade 10 next year,and i want to be a psychologist and im going for maths letaracy and history feild im realy not good with maths and i just found out a few weeks ago that i need pure maths to do psychology.i dont know what to do now ,i also dont know the jobs ... Saturday, October 26, 2013 at 8:02am by khanyisa maringa well the only thing about maths is that you get stuck on it which is so annoying well the way i know how to do maths is if your mum or dad has got a calculater then sneak into were they keep it an and use it for your homework or you could just get a nerd to do your maths now ... Sunday, April 13, 2008 at 4:19pm by permicouas well the only thing about maths is that you get stuck on it which is so annoying well the way i know how to do maths is if your mum or dad has got a calculater then sneak into were they keep it an and use it for your homework or you could just get a nerd to do your maths now ... Sunday, April 13, 2008 at 4:19pm by permicouas A letter is picked at random from the English alphabet. Find the probability that the letter : a) Appears in the word GEOMETRY Wednesday, May 22, 2013 at 4:38am by arzam Geometry..... reposted ques by me the maths tutor where online because ques which were posted after me were answered, and why reposts will be deleted Friday, September 6, 2013 at 7:59am by leena Molecular and Electron geometry Can anyone help me out with the differences between electron geometry and molecular geometry for SCl6? I think the molecular geometry is octahedral, but I'm not so sure for the electron geometry. Thanks :) Monday, February 7, 2011 at 7:02pm by Kate maths..... geometry word problems the length of a rectangle is 8 cm more than its width. if its perimeter is 56 cm, find its area. 
Sunday, September 8, 2013 at 9:30am by leena i am told to do a project in maths it should involve a maths consept.pls help as to what should i do? Sunday, November 18, 2007 at 4:43am by chandini Maths (coordinate geometry) two lines y=-2x+6 and y=1.5x+1 intersect. What is the size of the acute angle at the intersection? Ive got 59.4 degrees. Any confirmation/slight alteration? Wednesday, March 26, 2008 at 7:09am by Mikal What is your geometry question? What exactly do you need help with in terms of geometry? Write back.... Saturday, February 16, 2008 at 8:37pm by Guido Geometry riddl How did they like the story about towel manufacturing? Also, where can I find geometry worksheets on plane and simple? ? ? ? Tuesday, September 26, 2006 at 9:24pm by Debra hah funny thing. i put geometry instead of my name cus geometry was my question but my names anna. =P but no problem Tuesday, March 11, 2008 at 6:29pm by Geometry write a paragraph advantage of math math is scienc subject it plays very important role in every subject even every science subject is incomplete without maths these science subject comes after maths maths subject adventure of these subject maths is Saturday, July 2, 2011 at 5:34pm by Anonymous Take a look at paragraph 5 of http://www.regentsprep.org/Regents/math/geometry/GP15/CircleAngles.htm Friday, March 15, 2013 at 2:41pm by Steve "An measure of an exterior angle of a triangle is equal to the sum of the measures of the two non-adjacent interior angles." http://www.regentsprep.org/Regents/math/geometry/GP5/LExtAng.htm 126/6 = 21 42 and 84 Saturday, March 2, 2013 at 1:36pm by Ms. Sue MATHS GEOMETRY!!! URGENT PLEASE. A triangle has side lengths 10,17 and 21cm. Find the length of the shortest altitude. And please show working out to help me understand. Friday, February 15, 2013 at 1:56am by Edward Maths- very very simple Thursday, February 14, 2013 at 5:49pm by Ms. Sue i didn't understand the concept of symmetry in maths ? what is its use in the field of mathematics? what is tha aplication of symmetry of symmetry in various fields. somevdy please answer Saturday, June 14, 2008 at 10:30am by kate 7th grade geometry I need help desperately with geometry. the geometry is Area: Parallelograms, Triangles, and Trapezoids. It's really hard for me. Only if you get the concept. Anyway, Pleaswe help me!!!!!!!!!! Monday, January 28, 2008 at 6:52pm by Janelle-Marie Wednesday, April 2, 2014 at 11:23am by Ms. Sue A school has eleven Year 9 Maths classes of 27 students. Each class has 4 extra students added to it. Find the total number of students studying Year 9 Maths. Thursday, February 26, 2009 at 1:41am by Jessica Maths (Geometry) P is a point in rectangle ABCD. The distance from P to the 4 vertices of the rectangle are 7,15,24 and N in some order. If N is an integer, determine the value of N. Wednesday, May 29, 2013 at 4:24am by ABCD maths geometry d^2 = 36^2 + (30-15)^2 = 36^2 + 15^2 d = 39 the tops form the hypotenuse of a 5-12-13 right triangle (scaled by 3) Thursday, December 27, 2012 at 10:25am by Steve In a maths class of 13 students, 2/3 of boys and 1/4 of the girls love maths. How many girls love maths Tuesday, March 27, 2012 at 10:33am by kayode maths geometry two poles 15 meter and 30 meter high stand upright in a playground if their feet are 36 meter apart find the distance between their tops. Thursday, December 27, 2012 at 10:25am by adis Construct ∆ABC, AB=6cm, BC=9cm,CA=8cm. 
construct tangents from the vertex C to the circle through AB as the diameter Wednesday, March 6, 2013 at 5:49am by anoynomous Maths B Andrew got 68 for both English and Maths. The mean for english was 52 and the mean for Maths was 55. The standard deviations were 10 and 8 respectively. Determine the subject in which andrew did Saturday, March 9, 2013 at 10:56pm by Will Im sorry!!! Its so late and im half asleep. Worse time to do maths. I get it reiny!!!!!!! Thank you So the second point is -4 = -4 therefore it must be the correct equation as both points satisfy the Wednesday, January 30, 2013 at 6:25pm by Sal When you say shape do you refer to the electronic geometry or the molecular geometry. I assume molecular geometry. PH3, CHCl3 and PO4^3- are correct. H2S is angular (like water). H | S-H Wednesday, December 7, 2011 at 10:42am by DrBob222 I am really facing difficulty in maths. I do my best to improve in maths but after entering in the examination hall during my exams, i just get confused. please help The proper name for the subject is mathematics or just "math" for short. Calling it "maths" calls attention to ... Wednesday, June 14, 2006 at 7:23am by devendri among the exames in the examination 30%, 35% & 45% failed in statistics, in maths, & in atleast one of the subjects respectively. one student is selected at random. find the probability that: (a)he has failed in maths, (b)he passed in statistiics if it is known that he has ... Wednesday, October 27, 2010 at 12:24am by hhhh among the exames in the examination 30%, 35% & 45% failed in statistics, in maths, & in atleast one of the subjects respectively. one student is selected at random. find the probability that: (a)he has failed in maths, (b)he passed in statistiics if it is known that he has ... Wednesday, October 27, 2010 at 12:32am by hhhh maths..... geometry word problems P = 2L + 2W 56 = 2(W + 8) + 2W 56 = 2W + 16 + 2W 40 = 4W 10 = W Sunday, September 8, 2013 at 9:30am by Ms. Sue Yeah I thought it was physics but it's in my maths practice test :S Saturday, January 31, 2009 at 4:50am by Renee <3 Could somebody help me with this Maths squence 2N + 2, 3N - 1 , 10N + 6 Thursday, September 27, 2007 at 2:08pm by Robert How to prepare a working model in maths for standard IV? Thursday, May 1, 2008 at 7:53am by pawan Please show me the way of using formula log? Friday, October 14, 2011 at 4:50pm by Maths For each property listed from plane Euclidean geometry, write a corresponding statement for non-Euclidean spherical geometry. A line segment is the shortest path between two points Hey guys I want to know how to do this and can you help me with this question? Sunday, May 29, 2011 at 11:19am by Anonymous Hi Can someone give me a help in finding maths quiz websites for Ket Stage 3 Saturday, April 21, 2007 at 5:15am by Fahad i have a lot of maths HW and i dont understand how to Factorise??? Pleasea help me! Friday, October 26, 2007 at 10:10am by Vicky maths - geometry The line AB is y+2 = 5(x-1) 5x - y - 7 = 0 The distance from C to AB is |5(-3)+(-1)(2)-7|/√25+1) = 25/√26 Sunday, March 3, 2013 at 10:04am by Steve pplz tell me about maths working modals froom where can i down load then???? Thursday, January 22, 2009 at 5:54am by pulkit What is the best website which has printable free worksheets for grade 7 with the maths topic : decimals? thank you for your help. Sunday, January 23, 2011 at 5:19am by Alberta how to get the top rank in various maths olympiads...? Thursday, May 26, 2011 at 5:46am by ??????? 
what are the applications of maths in our daily life Monday, May 20, 2013 at 1:28am by Gagandeep Is given a function: f(x)=x2 + px + q Prove that: f(x+1) + f(x-1)= 2+2 f(x) Tuesday, August 20, 2013 at 1:48pm by Maths history,geography,life science,maths or maths lit While you are STUDYING those subjects?? No. Thursday, December 5, 2013 at 3:01pm by Writeacher Hey :) I'm sitting my maths non-calculator GCSE tomorrow and am currently revising. I'm doing a practise paper and am currently stuck on this question: Express A in terms of w. 3w + 20 A -------- = -------- 200 A + 12 -- What does the question mean? -- Can you explain to me ... Sunday, May 17, 2009 at 9:32am by Sarah Thanks. The final answer is 8 (if I'm right) but that means its not a fraction. so do I have to do something else or am I that bad at maths that I'm wrong again!! Wednesday, February 9, 2011 at 10:27am by Marian ms.sue can you practice me in maths for s.e.a like give me sites to do test then you will correct when i post the answers Sunday, March 17, 2013 at 1:37pm by fred We are given that log102<0.302. How many digits are there in the decimal representation of 5^500? Tuesday, April 30, 2013 at 5:56am by MAths lover Please Helppppp We are given that log10 2<0.302. How many digits are there in the decimal representation of 5^500? Tuesday, April 30, 2013 at 6:10am by MAths lover Please Helppppp economics business studies maths literacy i want to study financial management but in doing maths lit so what does this course requires? Thursday, January 24, 2013 at 8:38am by lerato bopape history,geography,life science,maths or maths lit can i be a social worker while am those subject Thursday, December 5, 2013 at 3:01pm by kamva solve the following. 1) x^3+3y^3+x+2y Friday, June 10, 2011 at 7:28am by MATHS im prett sure thats wrong Wednesday, August 29, 2012 at 4:38am by MATHS!!!!!!!!!!!!!!!!!! which is valuable abacus or vedic maths? Sunday, December 16, 2012 at 1:00pm by Anamika It comes from an Olympiad Maths problem Monday, June 10, 2013 at 10:54pm by Anonymous Please help on geometry - analytic geometry Knights, If you are in analytic geometry, I am pretty certain your teacher expects an analytical approach, not a graphical approach. See the example I posted. Friday, February 22, 2013 at 5:21pm by bobpursley i was in honors in jr. high and my new high school doesn't offer it so i'm stuck in just regular geometry. I also think algebra 2 is easy it is just that geometry gets you sometimes. Tuesday, October 2, 2007 at 10:48pm by alan In an interview of 50 math majors, 12 liked calculus and geometry 18 liked calculus but not algebra 4 liked calculus, algebra and geometry 25 liked calculus 15 liked geometry 10 liked algebra but neither calculus nor geometry 2 liked geometry and algebra but not calculus. Of ... Monday, January 11, 2010 at 8:45pm by Anita maths geometry Two parallel cords of a circle with radius 25 cm length of 30 cm and 48 cm. Find the distance between the two cords Tuesday, April 1, 2014 at 2:19pm by praddy Mom, You are never too old to re-visit maths. Many of my students studying undergraduate maths and chemistry are 35+. Tuesday, September 18, 2007 at 11:28pm by Dr Russ can any one tell me wat are the topics for class 10th maths project for the year 2010-2011. Saturday, May 15, 2010 at 12:54pm by prateek nayak Say the probability he'll pass English is Prob(E) = 0.6. The probability he'll pass in both English and Maths is Prob(E&M) = 0.54. 
Provided the probability that he'll pass English is independent of the probability that he'll pass Maths (and note that's an assumption we're ... Monday, May 2, 2011 at 12:56pm by David I need an answer--molecular/electron geometry The molecular geometry is octahedral and you are correct. If you draw the Lewis structure, you will see it has no unpaired electrons on the central atom(S); therefore, the electronic geometry is octahedral, also. Tuesday, February 8, 2011 at 8:03pm by DrBob222 A store clerk sold 25 maths books and 10 english books for $855.00 .if she sold 10 maths books and 40 English books, she would have got $135 more. calculate the cost of each book. Tuesday, February 9, 2010 at 3:38pm by JERMAINE Student has a probality to pass in english is 60% , and probality to pass in english and maths is 54% , what is probality he will fail in maths Monday, May 2, 2011 at 12:56pm by indranil chatterjee 8. In an interview of 50 math majors, 12 liked calculus and geometry 18 liked calculus but not algebra 4 liked calculus, algebra, and geometry 25 liked calculus 15 liked geometry 10 liked algebra but neither calculus nor geometry 2 liked geometry and algebra but not calculus. ... Monday, November 30, 2009 at 9:02pm by poo nevermind, after reading ahed in my maths book, I think I'm finally starting to get it; still, thank you for always helping me whenever I don't understand my homework! :) Tuesday, October 16, 2007 at 9:46pm by Audrey Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/search/index.cgi?query=maths+-+geometry","timestamp":"2014-04-17T16:42:53Z","content_type":null,"content_length":"33650","record_id":"<urn:uuid:9bb32dbe-08e3-4269-b5f9-6408b09586c1>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
The waitress and the hostess at a restaurant share the tips at the end of the shift. The average amount earned in tips between the two of them is $75. The waitress gets $45 and the hostess gets $30. If they continue to earn money at this rate, how many dollars will the waitress receive if they earn $350 at the end of the shift? A) $216 B) $210 C) $213 D) $225 The waitress and the hostess at a restaurant share the tips at the end of the shift. The average amount earned in tips between the two of them is $75. The waitress gets $45 and the hostess gets $30. If they continue to earn money at this rate, how many dollars will the waitress receive if they earn $350 at the end of the shift? A) $216 B) $210 C) $213 D) Expert answered|selymi|Points 892| Asked 2/3/2011 7:57:49 AM 0 Answers/Comments There are no new answers.
{"url":"http://www.weegy.com/?ConversationId=84117AB8","timestamp":"2014-04-16T04:25:34Z","content_type":null,"content_length":"41975","record_id":"<urn:uuid:9c735f68-309b-4656-baa7-38701f1021de>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Pavel Materna: "Sense, Denotation, Reference : A terminological/philosophical Summary: The terms sense, meaning, denotation, reference are mostly used without any critical attempt at defining them. So it frequently happens that reference is used promiscue with denotation. The paper shows that at least the approach known as 'transparent intensional logic' is able to offer such definitions which, among other things, make a fundamental distinction between denotation and reference, and make it possible to explicate Frege's Sinn in a most inspiring way. As a consequence of such definitions one basic intuition is supported, viz. that whereas the sense and the denotation of an expression are given (relatively) a priori, reference cannot be unambiguously determined by the sense and is co-determined by the state of the world. A logical apparatus is briefly suggested which enables us to exactly formulate the above intuitions. Key words: sense, meaning, denotation, reference, functional approach, intensions, constructions. 1. Semantics vs. Pragmatics The topic of the present paper is a semantic rather than a pragmatic problem. Unlike Quine I do not replace semantics by pragmatics, and my 'slogan' is ask for meaning before asking for use in contradistinction to the well-known Quinean-Wittgensteinian slogan (see [Materna 1998]). This means that it is abstract expressions what we try to analyze, not the concrete events of uttering From this viewpoint we must be aware of the fact that the genuinely semantic entities like sense, meaning, denotation are given (relatively) a priori; this viewpoint is, of course, distinct from the viewpoint of theoretical linguistics, since the fact that a certain expression possesses a certain meaning (in the given language) is from the latter viewpoint contingent, whereas for semantics the given linguistic convention is supposed to be already given, so that the given expression necessarily possesses such and such meaning(s), so that we can 'calculate' the meaning independently of empirical facts. The same result holds for denotation, since the meaning should unambiguously determine the denotation. One of the results of our analysis consists in the claim that reference - unlike denotation - is not a priori. The area given by this specification can be called logical analysis of natural language (LANL), and it is important especially for philosophical logic, which should determine the class of correct arguments. To adduce an example, the reason why the following argument is not correct, although it seems to be due to the application of the Leibniz's principle The number of (the major) planets is nine. Necessarily, nine is greater than seven. Necessarily, the number of (the major) planets is greater than seven. can be found by LANL but not by any descriptive theory such as a general theory of language. 2. A (not only) terminological mess in semantics The origin fo the contemporary semantics and LANL is, at the same time, the origin of fundamental confusions in this area. I mean Frege's classical [Frege 1892]. Already from the terminological point of view, Frege's term Bedeutung is confusing: in German it means what should or at least could be translated as meaning, but if meaning is what makes us understand the given expression (this is one of the few points where a nearly general consensus among semanticists can be expected), then Frege's Bedeutung surely does not fulfil this role, especially when Bedeutung of a sentence is 'its' truth-value. 
The role of meaning is played by Frege's Sinn instead. Therefore the Fregean logician Alonzo Church (see [Church 1956]) has chosen the term denotation, translating Bedeutung into What led Frege to his distinguishing between sense and denotation is well-known. Two problems he began to solve and failed to do it inspired great a many philosophers and logicians to writing articles and monographs handling these problems, but mostly the source of Frege's failure has not been recognized, or at least a wrong therapy has been chosen. The vagueness of Frege's characteristics of sense and his not consistent and too coarse-grained use of the denoting relation have been mostly inherited by his followers. Let aus articulate the mentioned problems: A. What is sense? B. What does an (empirical) expression denote? Ad A.: The only formulation used by Frege to characterize sense is die Art des Gegebenseins, viz. der Bedeutung ("mode of presentation"), which, on the one hand is too vague to be construed as being a definition, and, on the other hand, cannot decide whether sense is structured (this can be implied from his first example with medians of a triangle), or not (see his more popular example with morning star vs. evening star). Ad B.: The second question begins to be a problem in virtue of the fact that the relation between an expression and the object it denotes (bezeichnet) is considered by Frege to be immediately obvious and essentially the same for empirical and non-empirical expressions. This Frege's opinion has been called by P.Tichý Frege's Thesis and formulated as follows: [u]nder the meaning (i.e., Bedeutung, P.M.) he does not understand what is connected with it by linguistic fiat, but rather the object which is presented thereby. [Tichý 1992, p.5] It should be intuitively clear that Frege's Thesis is strongly counterintuitive. Consider the case of empirical sentences. A consequence of Frege's Thesis is that such sentences denote truth-values (cf. the well-known 'slingshot argument', e.g., in [Church 1956]). As for the denotation we would have only two sentences, one denoting Truth, the other Falsity. Not only that, but in the case of empirical sentences which use to change their truth-value the denotation of such sentences would change with changing facts: the a priori character of denotation would disappear. And the general consequence thereof is that denotation is no more unambiguously determined by the sense, which contradicts Frege's intentions and his characteristics of sense (see above). The contemporary semantics is subconsciously dissatisfied with the term denotation. Sometimes or probably mostly it simply replaces it with the term reference. A typical example is Linsky's article on referring in [Linsky 1967]; there only the term refer is used where Church would use denote, and no distinction between the medians-example and the morning star vs. evening star-example is seen. We can see, however, that in the medians-example the point of intersection is unambiguously given by the sense of the respective expression, whereas the sense of the expression morning star - be it anything - cannot be said to unambiguously determine Venus. Thus we state the regrettable fact that at least four terms important for semantics/LANL are used without any rule based on definitions, without any more or less exact justification: they are meaning, sense, denotation, reference. 
One ambiguity could be tolerated: the term meaning can be used promiscue with the term sense, because the way the former is used seems to correspond with what Frege meant introducing the latter. Let us use meaning for Frege's sense, therefore. Our question can be formulated as follows: What is the distinction - if any - between meaning, denotation, and reference? 3. Objects denoted: the elementary case We will try to show that the conceptual framework proposed and successfully applied by Pavel Tichý in his transparent intensional logic (TIL) is able to disambiguate the above mentioned chaotic use of terms. Tichý himself did not do it explicitly on the level of terms (he also sometimes uses reference instead of denotation) but his system, explained in many articles and in his last book [Tichý 1988], makes it possible to correct Frege's Thesis and define meaning (no more an 'obscure entity'). Referring for details to his work (and perhaps to my [Materna 1998]) we will only describe some fundamental features and notions of TIL. Our first result will consist in our specification of the denoting relation in the most usual (elementary) case. Should what is denoted by an expression be independent of empirical facts then no expression should be construed as denoting distinct objects. This principle contradicts Frege's Thesis, according to which the expression the richest man in Europe denotes one individual at the time point T and another individual at the time point T'. Also, it is not at all clear how that expression could via its sense unambiguously determine - even at the same time point - the individual who in fact is the richest man in Europe - we have to inspect empirical facts to identify such an individual. Thus we can state that the variability of the denotation - which seems to contradict our principle - is of twofold character: it is a temporal and also a modal variability. To solve this problem we have to apply a principle formulated in [Janssen 1986]: according to it if we are tempted to say that the denotation (Janssen speaks about meaning) depends on some circumstances, then let these circumstances be arguments of a function and say that this very function is the denotation. Now to be able to handle temporal variability we need time points as elementary entities, and to be able to handle modal variability we need possible worlds. So the area of objects that can be in the elementary case denoted should be built up from - among other things - time points and possible worlds. This is not sufficient, of course. We need, of course, most simple material objects called usually individuals, and no discourse can be realized without two simple objects called truth-values, say, T, F. TIL is a type-theoretical system based on the above elementary types. Using o (Greek omíkron) for the set {T, F}, i (iota) for the set of individuals, t (tau) for real numbers and time points, and w (ómega) for the set of possible worlds, we define types of order 1: i) o , i , t , w are types of order 1. ii) If a , b [1], ..., b [m] are types of order 1, then (a b [1]...b [m]), i.e., the set of all partial functions with arguments (tuples) in b [1], ..., b [m] and values in a , is a type of order 1. iii) Nothing other is a type of order 1. ~ It can be shown that this definition covers all (important?, just all?) kinds of object which can be denoted in the elementary case, i.e., if we ignore the case when an expression denotes a meaning (of another expression). 
For we should be able to denote truth-values, individuals, numbers, classes of objects of any type, relations-in-extension of various types, propositions, properties of objects of any type, relations-in-intension of any types, magnitudes, etc., but all of these objects can be associated with a type. So let a be any type. A class of objects of the type a is an (o a ) -object, since it can be identified with a characteristic function: with any a -object such a class associates T if it is a member of it, and F otherwise. An easy analogy holds for relations-in-extension (the general schema is (o b [1]...b [m]) ). The remaining examples of objects represent intensions. What is a proposition? Setting aside the Russellian "structured propositions" we can accept the contemporary convention that holds among the possible-world-semanticists, and construe propositions as functions from possible worlds to (chronologies of) the truth-values. The modal variability is annihilated by taking propositions to be functions from possible worlds, the temporal variability disappears as soon as the value of such function is not simply a truth-value but rather a chronology of truth-values, i.e., a function from time points to truth-values. According to our definition the type of propositions is ((o t )w ). In general, intensions are functions whose type is ((a t )w ) for a any type. We will use the abbreviation accepted in TIL, writing a t w instead of ((a t )w ). So we have type (o a )t w for properties of a -objects, type (o b [1]...b [m])t w for relations-in-intension, type t t w for magnitudes, etc. Especially such objects that Church called individual concepts and are denoted by (empirical) definite descriptions, like our example the richest man in Europe, are i t w -objects. Against Frege's Thesis TIL shows that empirical expressions can denote only intensions. Thus empirical sentences denote propositions rather than truth-values, and rightly so: imagine that a logical analysis of an empirical sentence would discover the truth-value 'denoted' by that sentence. Then no verification of empirical sentences would be necessary: instead, any logician would be able (in principle) to 'calculate' the respective truth-value. Thus a first distinction can be stated: An expression denotes the object (if any) which is unambiguously determined by its sense. An empirical expression denotes an intension, never the value of the intension in the actual world. On the other hand: An expression refers to the object which is the value of its denotation in the actual world-time. 4. Sense (= meaning) Perhaps the most difficult task is to replace Frege's vague characteristics of sense by a more or less acceptable but in any case precise definition. Why we cannot accept Carnap's intensional isomorphism neither Cresswell's (and Kaplan's) tuples, is explained in [Tichý 1998, p.8-9] and [Tichý 1994, p.78]. We now only globally characterize Tichý's constructions and try to argue that they are probably the best starting point to explicating sense. Let us begin with any example from the area of arithmetics. So consider the expression 7 + 5 = 12 (in honor of Kant): anybody would agree that expressions 7, 5, 12 denote the numbers 7, 5, 12, respectively. What about + and = ? Even in this point only some very stubborn nominalists would disagree that + denotes the function of adding (its type being (t t t ) ), and = the identity relation (type (o t t ) ). What does the whole expression denote should also be clear - it is the truth-value T. 
Now what is the sense of the above equality, as the way to this T ? Accepting the useful principle of compositionality we claim that the sense of that equality is unambiguously determined by the senses of the particular parts of it. So let us ask what senses are expressed by these components, i.e., by the expressions 7, 5, 12, +, = . Understanding these simple expressions (= knowing their senses) means to be able to identify the respective numbers and functions (even relations are functions, viz. characteristic functions). Now we can have various ways to such an identification, but to get such a way for every object presupposes that there are some primitive ways where we have to stop to avoid regressus ad infinitum. Thus our first claim is that there are some primitive senses. Not claiming that we need them just when analyzing our equality we will choose them for the sake of a didactic explanation. But given that we have some primitive senses at our disposition we have to answer a second question : In which manner do the primitive senses combine so that they result in determining the non-primitive sense of the whole expression? Many semanticists seem not to see this problem. They say that this 'synthesis' is determined by the grammar of the given language. (see, e.g., [Sluga 1986]). But then we can say with Tichý in [Tichý 1988, p.36-37]: If the term '(2 × 2) - 3' is not diagrammatic of anything, in other words, if the numbers and functions mentioned in this term do not themselves combine into any whole, then the term is the only thing which holds them together. The numbers and functions hang from it like Christmas decorations from a branch. The term, the linguistic expression, thus becomes more than a way of referring to independently specifiable subject matter: it becomes constitutive of it. Independently of this excellent formulation the French computer scientist J.-Y. Girard formulates a very similar thought, saying [Girard 1990] about the equality 27 × 37 = 999: [t]he denotational aspect ... is undoubtedly correct, but it muisses the essential point: There is a finite computation process which shows that the denotations are equal. It is an abuse... to say that 27 × 37 equals 999, since if the two things were the same then we would never feel the need to state their equality. Concretely we say a question, 27 × 37, and get an answer, 999. The two expressions have different senses and we must do something (make a proof or a calculation...) to show that these two senses have the same denotation. The last sentence uses the terms sense and denotation not in a standard way, it should be reformulated as follows: "... to show that the two expressions differ in senses but have the same denotation", but the idea is clear enough and is in full harmony with the idea of the preceding quotation. Thus we have suggested a motivation for the choice of Tichý's constructions the definition of which is contained in [Tichý 1988]. Here only main points: i) The primitive constructions are variables (as not linguistic expressions but 'incomplerte constructions' constructing objects dependently on valuation) and trivialization: where X is any objects (or even construction), ^0X constructs this very object without any change. ii) Composition is a construction symbolized by [XX[1]...X[m]], where X constructs (maybe dependently on valuation - this possibility will be presupposed in the following) a function (type (a b [1]...b [m]) ) and X[i ]constructs an object of the type b [i]. 
It constructs the value (if any) of that function on the arguments constructed by X[i] for 1L iL m. iii) Closure is a construction symbolized by [l x[1]...x[m]X], where x[1]...x[m] are pairwise distinct varaibles constructing objets of types b [1],...,b [m], respectively (types not being necessarily distinct) and X is a construction constructing members of a type a . It constructs a function in the way well-known from the l -calculi. (We have omitted two other constructions from Tichý's book, they are not important here.) Now our Kantian equation gets the following construction as its sense : (7. 5, 12 / t , + / (t t t ), = / (o t t ) ) [^0= [^0+ ^07 ^05] ^012] The way the constructions are defined makes it possible to derive the resulting denotation. The sense given by the above construction is, of course, distinct from the above chain of characters: the respective composition does not contain brackets etc. - the above chain of characters only fixes in a standardized way what the abstract procedure called composition does, it fixes particular 'steps' of that procedure. To show some example from the area of senses of empirical expressions, let us analyze the expression the highest mountain. We already know that what is denoted by this expression is an individual role, an i t w -object and that Mount Everest, not being mentioned in this expression, is not the right candidate. (it is 'only' the reference of that expression.) Now looking for the way in which the i t w -object is determined, i.e., looking for the sense of our expression, we have to determine the primitive senses of the highest and mountain. Type-theoretically, mountain is clear: it denotes a property of individuals, so an (o i )t w -object; let its primitive sense be (for the sake of simplicity) ^0M(ountain). The highest is type-theoretically not as simple, but we can determine the type of the denoted object after some brief consideration. Our first claim will be that the expression is an empirical one. Thus its type will be a t w for some type a . But a has to be a type of a function: this function, if applied to a class of individuals, selects that individual (if any) which is the highest one in that class. So a is obviously a function of the type (i (o i )). The whole type is thus (i (o i ))t w [.] The primitive sense of the highest will be ^0H(ighest). The sense of the whole expression has to be such a construction which would construct the individual role that an individual has to play to be the highest mountain, so the type of the constructed object (and thus of the denotation of our expression) will be i t w . Not having at our disposal up to now the theory which would apply the above frame to a particular language we have to replace precise rules by intuitive considerations, which, however, are of key If we choose w as the (say, the first one) variable ranging over possible worlds, and t as the variable ranging over time points, we can see that the construction we look for will be [l wl t X] for X being a construction constructing (maybe dependently on valuation) individuals. To find such a construction we have to use the constructions ^0M and ^0H. The first suggestion is: observe that the value of H in the given world-time is a function from classes of individuals to individuals. Thus applying H to (first) a possible world and (then) to a time point we get simply a function from classes of individuals to individuals. 
Writing, in general, X[wt] instead of [[Xw]t], we have as the construction of an individual if Y constructs a class of individuals. Could Y be ^0M ? Surely not, since M is not a class but a property. But applying M to world-times represented by thevariables w, t we get a class (a definite class after w and t are evaluated). So we have now (omitting the outermost brackets): l wl t [^0H[wt]^0M[wt]] and can check that this construction serves as the sense of our expression to determine the intended denotation: According to the above definitions or characterizations we can see that the above construction constructs the function whose value in the world W is a chronology which at the time point T returns that individual (if any) which is (in W at T) the highest one in the class of those individuals that are in W at T mountains. But this is exactly what we mean by that expression. We cannot mean Mt Everest by it, since it is not mentioned in it; indeed we do not believe that this expression is a well formed expression of English because of the necessity to have some other name for Mt Everest --this expression is fully meaningful even at the time when nobody knew which mountain is the highest one. Of course, if we knew a priori which of the possible worlds is the actual one we would be able to 'calculate' Mt Everest from the sense of the expression the highest 5. Conclusion The above conceptual frame makes it possible to distinguish semantic relations which are frequently confused together. To sum up the results in a brief form we can state: We can distinguish between the relations A. expression - sense (=meaning), expression - denotation, sense - denotation on the one hand, and B. expression - reference, sense - reference, denotation - reference on the other hand. We claim (and have adduced some arguments above) that The relations sub A. are (relatively) a priori, whereas the relations sub B are necessarily mediated by experience, and are, therefore, not a priori. One of the 'byproducts' of our conception is our ability to logically distinguish between synonymy and a mere logical equivalence. Only those expressions which share their sense can be called synonymous. The (logical) equivalence of the expressions E and E' means only that E and E' share their denotation. [Church 1956] Church, Alonzo: Introduction to Mathematical Logic I. Princeton [Frege 1892] Frege, Gottlob: Über Sinn und Bedeutung. Zeitschrift für Philosophie und philsophische Kritik 100, 25-50 [Girard 1990] Girard, J-Y.: Proofs and Types. Cambridge UP [Janssen 1986] Janssen,T.M.V.: Foundations and Applications of Montague Grammar. Part I. Amsterdam [Linsky 1967] Linsky, Leonard: Referring. In: Edwards,P.,ed.: The Encyclopedia of Philosophy 7, 95-99. The Macmillan Co & The Free Press, New York [Materna 1998] Materna, Pavel: Concepts and Objects. Acta Philosophica Fennica 63, Helsinki [Tichý 1988] Tichý, Pavel: The Foundations of Frege's Logic. De Gruyter [Tichý 1992] Tichý, Pavel: Sinn & Bedeutung Reconsidered. From The Logical Point of View, 1992/2, 1-10 [Tichý 1994] Tichý, Pavel: The Analysis of Natural Language. From the Logical Point of View 1994/2, 42-80. This paper has been supported by the grant No 401/99/0006 of the Grant Agency of Czech Republic Edited: 2000
{"url":"http://www.phil.muni.cz/~materna/sense.html","timestamp":"2014-04-21T12:26:46Z","content_type":null,"content_length":"44909","record_id":"<urn:uuid:af598115-7455-4389-843b-2adabc482274>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Marlton Math Tutor Find a Marlton Math Tutor ...Whatever the difficulty the student is having, I can help! I am especially effective in the following grades and subjects:Reading - Kindergarten - Adult; Math - Kindergarten - 8th grade (including Pre-Algebra and Algebra I); Science - Kindergarten - 8th grade; Social Studies/History - Kindergart... 23 Subjects: including prealgebra, reading, English, writing ...For instance, a kindergartner will learn that the letter b will make a certain sound. As students progress they will learn more complex letter sounds in combinations such as tion making a shun sound instead of the sound it looks like it should be. As the grades progress the level of phonics pro... 13 Subjects: including differential equations, logic, prealgebra, reading ...My writing score was an R. I believe I would be highly qualified to tutor and provide advice for the medical school application process. I have taken Genetics at The College of New Jersey as part of the Biology major curriculum. 27 Subjects: including precalculus, MCAT, ACT Math, algebra 1 ...My goal as a conceptual teacher is not to tell the student what to write, but to identify the student's ambition in each particular work, and to help him/her realize his/her own intentions. The balance of technical teaching and conceptual guidance will depend on the student's age, prior knowledge, and goals. I'm a lifelong musician. 8 Subjects: including algebra 1, algebra 2, prealgebra, Java ...So, before every issue, I would work one-on-one with each of my writers to introduce them to new writing techniques and work to rewrite their articles to prepare for print. I'm generally a math and writing nerd. When I teach, I find it most important to -Give perspective about the field of study we're covering. 25 Subjects: including algebra 2, geometry, algebra 1, differential equations
{"url":"http://www.purplemath.com/Marlton_Math_tutors.php","timestamp":"2014-04-16T13:41:24Z","content_type":null,"content_length":"23825","record_id":"<urn:uuid:ca5bcb45-430b-4676-b2eb-6e439d728e31>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Boltzmann Gas At the macroscopic level, the temperature of a substance is a parameter—measured by a thermometer—that indicates how hot or cold the substance is. At the microscopic level, the temperature—according to Ludwig Boltzmann—is a measure of the kinetic energy associated with the disordered motion of the atoms or molecules that constitute the sample. This Demonstration illustrates the relation between atomic motion and temperature in the case of a classical gas (Boltzmann gas) modeled as an ensemble of noninteracting hard spheres. Change the temperature and observe the effect on the motion of atoms or molecules. In classical microscopic gas kinetic theory a gas is modeled as an ensemble of hard spheres that do not interact other than by elastic collisions among themselves and with the container walls. If the walls are at a temperature then the particles' velocity components –in the case of thermodynamic equilibrium—obey Maxwell–Boltzmann distributions, for which the probability that a given velocity component, , lies in the interval is given by is the mass of the atom and is the Boltzmann constant (see also The Maxwell Speed Distribution ). According to the equipartition theorem, the average kinetic energy of each atom is , which shows that, at the microscopic level, the temperature is a direct measure of the energy associated with the disordered atomic motion. The atomic collisions with the container walls are at the origin of the force exerted on the wall that is commonly expressed as the gas pressure (see also Simulation of a Simple Gas Pressure Model ). The temperature here is measured in Kelvin (K), and in classical theory all motion comes to a halt at the absolute zero of temperature (). The absolute (Kelvin) temperature scale is related to the most commonly used temperature scales of Celsius and Fahrenheit by simple linear relations (see also Temperature Scales Celsius and Fahrenheit Thermometers
{"url":"http://demonstrations.wolfram.com/BoltzmannGas/","timestamp":"2014-04-19T22:38:15Z","content_type":null,"content_length":"47304","record_id":"<urn:uuid:944b24b0-0762-4a8e-a02c-22c93bc92818>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
All about map scale A most important property of every map is its scale. Understanding map scales will help you decide what kind of map is best for your purposes and to evaluate alternatives. Scale is the ratio between the length of a feature on the map and the actual length of the same feature on earth. As you shop The Map Center you will find that our product information often includes a ratio such as 1:30m or 1:500k. This ratio is the representative fraction, a brief, versatile way to state a map's scale. It works for people who think metric, miles, or whatever. This is simpler than you may think! Say your house is 50 feet long and you draw a floor plan of it that is 1 foot long. Here are six ways to say the same thing. 1:50 is the representative fraction of that floor plan. The drawing is one fiftieth of actual size. One inch on the drawing represents 50 inches on the floor. One millimeter on the drawing depicts 5 centimeters on the floor. A room one pinkie finger long on the drawing is 50 pinkie fingers long in your house. The representative fraction functions with any unit of linear measure as long as you use the same unit on both sides of the ratio. The number on the right side of the representative fraction is really the "bottom number" (the denominator) of a fraction just like the familiar 1/2, 1/8 or 1/50. The bigger the denominator, the smaller the fraction. We call 1:10,000 a large map scale and 1:1,000,000 a small scale because a millionth is smaller than a ten thousandth. These abbreviations are used in the Map Center database. k or K = thousand m or M = million var = variable. Usually means that the publication includes several maps at various scales. na = not applicable or not available 33k = map scale 1:33,000 1.75M = map scale 1:1,750,000 50k/12.5k = main map scale is 1:50,000; major inset(s) 1:12,500; or 2 sided map. The one perfect map would be a reasonable size, cover a very large area AND show every small detail. Well, no map can do all three. The tradeoff whereby depiction of physically small features decreases as area mapped increases is inescapable. This table shows a range of map scales with typical applications and characteristics of each. │Representative │Appx. Miles │ Description │ │Fraction │per inch │ │ │1:10,000 │1/6 mi. │Appropriate for a small city or the downtown of a large one, this scale permits all streets to be clearly shown and labeled, sometimes with one-way arrows. │ │ │ │Pedestrian passageways, attractions, monuments and "Footprints" of buildings can be shown. │ │1:24,000 │3/8 mi. │Generally adequate to show and name all city streets. The popular USGS topographic maps at this scale can show bends in foot trails and practically every pond and │ │ │ │watercourse. │ │1:50,000 │4/5 mi. │A city street map at this scale is crowded with small lettering and may omit some streets or names. Scale is ample to map connecting roads and major streets. │ │1:100,000 │1 5/8 mi │Fairly complete presentation of country roads, lakes, streams and parks is possible. An urban street grid may be depicted, but there is no room to label individual │ │ │ │streets. │ │1:250,000 │4 mi │In rural areas, very minor dirt roads can be shown. Good for metro area maps that show roads "to and through" cities. Nearly all towns and villages can be │ │ │ │identified. │ │1:500,000 │8 mi │Typical highway maps of small to medium sized states or small countries use a scale close to this. Small towns in crowded areas may be omitted. 
│ │1:1,000,000 │16 mi │Typical highway maps of large states or European countries use a scale like this. │ │1:5,000,000 │79 mi │A wall map of the United States may be at this scale. │ │1:10,000,000 to │158 - 789 mi │These small scales are for maps of continents and the world. │ │50,000,000 │ │ │ For some wall maps our database provides miles per inch. Example: 8.5mi/in. You can translate any representative fraction to miles per inch if you remember that there are 63,360 inches in a mile. For example, 1:250,000 means one inch on the map equals 250,000 inches on the ground. 250,000 divided by the number of inches in a mile, 63,360 equals 3.95. Scale 1:250,000 is about 4 miles per inch. Back | Help Pages- Table of Contents
{"url":"https://www.mapcenter.com/index.php?section=map_scale","timestamp":"2014-04-20T08:15:34Z","content_type":null,"content_length":"13549","record_id":"<urn:uuid:5a25a49e-7316-4983-aafa-3252e44c65b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Cosmic rays - minimum ionizing particles ? Vanadium 50 A proton of that high energy is nowhere near minimum ionizing. Also, it's losing a significant fraction of its energy through other processes, so ionization energy loss is only a tiny piece of the If an individual proton (a cosmic ray secondary) enters a 100-micron-thick silicon particle detector, there is less than 0.1% chance that there will be a nuclear interaction (nuclear cascade) before it exits. The nuclear interaction length is about 108 grams per cm . So the main (≈99.9% probability) signal would be the Bethe-Bloch dE/dx ionization. See Per Vanadium, a primary cosmic ray proton will develop a full nuclear cascade within ~450 grams per cm (5 interaction lengths) of the upper atmosphere, and never reach the ground. A lot of "cosmic rays" reaching the ground are actually muons from pion decay in the upper atmosphere.. Bob S
{"url":"http://www.physicsforums.com/showthread.php?t=381401","timestamp":"2014-04-20T21:20:47Z","content_type":null,"content_length":"34180","record_id":"<urn:uuid:37b6db74-6304-455e-9513-17410dd2bca7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
G01AHF Lineprinter scatterplot of one variable against Normal scores G01DAF Normal scores, accurate values G01DBF Normal scores, approximate values G01DCF Normal scores, approximate variance-covariance matrix G01DHF Ranks, Normal scores, approximate Normal scores or exponential (Savage) scores G01DHF Ranks, Normal scores, approximate Normal scores or exponential (Savage) scores G01EAF Computes probabilities for the standard Normal distribution G01FAF Computes deviates for the standard Normal distribution G01HAF Computes probability for the bivariate Normal distribution G01HBF Computes probabilities for the multivariate Normal distribution G01NAF Cumulants and moments of quadratic forms in Normal variables G01NBF Moments of ratios of quadratic forms in Normal variables, and related statistics G02GAF Fits a generalized linear model with Normal errors G05LAF Generates a vector of random numbers from a Normal distribution, seeds and generator number passed explicitly G05LZF Generates a vector of random numbers from a multivariate Normal distribution, seeds and generator number passed explicitly G07BBF Computes maximum likelihood estimates for parameters of the Normal distribution from grouped and/or censored data G07CAF Computes t-test statistic for a difference in means between two Normal populations, confidence interval S15ABF Cumulative Normal distribution function P(x) S15ACF Complement of cumulative Normal distribution function Q(x)
{"url":"http://www.nag.co.uk/numeric/fl/manual20/html/indexes/kwic/normal.html","timestamp":"2014-04-17T09:38:09Z","content_type":null,"content_length":"9091","record_id":"<urn:uuid:f1b604eb-a345-47b5-8c16-794e417c14e1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Household structure and infectious disease transmission One of the central tenets of modern infectious disease epidemiology is that an understanding of heterogeneities, both in host demography and transmission, allows control to be efficiently optimized. Due to the strong interactions present, households are one of the most important heterogeneities to consider, both in terms of predicting epidemic severity and as a target for intervention. We consider these effects in the context of pandemic influenza in Great Britain, and find that there is significant local (ward-level) variation in the basic reproductive ratio, with some regions predicted to suffer 50% faster growth rate of infection than the mean. Childhood vaccination was shown to be highly effective at controlling an epidemic, generally outperforming random vaccination and substantially reducing the variation between regions; only nine out of over 10 000 wards did not obey this rule and these can be identified as demographically atypical regions. Since these benefits of childhood vaccination are a product of correlations between household size and number of dependent children in the household, our results are qualitatively robust for a variety of disease Keywords: Household, influenza, modelling, transmission
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC2829934/?lang=en-ca","timestamp":"2014-04-23T12:17:49Z","content_type":null,"content_length":"81514","record_id":"<urn:uuid:a4169f5c-aa35-4ef3-a4db-3dbd06726ad6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
A Short Return to the Age-Earnings Profile March 8, 2011 By Adam.Hyland Two posts ago I mentioned the age-earnings profile but did not provide a regression of log earnings on wage. I also offered, without evidence, that fitting a simple linear regression would be inappropriate. How do I know that? How could we determine the appropriateness of a regression? There are a number of technical or econometric means to determine mechanically whether or not a regression is appropriate. We can test for the functional form with the Breusch-Pagan test (a story about which will be left for another time) or the White test. Both of these tests are specifically for heteroskedasticity, not the functional form. However if we can imagine a process where our model is: • $y_i = alpha + beta X_i + epsilon$ But the true process is • $y_i = alpha + beta X^2_i + epsilon$ Our residuals (different than the errors!) from fitting the first model on the second model will vary with the $X$ term, just as though our errors were heteroskedastic. But for simple enough models, we can take a step back and eyeball the regression. If we fit a linear model to a quadratic or otherwise partially linear term and plot the residuals against the $X$ term we should be able to see some shape emerge. If our model is very well fitted and the underlying process is linear then the residuals will be constant across independent variables. If our model is mis-specified (as in our example above) the residuals might look like this: The above plot is easily recovered by plot(lm(log(eph) ~ age, data=adams)), a command which will bring up a number of different diagnostic plots. Let’s fit a local regression to the data and see what comes out. We probably over-estimate the decline in earnings as age goes on, but this is much better than our linear regression. Some causes of mis-estimation might be within our capacity to easily solve. I mentioned in the last post that a proper age-earnings profile would correctly code the ages of workers in the dataset, subtracting years of schooling from age. We might also talk about non-wage compensation and how that may increase over time. Further, we have dropped all the zeros from our dataset, which is pretty inappropriate. Correcting for entry and exit from the labor force may change the shape of our profile. Code isn’t included because it is basically two lines stemming immediately from the past post. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/a-short-return-to-the-age-earnings-profile/","timestamp":"2014-04-20T13:24:58Z","content_type":null,"content_length":"39327","record_id":"<urn:uuid:77a2f33f-8d34-4f57-8636-10716ce3a18c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex subvarieties of hermitian symmetric spaces up vote 4 down vote favorite Assume that M=G/K is a non-compact Hermitian symmetric space, for G the real points of a semisimple (or even simple) algerbaic group and K a maximal compact subgroup. M admits the structure of a complex manifold. Now, if X is a complex subvariety of M then its pre-image in G is a real analytic subvariety of G. My question is: how does one recognize, among all real analytic subvarieties of G, those which project onto complex subvarieties in M? More concretely, if H is a real Lie subgroup of G (even real algebraic), when is it the case that its image under the projection map (into M=G/K) gives a complex subvariety of M? add comment 1 Answer active oldest votes I will explain how to check it in any particular case, but I am not sure whether there is a good classification available (perhaps, it is even easy and I did not think enough about it). Helgason's book and papers of Joe Wolf and Alan Huckleberry may contain some relevant information. Since the complex structure on $M=G/K$ is homogeneous, the image of $H$ is a complex subvariety if and only if its tangent subspace at $e$ is a complex subspace of $T_e M,$ i.e. invariant under the complex structure operator $J: T_e M\to T_e M.$ Let $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the Cartan decomposition, then $T_e (G/K)$ may be identified with $\mathfrak{p}$ via the projection onto the second summand and $J=\text{ad}(X_0)$ for a certain element $X_0$ in the center of $\mathfrak{k}$ (if $G$ is simple then this center is one-dimensional). Thus the answer is "yes" if and only if $$T_e(H/K\cap H)=\mathfrak{h}/\mathfrak{k}\cap\mathfrak{h} \subset \mathfrak{p} \text{ is } \text{ad}(X_0)\text{-invariant.}$$ up vote 2 Subalgebras $\mathfrak{h}$ with this property would need to be classified modulo conjugation by $K$. One can proceed a bit further by complexifying $\mathfrak{g}$, so that $\mathfrak{p}_ down vote \mathbb{C}=\mathfrak{p}_{+}\oplus\mathfrak{p}_{-}$ is the eigenspace decomposition for $J_{\mathbb{C}}$ (the eigenvalues are $\pm i$ and $\mathfrak{p}_{\pm}$ are abelian subalgebras of $ accepted \mathfrak{g}_{\mathbb{C}}$), keeping in mind that a complex Lie subalgebra of $\mathfrak{g}_\mathbb{C}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}$ is a complexification of a Lie subalgebra of $\mathfrak{g}$ if and only if it is invariant under the complex conjugation. If $H$ is a connected semisimple group of Hermitian type and $f:H\to G$ is an embedding, then after a suitable conjugation, the image of a maximal compact subgroup $L<H$ is contained in $K,$ $f(H)=(f(H)\cap K)(f(H)\cap P)$ and $f(H)$ mod $K$ is isomorphic to a Hermitian symmetric space $H/L,$ which is thus realized in $G/K.$ This allows one to construct many examples by starting with (1) a group $H$ as above and (2) a faithful complex representation of $H$ that preserves an indefinite unitary form, so that $G=SU(p,q)$ if the signature is $(p,q).$ A final elementary observation is that there always exist nontrivial subgroups $H$ that do not arise from the previous construction, for example, $NA$ from the Iwasawa decomposition $G= NAK$ projects onto $G/K.$ Thanks. For the explanation and the references. – Kobi Jul 20 '10 at 7:33 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/32512/complex-subvarieties-of-hermitian-symmetric-spaces","timestamp":"2014-04-18T23:29:46Z","content_type":null,"content_length":"53128","record_id":"<urn:uuid:312cc4e5-2ce9-4c38-8a23-3e222b47cdb6>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
family of functions- verticle asymptotes November 20th 2007, 11:11 AM family of functions- verticle asymptotes im stuck on a question! a family of functions is given by: a) for what vales of a and b does the graph of r have a verticle asymptote? where are the verticle asymptotes in this case? b) find values of a and b so that the function r has a local maximum at the point (3,5) any help would be greatly appreciated :) thanks :) November 20th 2007, 11:42 AM I'm not sure what you mean: $r(x)=\frac1a + (x-b)^2$ $r(x)=\frac{1}{a + (x-b)^2}$ November 20th 2007, 11:52 AM the second one! November 20th 2007, 11:56 AM Have you tried solving a + (x - b)^2 = 0? I think that would pin it down for you. November 20th 2007, 12:02 PM im stuck on a question! a family of functions is given by: a) for what vales of a and b does the graph of r have a verticle asymptote? where are the verticle asymptotes in this case? b) find values of a and b so that the function r has a local maximum at the point (3,5) A function has vertiacal asymptotes if the denominator equals zero: $a+(x-b)2=0~\iff~(x-b)^2=-a~ \iff~x=b+\sqrt{-a}~ \vee~x=b-\sqrt{-a}$ That means: you get no vertical asymptote if a > 0 you get exactly one vertical asymptote if a = 0 then the equation of the asymptote is : x = b you get 2 vertical asymptotes if a < 0. The equations of the asymptotes are: $x=b+\sqrt{-a}~ \vee~x=b-\sqrt{-a}$ to b): $r(x)=(a+(x-b)^2)^{-1}$ Use chain rule to derivate: $r'(x)=(-1)(a+(x-b)^2)^{-2} \cdot (2(x-b))$ Rearrange: $r'(x) = 0 ~\implies~2(x-b)=0~\iff~x=b$ From your problem you know that now b = 3. Calculate $r(3)=\frac{1}{a+(3-3)^2} = 5 ~\implies~a = \frac15$ November 20th 2007, 12:08 PM ahhhhh.... i see! thanks so much :)
{"url":"http://mathhelpforum.com/calculus/23202-family-functions-verticle-asymptotes-print.html","timestamp":"2014-04-19T15:10:09Z","content_type":null,"content_length":"8344","record_id":"<urn:uuid:dda3fc6b-5419-4971-b4d7-9ac7c320a9e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Pi with Machin's formula (Haskell) From LiteratePrograms Other implementations: Erlang | Haskell | Java | Lisp | Python [edit] Machin's formula A simple way to compute the mathematical constant π ≈ 3.14159 with any desired precision is due to John Machin. In 1706, he found the formula $\frac{\pi}{4} = 4 \, \arccot \,5 - \arccot \,239$ which he used along with the Taylor series expansion of the arc cotangent function, $\arccot x = \frac{1}{x} - \frac{1}{3 x^3} + \frac{1}{5 x^5} - \frac{1}{7 x^7} + \dots$ to calculate 100 decimals by hand. The formula is well suited for computer implementation, both to compute π with little coding effort (adding up the series term by term) and using more advanced strategies (such as binary splitting) for better speed. In order to obtain n digits, we will use fixed-point arithmetic to compute π × 10^n as a regular integer. [edit] High-precision arccot computation To calculate arccot of an argument x, we start by dividing the number 1 (represented by 10^n, which we provide as the argument unity) by x to obtain the first term. We then repeatedly divide by x^2 and a counter value that runs over 3, 5, 7, ..., to obtain each next term. The summation is stopped at the first zero term, which in this fixed-point representation corresponds to a real value less than 10^-n. arccot x unity = arccot' x unity 0 start 1 1 where start = unity `div` x arccot' x unity sum xpower n sign | xpower `div` n == 0 = sum | otherwise = arccot' x unity (sum + sign*term) (xpower `div` (x*x)) (n+2) (-sign) where term = xpower `div` n [edit] Applying Machin's formula Finally, the main function, which uses Machin's formula to compute π using the necessary level of precision (the name "pi" conflicts with the pre-defined value "pi" in Prelude): machin_pi digits = pi' `div` (10 ^ 10) where unity = 10 ^ (digits+10) pi' = 4 * (4 * arccot 5 unity - arccot 239 unity) Now we put it all together in a module: [edit] Running $ ghci GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help Loading package base ... linking ... done. Prelude> :l machin.hs [1 of 1] Compiling Main ( machin.hs, interpreted ) Ok, modules loaded: Main. *Main> machin_pi 100 hijacker hijacker
{"url":"http://en.literateprograms.org/Pi_with_Machin's_formula_(Haskell)","timestamp":"2014-04-20T05:42:16Z","content_type":null,"content_length":"25956","record_id":"<urn:uuid:9edcd581-aca3-42fe-b767-6f31aa51ea9c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Another calculus problems. December 15th 2010, 04:21 PM Another calculus problems. 1. Find the limit, if it exists, if it does not exist, explain why lim n to infinity (n!)^(1/n) Recall the n factorial is defined to be the product of all the integers between 1 and n inclusive. 2. You are driving through the remote Ontario wilderness, and notice that you are low on fuel. According to your map, which has a grid measuring a 1km scale drawn on it (ie, the distance from one grid line to the next represents 1km in the real world), you are at the point (0,1) and the road you are on appears to be described by the curve y= arcsin(x)+sqr root of (1-x^2) You have enough gas to drive for exactle half a km along this road, and there is a gas station at the end of the road, at the point (1, 1/2 pi) Do you make it to the gas station? The following approximation may be useful: 1/(sqr root of 2) approx equal to 0.707... 3. Given the equation dp/dt=kP-m and an initial population P(0) equal to P0. Solve the initial value problem for the population December 15th 2010, 05:09 PM 3. $\displaystyle \int\frac{dp}{kp-m}=\int dt\Rightarrow\frac{ln(kp-m)}{k}=t+c\Rightarrow ln(kp-m)=kt+c_2\Rightarrow$ $\displaystyle e^{ln(kp-m)}=c_3e^{kt}\Rightarrow kp-m=c_3e^{kt}\Rightarrow p(t)=\frac{c_3e^{kt}+m}{k}$ Solve for P(0)
{"url":"http://mathhelpforum.com/calculus/166352-another-calculus-problems-print.html","timestamp":"2014-04-20T17:20:51Z","content_type":null,"content_length":"5087","record_id":"<urn:uuid:13986517-b145-4e18-be16-ff54ca4ae9f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoboken, NJ Algebra Tutor Find a Hoboken, NJ Algebra Tutor ...I am extremely patient and look forward to helping students achieve their academic goals. I have a high math aptitude and scored above 700 on the math portion of the SAT in high school. After working for the last twenty years in the financial services industry in New York City, I am transitioning to the field of education. 6 Subjects: including algebra 2, algebra 1, French, ESL/ESOL ...I believe in incorporating both a whole word and phonics approach to Reading. I specialize in Math, which would provide a solid foundation for more difficult future Math courses. I believe I have an excellent approach to study skills. 41 Subjects: including algebra 1, physics, chemistry, algebra 2 ...My career in education began with teaching a self-contained special education class at an elementary school for 2 years. Since then, I have served as both the Special Education Coordinator and Director at a network of charter schools. I have just started a part-time MBA program, and am eager to set up a more flexible schedule that also allows me to return to working directly with 31 Subjects: including algebra 1, algebra 2, reading, Spanish I came from China and I hold a Mandarin certificate. I also got a full score in GRE math. I have extensive experience in teaching and tutoring. 20 Subjects: including algebra 2, algebra 1, calculus, precalculus ...To me, philosophy is a great door to many of life's most important questions: How does the world work? Why are things the way they are? Can you prove the existence of God? 18 Subjects: including algebra 1, reading, French, English Related Hoboken, NJ Tutors Hoboken, NJ Accounting Tutors Hoboken, NJ ACT Tutors Hoboken, NJ Algebra Tutors Hoboken, NJ Algebra 2 Tutors Hoboken, NJ Calculus Tutors Hoboken, NJ Geometry Tutors Hoboken, NJ Math Tutors Hoboken, NJ Prealgebra Tutors Hoboken, NJ Precalculus Tutors Hoboken, NJ SAT Tutors Hoboken, NJ SAT Math Tutors Hoboken, NJ Science Tutors Hoboken, NJ Statistics Tutors Hoboken, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/hoboken_nj_algebra_tutors.php","timestamp":"2014-04-17T04:16:15Z","content_type":null,"content_length":"23740","record_id":"<urn:uuid:041173c7-3061-42c4-a132-b792a96a492f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
Antonio Gulli's coding playground Random commentary about C++, STL, Boost, Perl, Python, Algorithms, Problem Solving and Web Search Tuesday, October 16, 2012 Given a matrix nxn Zero all the columns and rows which contains at least one zero. Pubblicato da codingplayground a 12:17 PM 1 comment: 1. for(int c=0; c < N; c++) { for(int r=0; r < N; r++) { if (A[r][c]==0) { markRow(A, r); markCol(A, c); } for(int c=0; c < N; c++) { for(int r=0; r < N; r++) { if (isMarked(A, r, c)) { A[r][c] = 0; } Now, the question is about what it means to mark a column/row. Some ideas: 1) Assuming a positive matrix, to mark a cell means to replace the content with it's negative value. Then a cell is marked if it's content is negative. 2) Assuming a matrix populated with references to Integer objects, mark a cell means set it to null. All these assumptions don't cover the case where the matrix contains any kind of primitive integer (positive or negative). In this case it is not possible to identify a "marker". An idea to cover this case is to use a support matrix of size NxN (let's call it B) that contains zeroes everywhere except in the cells corresponding to the cells in A that need to be be zeroed. The value of such cells in B will be equals to -1* A[r][c]. Then the solution is A = A + B. Of course this solution not only require O(n^2) in time but also O(n^2) in space... pretty inefficient! A second before to press the publish button I had this other idea which I think is good: List colsWithZeroes = new ArrayList(); for(int c=0; c < N; c++) { boolean foundZeroInRow = false; for(int r=0; r < N; r++) { if (A[r][c]==0) { foundZeroInRow = true; zeroPrevInRow(A, r, c); zeroPrevInCol(A, r, c); if (colsWithZeroes.contains(c) || foundZeroInRow) { A[r][c] = 0; zeroPrevInRow and zeroPrevInCol just set to zero the previous cells on the row or on the column. This algo is O(n^2). The main idea is to scan the matrix top-left to bottom-right. Every time a zero is found we say that a zero has been found on the current row. We zero all the *previous* elements on the current row and on the current column. We also add the current column to the list of columns containing a zero no matter the row. After these operations we move on the next element. If the current element is not zero, then either: 1) zero the element if we found a zero previously on the row 2) zero the element if the current column is contained in the list of previous columns containing a zero 3) we leave the element as is if none of the previous conditions apply
{"url":"http://codingplayground.blogspot.com/2012/10/given-matrix-nxn.html","timestamp":"2014-04-17T06:42:24Z","content_type":null,"content_length":"136048","record_id":"<urn:uuid:0faf8171-7261-4a8c-a82d-154feed54354>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Arbitrary decays for a viscoelastic equation In this paper, we consider the nonlinear viscoelastic equation , in a bounded domain with initial conditions and Dirichlet boundary conditions. We prove an arbitrary decay result for a class of kernel function g without setting the function g itself to be of exponential (polynomial) type, which is a necessary condition for the exponential (polynomial) decay of the solution energy for the viscoelastic problem. The key ingredient in the proof is based on the idea of Pata (Q Appl Math 64:499-513, 2006) and the work of Tatar (J Math Phys 52:013502, 2010), with necessary modification imposed by our problem. Mathematical Subject Classification (2010): 35B35, 35B40, 35B60 Viscoelastic equation; Kernel function; Exponential decay; Polynomial decay 1 Introduction It is well known that viscoelastic materials have memory effects. These properties are due to the mechanical response influenced by the history of the materials themselves. As these materials have a wide application in the natural sciences, their dynamics are of great importance and interest. From the mathematical point of view, their memory effects are modeled by an integro-differential equations. Hence, questions related to the behavior of the solutions for the PDE system have attracted considerable attention in recent years. Many authors have focused on this problem for the last two decades and several results concerning existence, decay and blow-up have been obtained, see [1-28] and the reference therein. In [3], Cavalcanti et al. studied the following problem where Ω ⊂ R^N, N ≥ 1, is a bounded domain with a smooth boundary ∂Ω, γ ≥ 0, if N ≥ 3 or ρ > 0 if N = 1, 2, and the function g: R^+ → R^+ is a nonincreasing function. This type of equations usually arise in the theory of viscoelasticity when the material density varies according to the velocity. In that paper, they proved a global existence result of weak solutions for γ ≥ 0 and a uniform decay result for γ > 0. Precisely, they showed that the solutions goes to zero in an exponential rate for γ > 0 and g is a positive bounded C^1-function satisfying for all t ≥ 0 and some positive constants ξ[1 ]and ξ[2]. Later, this result was extended by Messaoudi and Tatar [15] to a situation where a nonlinear source term is competing with the dissipation terms induced by both the viscoelasticity and the viscosity. Recently Messaoudi and Tatar [14] studied problem (1.1) for the case of γ = 0, they improved the result in [3] by showing that the solution goes to zero with an exponential or polynomial rate, depending on the decay rate of the relaxation function g. The assumptions (1.2) and (1.3), on g, are frequently encountered in the linear case (ρ = 0), see [1,2,4-6,13,22,23,29-31]. Lately, these conditions have been weakened by some researchers. For instance, instead of (1.3) Furati and Tatar [8] required the functions e^αt g(t) and e^αtg'(t) to have sufficiently small L^1-norm on (0, ∞) for some α > 0 and they can also have an exponential decay of solutions. In particular, they do not impose a rate of decreasingness for g. Later on Messaoudi and Tatar [21] improved this result further by removing the condition on g'. They established an exponential decay under the conditions g'(t) ≤ 0 and e^αt g(t) ∈ L^1(0, ∞) for some large α > 0. This last condition was shown to be necessary condition for exponential decay [7]. 
More recently Tatar [25] investigated the asymptotic behavior to problem (1.1) with ρ = γ = 0 when h(t)g(t) ∈ L^1(0, ∞) for some nonnegative function h(t). He generalized earlier works to an arbitrary decay not necessary of exponential or polynomial rate. Motivated by previous works [21,25], in this paper, we consider the initial boundary value problem for the following nonlinear viscoelastic equation: with initial conditions and boundary condition where Ω ⊂ R^N, N ≥ 1, is a bounded domain with a smooth boundary ∂Ω. Here ρ, p > 0 and g represents the kernel of the memory term, with conditions to be stated later [see assumption (A1)-(A3)]. We intend to study the arbitrary decay result for problem (1.4)-(1.6) under the weaker assumption on g, which is not necessarily decaying in an exponential or polynomial fashion. Indeed, our result will be established under the conditions g'(t) ≤ 0 and for some nonnegative function ξ(t). Therefore, our result allows a larger class of relaxation functions and improves some earlier results concerning the exponential decay or polynomial decay. The content of this paper is organized as follows. In Section 2, we give some lemmas and assumptions which will be used later, and we mention the local existence result in Theorem 2.2. In Section 3, we establish the statement and proof of our result related to the arbitrary decay. 2 Preliminary results In this section, we give some assumptions and lemmas which will be used throughout this work. We use the standard Lebesgue space L^p(Ω) and Sobolev space with their usual inner products and norms. Lemma 2.1. (Sobolev-Poincaré inequality) Let , the inequality holds with the optimal positive constant c[s], where || · ||[p ]denotes the norm of L^p(Ω). Assume that ρ satisfies With regards to the relaxation function g(t), we assume that it verifies (A1) g(t) ≥ 0, for all t ≥ 0, is a continuous function satisfying (A2) g'(t) ≤ 0 for almost all t > 0. (A3) There exists a positive nondecreasing function ξ(t): [0, ∞) → (0, ∞) such that is a decreasing function and Now, we state, without a proof, the existence result of the problem (1.4)-(1.6) which can be established by Faedo-Galerkin methods, we refer the reader to [3,5]. Theorem 2.2. Suppose that (2.1) and (A1) hold, and that . Assume , if N ≥ 3, p > 0, if N = 1, 2. Then there exists at least one global solution u of (1.4)-(1.6) satisfying Next, we introduce the modified energy functional for problem (1.4)-(1.6) Lemma 2.3. Let u be the solution of (1.4)-(1.6), then the modified energy E(t) satisfies Proof. Multiplying Eq. (1.4) by u[t ]and integrating it over Ω, then using integration by parts and the assumption (A1)-(A2), we obtain (2.6). Remark. It follows from Lemma 2.3 that the energy is uniformly bounded by E(0) and decreasing in t. Besides, from the definition of E(t) and (2, 2), we note that 3 Decay of the solution energy In this section, we shall state and prove our main result. For this purpose, we first define the functional where λ[i ]are positive constants, i = 1, 2, 3 to be specified later and Remark. This functional was first introduced by Tatar [25] for the case of ρ = 0 and without imposing the dispersion term and forcing term as far as (1.4) is concerned. The following Lemma tells us that L(t) and E(t) + Φ[3](t) are equivalent. Lemma 3.1. There exists two positive constants β[1 ]and β[2 ]such that the relation holds for all t ≥ 0 and λ[i ]small, i = 1, 2. Proof. By Hölder inequality Young's inequality Lemma 2.1, (2.7) and (2.2), we deduce that where . 
Therefore, from above estimates, the definition of E(t) by (2.4) and (2.2), we have where , , and . Hence, selecting λ[i ], i = 1, 2 such that and again from the definition of E(t), there exist two positive constants β[1 ]and β[2 ]such that To obtain a better estimate for , we need the following Lemma which repeats Lemma 2 in [25]. Lemma 3.2. For t ≥ 0, we have Proof. Straightforward computations yield this identity. Now, we are ready to state and prove our result. First, we introduce the following notations as in [24,25]. For every measurable set A ⊂ R^+, we define the probability measure by The flatness set and the flatness rate of g are defined by Before proceeding, we note that there exists t[0 ]> 0 such that since g is nonnegative and continuous. Theorem 3.3. Let be given. Suppose that (A1)-(A3), (2, 1) and the hypothesis on p hold. Assume further that , and with Then the solution energy of (1.4)-(1.6) satisfies where μ and K are positive constants. Proof. In order to obtain the decay result of E(t), it suffices to prove that of L(t). To this end, we need to estimate the derivative of L(t). It follows from (3.2) and Eq. (1.4) that which together with the identity (3.6) and (2.2) gives Next, we would like to estimate . Taking a derivative of Φ[2 ]in (3.3) and using Eq. (1.4) to get We now estimate the first two terms on the right-hand side of (3.11) as in [25]. Indeed, for all measure set A and F such that A = R^+ - F, we have To simplify notations, we denote Using Hölder inequality Young's inequality and (2.2), we see that, for δ[1 ]> 0, Thus, from the definition of by (3.8), (3.12) becomes The second term on the right-hand side of (3.11) can be estimated as follows (see [25]), for δ[2 ]> 0, Using Hölder inequality Young's inequality and (A2) to deal with the fifth term, for δ[3 ]> 0, Exploiting Hölder inequality Young's inequality Lemma 2.1 and (A2) to estimate the sixth term, for δ[4 ]> 0, For the last term, thanks to Hölder inequality Young's inequality Lemma 2.1, (2.7), (2.2) and (3.8), we have, for δ[5 ]> 0, where . Thus, gathering these estimates (3.13)-(3.17) and using (3.9), we obtain, for t ≥ t[0], Further, taking a derivative of Φ[3](t), using the fact that is a decreasing function and the definition of Φ[3](t) by (3.4), we derive that (see [25]) Hence, we conclude from (2.6), (3.10), (3.18) and (3.19) that for any t ≥ t[0 ]> 0, For , we consider the sets (see [24,25]) and observe that where F[g ]is given in (3.7) and N[g ]is the null set where g' is not defined. In addition, denoting F[n ]= R^+ - A[n], then because A[n ]are increasingly nested. Thus, choosing A = A[n], F = F[n ]and λ[1 ]= (g[* ]- ε) λ[2 ]for some ε > 0 in (3.20), we obtain At this point, we take and select λ[2 ]so that then (3.21) becomes For ε, δ[2 ]small enough and large value of n and t[0], we see that if Note that α > 0 and 0 < δ < 1 due to . Furthermore, we require λ[2 ]and λ[3 ]satisfying this is possible because of . Then, letting δ[1 ]be small enough and using (3.22), we see that Hence, from the definition of E(t) by (2.4), we have, for all t ≥ t[0], for some positive constant c[4]. As η(t) is decreasing, we have η(t) ≤ c[4 ]after some t[* ]≥ t[0]. Hence, with the help of the right hand side inequality in (3.5), we find for some positive constant c[5 ]> 0. 
An integration of (3.23) over (t[*], t) gives Then using the left hand side inequality in (3.5) leads to Therefore, by virtue of the continuity and boundedness of E(t) and ξ(t) on the interval [0, t[*]], we infer that for some positive constants K and μ. Similar to those remarks as in [25], we have the following remark. Remark. Note that there is a wide class of relaxation functions satisfying (A3). More precisely, if ξ(t) = e^αt, α > 0, then η(t) = α, this gives the exponential decay estimate , for some positive constants c[1 ]and c[2]. Similarly, if ξ(t) = (1 + t)^α , α > 0, then we obtain the polynomial decay estimate E (t) ≤ c[3 ](1 + t)^-μ, for some positive constants c[3 ]and μ. The authors would like to thank very much the anonymous referees for their valuable comments on this work. Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2011/1/28","timestamp":"2014-04-18T15:42:24Z","content_type":null,"content_length":"151761","record_id":"<urn:uuid:751b66e8-1829-452c-bec4-2ab08a39c48b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
The topic Optics is discussed in the following articles: discussed in biography • TITLE: Euclid (Greek mathematician)SECTION: Other writings Among Euclid’s extant works are the Optics, the first Greek treatise on perspective, and the Phaenomena, an introduction to mathematical astronomy. Those works are part of a corpus known as “the Little Astronomy” that also includes the Moving Sphere by Autolycus of Pitane. history of mathematics • TITLE: mathematicsSECTION: Applied geometry In optics, Euclid’s textbook (called the Optics) set the precedent. Euclid postulated visual rays to be straight lines, and he defined the apparent size of an object in terms of the angle formed by the rays drawn from the top and the bottom of the object to the observer’s eye. He then proved, for example, that nearer objects appear larger and appear to move faster and showed how to...
{"url":"http://www.britannica.com/print/topic/430551","timestamp":"2014-04-18T13:27:48Z","content_type":null,"content_length":"7358","record_id":"<urn:uuid:355e6756-1c9d-46d3-ae30-920b10fc8c20>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial Fraction Integration - Set up problems? 1. The problem statement, all variables and given/known data [tex]\int[/tex][tex]\frac{2x^{2}}{(x^{2}-1)}[/tex] dx 2. Relevant equations Partial Fractions 3. The attempt at a solution 2[tex]\int[/tex][tex]\frac{x^{2}}{(x^{2}-1)}[/tex] dx 2[tex]\int[/tex][tex]\frac{x^{2}}{(x-1)(x+1)}[/tex] dx [tex]\frac{A}{x-1}[/tex] + [tex]\frac{B}{x+1}[/tex] = [tex]\frac{x^{2}}{(x+1)(x-1)}[/tex] A(x+1) + B(x-1) = x[tex]^{2}[/tex] Solving for coefficients does not work because there is no x[tex]^{2}[/tex] term of A or B and also when solving for the x and the constant contradictory answers occur. Did I set this up wrong??
{"url":"http://www.physicsforums.com/showthread.php?t=258507","timestamp":"2014-04-17T03:53:03Z","content_type":null,"content_length":"27545","record_id":"<urn:uuid:3467915e-fcb6-4151-a67a-651f5c499979>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] solving linear equations in one variable???? July 23rd 2009, 06:25 AM #1 Jul 2009 New York [SOLVED] solving linear equations in one variable???? here are the problems i need help with. can u tell me the steps to solve these problems?? thank you! first problem: a(x+b)=c second problem: 9x+2a=-3a+4x Last edited by student897; August 31st 2009 at 05:40 AM. Hi student897! What do you mean? Solving for x? In this case 1) a(x+b) = c a*x+a*b = c minus a*b a*x = c-a*b divide by a (a is not equal to 0 ) $x = \frac{c-a*b}{a}$ $x = \frac{c}{a}-\frac{a*b}{a}$ $x = \frac{c}{a}-b$ 2) 9x+2a=-3a+4x minus 2a 9x = -3a+4x - 2a 9x = -3a-2a + 4x 9x = -5a + 4x minus 4x 9x-4x = -5a 5x = -5a divide by 5 x = -a Do you understand? i dont like math its always been my worst subjectt! but thnx for the help i sort of get it noww thnx i understand but i sort of did it in a different way ... the way they taught me in school much less complicated i think.. but thnxxx i understand now " divide by a (a is not equal to 0 ) " This is extremely important and a cause for many errors when solving equations. That being said, the trick is to try to get X alone on one side. Divide, add, remove as much as you have to until it's just X = ..... Might be helpful to repeat some of the basic rules; $<br /> a(b + c) = a \cdot b + a \cdot c<br />$ $<br /> d(e - f) = d \cdot e - d \cdot f<br />$ Nice work The basic idea is to "undo" whatever has been done to x. In the first problem, I see that two things have been done to x: first b is added to it, then that sum is multiplied by a. We can "undo" that by doing the opposite, in the opposite order: The opposite of "add b" is to subtract b and the opposite of "multiply by a" is to divide by a. And, since I do this in the opposite order, I first divide by a then subtract b. And, of course, whatever I do on one side of the equation, I do on the other to keep them "balanced". Starting from a(x+ b)= c and dividing both sides by a, a(x+b)/a= c/a or x+ b= c/a since "a/a"= 1. Subtracting b from both sides, x+ b- b= c/a- b or x= c/a- b since b- b= 0. You may notice that this is not exactly what Rapha did. He chose to "multiply" out a(x+b)= ax+ ab first. But the answer is the same, of course. The second, 9x+2a=-3a+4x, is slightly more complicated because it has "x" on both sides. We can fix that by getting rid of the "x" on the right- subtract 4x from both sides: 9x+ 2a- 4x= -3a+ 4x- 4x is the same as 5x+ 2a= -3a. Now that has x multiplied by 5 and then 2a added. The opposite of that is "subtract 2a" and then "divide by 5": 5x+ 2a- 2a= -3a- 2a is the same as 5x= -5a. Now divide both sides by 5: 5x/5= -5a/5 is the same as x= -a. thnx alot for the help now i think i understand it much better !!!i did it the way u said but at the end i ended up just putting 5x=-5a and then i didnt know what to pput! July 23rd 2009, 06:38 AM #2 Senior Member Nov 2008 July 23rd 2009, 06:49 AM #3 Jul 2009 New York August 2nd 2009, 09:37 AM #4 Jul 2009 New York August 5th 2009, 07:10 AM #5 Aug 2009 August 24th 2009, 10:10 PM #6 Aug 2009 August 25th 2009, 07:00 AM #7 MHF Contributor Apr 2005 August 28th 2009, 05:23 AM #8 Jul 2009 New York
{"url":"http://mathhelpforum.com/algebra/95885-solved-solving-linear-equations-one-variable.html","timestamp":"2014-04-17T07:22:07Z","content_type":null,"content_length":"52183","record_id":"<urn:uuid:58e5e70e-a4c9-43a3-9668-7825660c314d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Ideals in the recursively enumerable degrees "... The set A is low for Martin-Lof random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(n)+O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kucera [2 ..." Cited by 79 (21 self) Add to MetaCart The set A is low for Martin-Lof random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(n)+O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kucera [2], showing that each low for Martin-Lof random set is # 2 . Our class induces a natural intermediate # 3 ideal in the r.e. Turing degrees (which generates the whole class under downward closure). Answering "... The biinterpretability conjecture for the r.e. degrees asks whether, for each sufficiently large k, the # k relations on the r.e. degrees are uniformly definable from parameters. We solve a weaker version: for each k >= 7, the k relations bounded from below by a nonzero degree are uniformly definabl ..." Cited by 34 (13 self) Add to MetaCart The biinterpretability conjecture for the r.e. degrees asks whether, for each sufficiently large k, the # k relations on the r.e. degrees are uniformly definable from parameters. We solve a weaker version: for each k >= 7, the k relations bounded from below by a nonzero degree are uniformly definable. As applications, we show that...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1562474","timestamp":"2014-04-17T01:39:03Z","content_type":null,"content_length":"13925","record_id":"<urn:uuid:d6f06221-61f7-4544-920d-f3ddc05a0134>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Ejection Charge Sizing Derivation of Table for Estimated Ejection Charge Size First we assume the entire mass of the ejection charge is burned and converted to a gas. Next from basic chemistry we use the ideal gas law equation: PV = NRT The constants for 4F black powder are: R = 266 in-lbf/lbm T = 3307 degrees R (combustion temp) P = pressure in psi V = volume in cubic inches = pi*(D/2)^2L N = mass in pounds. (Note: 454 gm/lb) A good rule-of-thumb is to generally design for 15 psi pressure. If this is used as the design goal, then the ideal gas equation reduces to: N = 0.006*D^2L (grams) where D is the diameter in inches and L is the length in inches of the compartment in the rocket that is to be pressurized. N is the size of the ejection charge in grams. However, on large diameter rockets, 15 psi will probably generate too much force! For example, a 7.5-inch diameter rocket has 44 square inches of area on the end of it so 15 psi would produce over 15*44 = 660 pounds of force!! The amount of force needed for a large rocket is going to depend on a great many factors, but a reasonable limit is probably some where around 300-350 pounds. This is the same amount of force generated in a 5.5-inch rocket at 15 psi. We can refine our equations for large rockets by adding a limit on the force that is to be generated. The force F (in pounds) is given by: F = PA where P is the pressure in psi and A is the area in square inches. Since A = pi*(D/2)^2 we can combine this equation with the ideal gas law equation to get: N = 0.00052*FL (grams) This last equation tells us how many grams N of ejection charge to use to generate a specified force F in pounds for a given length L of pressurized compartment. What is interesting about this equation is that the diameter D is not present. It means that for large rockets the ejection charge size does not need to increase with body tube diameter. Using these equations I created a handy reference table for various body tube diameters. That table is the one listed above.
{"url":"http://www.vernk.com/EjectionChargeSizing.htm","timestamp":"2014-04-16T04:10:55Z","content_type":null,"content_length":"62337","record_id":"<urn:uuid:7126168e-5e76-4dee-891d-1b17508c5d09>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
[undergrad further calculus] Partial differentiation - stationary points (self.learnmath) submitted ago by Waving_from_heights sorry, this has been archived and can no longer be voted on Just having trouble understanding two small segments of a past papers work-through. If you could explain how to attain the values of the underlined bit of the solution that would be great as i am currently quite flumexed. Here are the quetions and solutions In the first one i may just be having a brain fart but how the solution for y be the simple, i have pages of attempts trying to get y(y&2-1)(y^2-9) I just a large string of numbers The second one i am just not sure how you can just get the values of a, b, c and d. According to my books its partial differentiation stationary point which may involve the taylor series (in which case i have messed up more than i oringinally thought...) if further explanation is needed then let me know. all 5 comments [–]casact9211 point2 points3 points ago sorry, this has been archived and can no longer be voted on In the first one i may just be having a brain fart but how the solution for y be the simple, i have pages of attempts trying to get y(y^2 -1)(y^2 -9) I just a large string of numbers Set each factor equal to zero, solve for y. For the first factor, you get y=0. For the second factor, you get y^2 -1=0. Add 1 to both sides, take square roots, you get y=1 or y=-1. For the third factor, you get y^2 -9=0. Add 9 to both sides, take square roots, you get y=3 or y=-3. The second one i am just not sure how you can just get the values of a, b, c and d. I don't understand the question - are you asking how do compute the first partial derivatives? [–]casact9211 point2 points3 points ago sorry, this has been archived and can no longer be voted on Yeah, for the second part i am asking how to compute the partial derivatives. Ok. To find the first partial with respect to y, treat y as a variable and the rest of the unknowns as constants, then differentiate. You'll have to use the product rule. You should get: [; f_y = (-4y)e^{-(x^2+y^2)/a^2}+(x^2-2y^2)e^{-(x^2+y^2)/a^2}(-2y/a^2) ;] Factor this to get [; e^{-(x^2+y^2)/a^2}(-2y)(2+(x^2-2y^2)/a^2) ;] The exponential term is never negative, so set the other two terms equal to zero, and you'll get the two conditions listed in parts (c) and (d) of your paper.
{"url":"http://www.reddit.com/r/learnmath/comments/140htq/undergrad_further_calculus_partial/","timestamp":"2014-04-21T02:28:45Z","content_type":null,"content_length":"63644","record_id":"<urn:uuid:5875fa9b-319f-4e10-b6a1-5addade258c5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
The Optimal Degree of Discretion in Monetary Policy The Optimal Degree of Discretion in Monetary Policy International Finance Discussion Papers numbers 797-807 were presented on November 14-15, 2003 at the second conference sponsored by the International Research Forum on Monetary Policy sponsored by the European Central Bank, the Federal Reserve Board, the Center for German and European Studies at Georgetown University, and the Center for Financial Studies at the Goethe University in Frankfurt. NOTE: International Finance Discussion Papers are preliminary materials circulated to stimulate discussion and critical comment. The views in this paper are solely the responsibility of the author and should not be interpreted as reflecting the views of the Board of Governors of the Federal Reserve System or any other person associated with the Federal Reserve System. References in publications to International Finance Discussion Papers (other than an acknowledgment that the writer has had access to unpublished material) should be cleared with the author or authors. Recent IFDPs are available on the Web at http://www.federalreserve.gov/pubs/ifdp/. This paper can be downloaded without charge from the Social Science Research Network electronic library at http:// How much discretion should the monetary authority have in setting its policy? This question is analyzed in an economy with an agreed-upon social welfare function that depends on the economy's randomly fluctuating state. The monetary authority has private information about that state. Well-designed rules trade off society's desire to give the monetary authority discretion to react to its private information against society's need to prevent that authority from giving in to the temptation to stimulate the economy with unexpected inflation, the time inconsistency problem. Although this dynamic mechanism design problem seems complex, its solution is simple: legislate an inflation cap. The optimal degree of monetary policy discretion turns out to shrink as the severity of the time inconsistency problem increases relative to the importance of private information. In an economy with a severe time inconsistency problem and unimportant private information, the optimal degree of discretion is none. Keywords: Rules vs. discretion, time inconsistency, optimal monetary policy, inflation targets, inflation caps JEL Classification: E5, E6, E52, E58, E61 Suppose that society can credibly impose on the monetary authority rules governing the conduct of monetary policy. How much discretion should be left to the monetary authority in setting its policy? The conventional wisdom from policymakers is that optimal outcomes can be achieved only if some discretion is left in the hands of the monetary authority. But starting with Kydland and Prescott (1977), most of the academic literature has contradicted that view. In summarizing this literature, Taylor (1983) and Canzoneri (1985) argue that when the monetary authority does not have private information about the state of the economy, the debate is settled: there should be no discretion; the best outcomes can be achieved by rules that specify the action of the monetary authority as a function of observables. The unsettled question in this debate is Canzoneri's: What about when the monetary authority does have private information? What, then, is the optimal degree of monetary policy discretion? 
To answer this question, we use a model of monetary policy similar to that of Kydland and Prescott (1977) and Barro and Gordon (1983). In our legislative approach to monetary policy, we suppose that society designs the optimal rules governing the conduct of monetary policy by the monetary authority. The model includes an agreed-upon social welfare function that depends on the random state of the economy. We begin with the assumption that the monetary authority observes the state and individual agents do not. In the context of our model, we say that the monetary authority has discretion if its policy is allowed to vary with its private information.^2 The assumption of private information creates a tension between discretion and time inconsistency.^3 Tight constraints on discretion mitigate the time inconsistency problem in which the monetary authority is tempted to claim repeatedly that the current state of the economy justifies a monetary stimulus to output. However, tight constraints leave little room for the monetary authority to fine tune its policy to its private information. Loose constraints allow the monetary authority to do that fine tuning, but they also allow more room for the monetary authority to stimulate the economy with surprise inflation. We find the constraints on monetary policy that, in the presence of private information, optimally resolve this tension between discretion and time inconsistency. Formally, we cast this problem as a dynamic mechanism design problem. Canzoneri (1985) conjectures that because of the dynamic nature of the problem, the resulting optimal mechanism with regard to monetary policy is likely to be quite complex. We find that, in fact, it is quite simple. For a broad class of economies, the optimal mechanism is static and can be implemented by setting an inflation cap, an upper limit on the permitted inflation rate. More formally, our model can be described as follows. Each period, the monetary authority observes one of a continuum of possible privately observed states of the economy. These states are i.i.d. over time. In terms of current payoffs, the monetary authority prefers to choose higher inflation when higher values of this state are realized and lower inflation when lower values are realized. Here a mechanism specifies what monetary policy is chosen each period as a function of the history of the monetary authority's reports of its private information. We say that a mechanism is static if policies depend only on the current report by the monetary authority and dynamic if policies depend also on the history of past reports. Our main technical result is that, as long as a monotone hazard condition is satisfied, the optimal mechanism is static. We also give examples in which this monotone hazard condition fails, and the optimal mechanism is dynamic. We then show that our result on the optimality of a static mechanism implies that the optimal policy has one of two forms: either it has bounded discretion or it has no discretion. Under bounded discretion, there is a cutoff state: for any state less than this, the monetary authority chooses its static best response, which is an inflation rate that increases with the state, and for any state greater than this cutoff state the monetary authority chooses a constant inflation rate.
Under no discretion, the monetary authority chooses some constant inflation rate regardless of its private information. We then show that we can implement the optimal policy as a repeated static equilibrium of a game in which the monetary authority chooses its policy subject to an inflation cap and in which individual agents' expectations of future inflation do not vary with the monetary authority's policy choice. In general, the inflation cap would vary with observable states, but to keep the model simple, we abstract from observable states, and the inflation cap is a single number. Depending on the realization of the private information, sometimes the cap will bind, and sometimes it will not. These results imply that the optimal constraints on discretion take the form of an inflation cap: the monetary authority is allowed to choose any inflation rate below this cap, but cannot choose one above it. We say that a given inflation cap implies less discretion than another cap if it is more likely to bind. We show that the optimal degree of discretion for the monetary authority is smaller in an economy the more severe the time inconsistency problem is and the less important private information is. It is immediate that we can equivalently implement the optimal policy by choosing a range of acceptable inflation rates. The optimal range will decrease as the time inconsistency problem becomes more severe relative to the importance of private information. Here the rationale for discretion clearly depends in a critical way on the monetary authority having some private information that the other agents in the economy do not have. Of course, if the amount of such private information is thought to be very small in actual economies, relative to time inconsistency problems, then our work argues that in such economies the logical case for a sizable amount of discretion is weak, and the monetary authority should follow a rather tightly specified rule. One interpretation of our work is that we solve for the optimal inflation targets. As such, our work is related to the burgeoning literature on inflation targeting. (See the work of Cukierman and Meltzer (1986), Bernanke and Woodford (1997), and Faust and Svensson (2001), among many others.) In terms of the practical application of inflation targets, Bernanke and Mishkin (1997) discuss how inflation targets often take the form of ranges or limits on acceptable inflation rates similar to the ranges we derive. Indeed, our work here provides one theoretical rationale for the type of constrained discretion advocated by Bernanke and Mishkin. Here we have assumed that the monetary authority maximizes the welfare of society. As such, the monetary authority is viewed as the conduit through which society exercises its will. An alternative approach is to view the monetary authority as an individual or an organization motivated by concerns other than that of society's well-being. If, for example, the monetary authority is motivated in part by its own wages, then, as Walsh (1995) has shown, the full-information, full-commitment solution can be implemented. Hence, with such a setup, monetary policy has no binding incentive problems to begin with. As Persson and Tabellini (1993) note, there are many reasons such contracts are either difficult or impossible to implement, and the main issue for research following this approach is why such contracts are, at best, rarely used. Our work is related to several other literatures. One is some work on private information in monetary policy games.
See, for example, that of Backus and Driffill (1985); Ireland (2000); Sleet (2001); Da Costa and Werning (2002); Angeletos, Hellwig, and Pavan (2003); Sleet and Yeltekin (2003); and Stokey (2003). The most closely related of these is the work of Sleet (2001), who considers a dynamic general equilibrium model in which the monetary authority sees a noisy signal about future productivity before it sets the money growth rate. Sleet finds that, depending on parameters, the optimal mechanism may be static, as we find here, or it may be dynamic. Our work is also related to a large literature on dynamic contracting. Our result on the optimality of a static mechanism is quite different from the typical result in this literature, that static mechanisms are not optimal. (See, for example, Green (1987), Atkeson and Lucas (1992), and Kocherlakota (1996).) We discuss the relation between our work and these literatures in more detail after we present our results. At a technical level, we draw heavily on the literature on recursive approaches to dynamic games. We use the technique of Abreu, Pearce, and Stacchetti (1990), which has been applied to monetary policy games by Chang (1998) and is related to the policy games studied by Phelan and Stacchetti (2001), Albanesi and Sleet (2002), and Albanesi, Chari, and Christiano (2003). The mechanism design problem that we study is related, at an abstract level, to some work on supporting collusive outcomes in cartels by Athey, Bagwell, and Sanchirico (2004), work on risk-sharing with nonpecuniary penalties for default by Rampini (forthcoming), and work on the tradeoff between flexibility and commitment in savings plans for consumers with hyperbolic discounting by Amador, Werning, and Angeletos (2004). However, our paper is both substantively and technically quite different from those. We discuss the details of the relation after we present our results. 1 The Economy A The Model Here we describe our simple model of monetary policy. The economy has a monetary authority and a continuum of individual agents. The time horizon is infinite, with periods indexed by At the beginning of each period, agents choose individual action from some compact set. We interpret as (the growth rate of) an individual's nominal wage and let denote the (growth of the) average nominal wage. Next, the monetary authority observes the current realization of its private information about the state of the economy. This private information is an i.i.d., mean 0 random variable with support , with a strictly positive density and a distribution function . Given this private information , referred to as the state, the monetary authority chooses money growth in some large compact set The monetary authority maximizes a social welfare function that depends on the average nominal wage growth the monetary growth rate , and a privately observed state . We interpret to be private information of the monetary authority regarding the impact of a monetary stimulus on social welfare in the current period. Throughout, we assume that is strictly concave in and twice continuously A leading interpretation of the private information in our economy follows that of Sleet and Yeltekin (2003) and Sleet (2004). Individual agents in the economy have either heterogeneous preferences or heterogeneous information regarding the optimal inflation rate, and the monetary authority sees an aggregate of that information which the private agents do not see. 
(Informally, we imagine this private information takes resources to acquire, so that while agents in the economy feasibly can acquire the information, the costs involved in doing so outweigh the benefits.) When we pose our optimal policy problem as a mechanism design problem, we are presuming that the mechanism designer is a separate agent with no independent information of its own. We interpret the society's objective as a weighted average of the preferences of the heterogeneous agents. As a benchmark example, we use this function: We interpret (1) as the reduced form that results from a monetary authority which maximizes a social welfare function that depends on unemployment, inflation, and the monetary authority's private information . Each period, inflation is equal to the money growth rate chosen by the monetary authority. Unemployment is determined by a Phillips curve. The unemployment rate is given by where is a positive constant, which we interpret as the natural rate of unemployment. In (1), is a weight on the private information. Social welfare in period is a function of and and the state Our benchmark example is derived from a quadratic objective function of the form which is similar to that used by Kydland and Prescott (1977) and Barro and Gordon (1983). Using (2) and in (3), we obtain (1). Here the monetary authority's private information is about the social cost of inflation, but we develop our model for general specifications of the social welfare function which subsume (1) as a special case. Notice that in our general formulation, we allow the current payoff to vary with expected inflation, through ; with actual inflation, through ; and with the state . This formulation thus subsumes many other versions of the Kydland-Prescott and Barro-Gordon models in the literature.^4 Throughout, a policy for the monetary authority in any given period, denoted specifies the money growth rate for each level of the state For any we define the static best response to be the policy that solves ( ) We assume that if then B Two Ramsey Benchmarks Before we analyze the economy in which the monetary authority has private information, we consider two alternative economies. The optimal policies in these economies are useful as benchmarks for the optimal policy in the private information economy. One benchmark, the Ramsey policy, denoted yields the highest payoff that can be achieved in an economy with full information. The gap between that Ramsey payoff and the payoff in the economy with private information measures the welfare loss due to private information. The other benchmark, the expected Ramsey policy, denoted yields the highest payoff that can be achieved when the policy is restricted to not depend on private information. In our environment, there is no publicly observed shock to the economy; hence, this policy is a constant. The expected Ramsey policy is a useful benchmark because it is the best policy that can be achieved by a rule which specifies policies as a function only of observables. This policy is analogous to the strict targeting rule discussed by Canzoneri (1985). For the Ramsey policy benchmark, consider an economy with full information with the following timing scheme. Before the state is realized, the monetary authority commits to a schedule for money growth rates . Next, individual agents choose their nominal wages with associated average nominal wages Then the state is realized, and the money growth rate is implemented. 
The optimal allocations and policies in this economy solve the Ramsey problem: subject to For our example (1), the Ramsey policy is Note that the Ramsey policy has the monetary authority choosing a money growth rate which is increasing in its private information. Thus, with full information, it is optimal to have the monetary authority fine tune its policy to the state of the economy. This feature of the environment leads to a tension in the economy with private information between allowing the monetary authority discretion for fine tuning and experiencing the resulting time inconsistency problem. For the other benchmark, consider an economy in which the monetary authority is restricted to choosing money growth that does not vary with its private information. The equilibrium allocations and policies in the economy with these constraints solve the expected Ramsey problem: subject to For our example (1), the expected Ramsey policy is For our example (1), the Ramsey policy obviously yields strictly higher welfare than does the expected Ramsey policy. More generally, when the Ramsey policy is strictly increasing in and yields strictly higher welfare than does the expected Ramsey policy. C The Dynamic Mechanism Design Problem To analyze the problem of finding the optimal degree of discretion, we use the tools of dynamic mechanism design. Without loss of generality, we formulate the problem as a direct revelation game. In this problem, society specifies a monetary policy, the money growth rate as a function of the history of the monetary authority's reports of its private information about the state of the economy. Given the specified monetary policy, the monetary authority chooses a strategy for reporting its private information. Individual agents choose their wages as functions of the history of reports of the monetary authority. A monetary policy in this environment is a sequence of functions , where specifies the money growth rate that will be chosen in period following the history of past reports together with the current report The monetary authority chooses a reporting strategy all , in period where is the current realization of private information and is the reported private information in As is standard, we restrict attention to public strategies, those that depend only on public histories and the current private information, not on the history of private information.^5 Also, from the Revelation Principle, we need only restrict attention to truth-telling equilibria, in which for all and In each period, each agent chooses the action as a function of the history of reports Since agents are competitive, the history need not include either agents' individual past actions or the aggregate of their past actions.^6 Each agent chooses nominal wage growth equal to expected inflation. For each history with monetary policy given, agents set equal to expected inflation: where we have used the fact that agents expect the monetary authority to report truthfully, so that . Aggregate wages are defined by The optimal monetary policy maximizes the discounted sum of social welfare: where the future histories are recursively generated from the choice of monetary policy in the natural way, starting from the null history. The term normalizes the discounted payoffs to be in the same units as the per-period payoffs. 
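To make the two Ramsey benchmarks just defined concrete before turning to the mechanism design problem, the following sketch computes both numerically. Because equation (1) is not reproduced above, the quadratic payoff R(x, mu, theta) = -0.5[(U* - (mu - x))^2 + (mu - A*theta)^2] with Phillips curve U = U* - (mu - x), the parameter values, and the uniform distribution for theta are all illustrative assumptions in the spirit of the Kydland-Prescott and Barro-Gordon objective, not necessarily the paper's exact specification.

    import numpy as np
    from scipy.optimize import minimize, minimize_scalar

    Ustar, A = 1.0, 0.5                  # natural rate and weight on private information (assumed)
    thetas = np.linspace(-1.0, 1.0, 41)  # discretized support of theta, mean zero (assumed)
    probs = np.full_like(thetas, 1.0 / len(thetas))  # uniform density (assumed)

    def R(x, mu, theta):
        # per-period social welfare given wage growth x, money growth mu, state theta
        U = Ustar - (mu - x)             # assumed Phillips curve: surprise inflation lowers unemployment
        return -0.5 * (U ** 2 + (mu - A * theta) ** 2)

    def neg_ramsey_welfare(mu_vec):
        x = probs @ mu_vec               # wages equal expected inflation, internalized under commitment
        return -(probs @ R(x, mu_vec, thetas))

    ramsey = minimize(neg_ramsey_welfare, np.zeros_like(thetas), method="BFGS")
    exp_ramsey = minimize_scalar(lambda m: -(probs @ R(m, m, thetas)))

    print("slope of Ramsey policy in theta:", np.polyfit(thetas, ramsey.x, 1)[0])
    print("expected Ramsey (constant) policy:", round(exp_ramsey.x, 4))
    print("welfare gain from responding to theta:", -ramsey.fun - (-exp_ramsey.fun))

Under these assumptions the Ramsey policy responds to theta while the expected Ramsey policy is a constant, and the printed welfare gap measures the value of fine tuning policy to the private information.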
A perfect Bayesian equilibrium of this revelation game is a monetary policy, a reporting strategy, a strategy for wage-setting by agents and average wages such that (6) is satisfied in every period following every history average wages equal individual wages in that , and the monetary policy is incentive-compatible in the standard sense that, in every period, following every history and realization of the private information the monetary authority prefers to report rather than any other value Note that since average wages always equal wages of individual agents we need only record average wages from now on. Note that this definition of a perfect Bayesian equilibrium includes no notion of optimality for society. Instead, it simply requires that in response to a given monetary policy, private agents respond optimally and truth-telling for the monetary authority is incentive-compatible. The set of perfect Bayesian equilibria outcomes is the set of incentive-compatible outcomes that are implementable by some monetary policy. The mechanism design problem is to choose a monetary policy, a reporting strategy, and a strategy for average wages, the outcomes of which maximize social welfare (7) subject to the constraint that these strategies are incentive-compatible. D A Recursive Formulation Here we formulate the problem of characterizing the solution to this mechanism design problem recursively. The repeated nature of the model implies that the set of incentive-compatible payoffs that can be obtained from any period on is the same that can be obtained from period Thus, the payoff from any incentive-compatible outcome for the repeated game can be broken down into payoffs from current actions for the players and continuation payoffs that are themselves drawn from the set of incentive-compatible payoffs. Following this logic, Abreu, Pearce, and Stacchetti (1990) show that the set of incentive-compatible payoffs can be found using a recursive method that we exploit here. In our environment, this recursive method is as follows. Consider an operator on sets of the following form. Let be some compact subset of the real line, and let be the largest element of . The set may be interpreted as a candidate set of incentive-compatible levels of social welfare. In our recursive formulation, the current actions are average wages and a report for every realized value of the state For each possible report there is a corresponding continuation payoff that represents the discounted utility for the monetary authority from the next period on. Clearly, these continuation payoffs cannot vary directly with the privately observed state We say that the actions and and the continuation payoff are enforceable by if and the incentive constraints are satisfied for all and all , where Constraint (8) requires that each continuation payoff be drawn from the candidate set of incentive-compatible payoffs while constraint (9) requires that average wages equal expected inflation. 
Constraint (10) requires that for each privately observed state the monetary authority prefer to report the truth rather than any other message That is, the monetary authority prefers the money growth rate and the continuation value rather than a money growth rate and its corresponding continuation value The payoff corresponding to and is Define the operator that maps a set of payoffs into a new set of payoffs as As demonstrated by Abreu, Pearce, and Stacchetti (1990), the set of incentive-compatible payoffs is the largest set that is a fixed point of this operator: For any given candidate set of incentive-compatible payoffs we are interested in finding the largest payoff that is enforceable by or the largest element We find this payoff by solving the following problem, termed the best payoff problem: subject to the constraint that , and are enforceable by , in that they satisfy (8)-(10). Throughout, we assume that is a piecewise, continuously differentiable function. The best payoff problem is a mechanism design problem of choosing an incentive-compatible allocation which maximizes utility. Following the language of mechanism design, we now refer to as the type of the monetary authority, which changes every period. When we solve this problem with (13) implies that the resulting payoff is the highest incentive-compatible payoff. We will prove our main result in Proposition 1 for any Hence, we will not have to explicitly solve the fixed-point problem of finding Moreover, to prove our main result, we also need focus only on the best payoff problem, which gives the highest payoff that can be obtained from period 0 onward. For completeness, however, notice that given some from the best payoff problem, a period policy and continuation value, and that satisfy exist by the definition of Equation ( and its analog for other periods are sometimes referred to as a promise-keeping constraint. In our approach, we do not need to mention this constraint since it is built into the definition of the operator 2 Characterizing the Optimal Mechanism Now we solve the best payoff problem and use the solution to characterize the optimal mechanism. Our main result here is that under two simple conditions, a single-crossing condition and a monotone hazard condition, the optimal mechanism is static. To highlight the importance of the monotone hazard condition for this result, we discuss in an appendix three examples which show that if the monotone hazard condition is violated, the optimal mechanism is dynamic. A Preliminaries We begin with some definitions. In our recursive formulation, we say that a mechanism is static if the continuation value for (almost) all We say that a mechanism is dynamic if for some set of which is realized with strictly positive probability. Our characterization of the solution to the best payoff problem does not depend on the exact value of Hence, to simplify the notation, we suppress explicit dependence on and think of the term as being subsumed in the function and as being subsumed in the function. We assume that the preferences are differentiable and satisfy a standard single-crossing assumption, that This implies that higher types of monetary authority have a stronger preference for current inflation. Standard arguments can be used to show that the static best response is strictly increasing in Under the single-crossing assumption (A1), a standard lemma lets us replace the global incentive constraints (10) with some local versions of them. 
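As a concrete illustration of the incentive constraints (10), the sketch below checks global incentive compatibility of a candidate allocation on a discretized type grid: no type may prefer the money growth rate and continuation value assigned to any other report. The quadratic payoff, parameters, cap, and constant continuation value are illustrative assumptions carried over from the sketch above; feasibility of the continuation values (constraint (8)) is not checked here.

    import numpy as np

    beta = 0.9                            # assumed discount factor
    Ustar, A, cap = 1.0, 0.5, 0.3         # assumed parameters and inflation cap
    thetas = np.linspace(-1.0, 1.0, 41)

    def R(x, mu, theta):
        U = Ustar - (mu - x)              # assumed Phillips curve
        return -0.5 * (U ** 2 + (mu - A * theta) ** 2)

    # a candidate static allocation: capped static best response, constant continuation value
    x = 0.0
    for _ in range(200):                  # constraint (9): wages equal expected inflation
        mu = np.minimum((Ustar + x + A * thetas) / 2.0, cap)  # static best response of the assumed payoff, truncated at the cap
        x = mu.mean()                     # uniform weights stand in for the density of theta
    v = np.zeros_like(thetas)             # static mechanism: continuation value does not vary with the report

    def is_incentive_compatible(mu, v, x, tol=1e-9):
        # payoff[i, j] = current payoff plus discounted continuation value when a type
        # theta_i reports theta_j and therefore receives mu[j] and v[j]
        payoff = R(x, mu[None, :], thetas[:, None]) + beta * v[None, :]
        truthful = np.diag(payoff)
        return bool(np.all(truthful + tol >= payoff.max(axis=1)))

    print("globally incentive-compatible:", is_incentive_compatible(mu, v, x))

Under the single-crossing assumption, checking the local (adjacent-report) constraints would suffice, which is what the standard lemma referenced above exploits.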
We say that an allocation is locally incentive-compatible if it satisfies three conditions: is nondecreasing in ; wherever and exist; and for any point at which these derivatives do not exist, Standard arguments give the following result: under the single-crossing assumption (A1), the allocation ( ) satisfies the incentive constraints (10) if and only if the allocation is locally incentive-compatible. (See, for example, Fudenberg and Tirole's 1991 text.) Given any incentive-compatible allocation, we define the utility of the allocation at to be Local incentive-compatibility implies that is continuous and differentiable almost everywhere, with derivative ( ). Integrating from up to gives that while integrating from down to gives that With integration by parts, it is easy to show that for interval endpoints Using (18) and (20), we can write the value of the objective function as Next we make some joint assumptions on the probability distribution and the social welfare function. Assume that, for any action profile with nondecreasing, (A2a) () is strictly decreasing in (A2b) () is strictly increasing in We refer to assumptions (A2a) and (A2b) together as (A2) and, in a slight abuse of terminology, call them the monotone hazard condition. In our benchmark example (1), ( ), so that (A2) reduces to the standard monotone hazard condition familiar from the mechanism design literature, that be strictly decreasing and be strictly increasing. B Showing That the Optimal Mechanism Is Static Here we show that the optimal mechanism is static by proving this proposition: Proposition 1: Under assumptions (A1) and A2), the optimal mechanism is static. The approach we take in proving Proposition 1 is different from the standard approach used by Fudenberg and Tirole (1991, Chapter 7.3) for solving a mathematically related principal-agent problem. To motivate our approach, we first show why the standard approach does not work for our problem. We discuss the forces that lead to the failure of the standard approach here because these forces suggest a variational argument we use to prove Proposition 1. The best payoff problem can be written as follows: Choose to maximize social welfare subject to the constraints that is nondecreasing, and the continuation values defined by satisfy for all Alternatively, we can write the best payoff problem as choosing to maximize subject to the constraints and with the continuation values defined by satisfying for all The standard approach to solving either version of this problem is to guess that the analog of constraints and do not bind, take the corresponding first-order conditions of either version to find the implied and then verify that constraints and are in fact satisfied at that choice of If we take that approach here, it fails. The first-order conditions with respect to are for the first version of the best payoff problem and for the second version, where is the Lagrange multiplier on constraint . The solution to these first-order conditions (22) and (23), from the relaxed problem in which we have dropped constraints and implies a decreasing schedule. To see why, note, for example, that the left side of equation (22) is the increment to social welfare from marginally increasing at some particular and adjusting the continuation values for to preserve incentive-compatibility, while the right side is the cost in terms of welfare from raising expected inflation Under assumption (A2a), the benefits of raising are higher for low values of than for high values of . 
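The reduced-form version of the monotone hazard condition mentioned at the end of the paragraph above is easy to check numerically for a candidate distribution of theta. The sketch below does so for a few illustrative distributions; the grid and the distributions are assumptions chosen only to show that standard log-concave densities satisfy the condition while a U-shaped density can violate it.

    import numpy as np
    from scipy import stats

    grid = np.linspace(-0.99, 0.99, 399)          # interior of an assumed support [-1, 1]

    def satisfies_monotone_hazard(dist):
        p, P = dist.pdf(grid), dist.cdf(grid)
        upper = (1.0 - P) / p                     # (A2a) in reduced form: should be strictly decreasing
        lower = P / p                             # (A2b) in reduced form: should be strictly increasing
        return bool(np.all(np.diff(upper) < 0) and np.all(np.diff(lower) > 0))

    candidates = {
        "uniform on [-1, 1]": stats.uniform(loc=-1, scale=2),
        "truncated normal": stats.truncnorm(-2, 2, loc=0, scale=0.5),
        "U-shaped (arcsine)": stats.beta(0.5, 0.5, loc=-1, scale=2),
    }
    for name, dist in candidates.items():
        print(name, "->", satisfies_monotone_hazard(dist))

Appendix B's examples of dynamic optimal mechanisms are built precisely from cases in which checks like these fail.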
Thus, in the relaxed problem, it is optimal to have a downward-sloping schedule Similar logic applies to (23). Clearly, then, the solution to the relaxed problem violates at least one of the dropped constraints or , and hence, we cannot use this standard approach. We also cannot use the ironing approach designed to deal with cases in which the monotonicity constraint binds, because in our problem, the constraint that binds is constraint , which is not dealt with in that approach. Instead, in the proof of Proposition 1 that follows, we use a variational argument to show that constraint binds for all at the solution to the best payoff problem. (We discuss below the reason our model differs from others in the literature.) Before proving Proposition 1, we sketch our basic argument. Our discussion of the first-order conditions of the relaxed problem (22) and (23) suggests that given any strictly increasing schedule, a variation that flattens this schedule will improve welfare if it is feasible in the sense that the associated continuation value satisfies constraint Our proof of Lemma 1 formalizes this logic. Our objective is to show that the optimal continuation value is constant at We prove this by contradiction. We start with the observation that is piecewise-differentiable since is piecewise-differentiable and (16) holds. We first show that must be a step function. If not, there is some interval over which is nonzero, and hence, from local incentive-compatibility, is strictly increasing. In Lemma 2, we show that a variation that flattens over that interval is feasible. From Lemma 1, we know it is welfare-improving. We next show that must be continuous, and since it is a step function, it must be constant. We prove this by showing that if either or are discontinuous at some point then (17) implies that must be increasing in the sense that it jumps up at that point. In Lemma 3, we show that a variation that flattens in a neighborhood of that point is feasible, and again from Lemma 1, we know that it is It is convenient in the proof of Proposition 1 to use a definition of increasing on an interval which covers the cases we will deal with in Lemmas 2 and 3. This definition subsumes the case of Lemma 2 in which for some interval and the case of Lemma 3 in which jumps up at We say that is increasing on if is weakly increasing on this interval and there is some in this interval such that for and for , where is the conditional mean of on this interval, namely, In words, on this interval, the function is weakly increasing and is strictly below its conditional mean up to and strictly above its conditional mean after ^7 Throughout, we will also say that the policy is flat at some particular point if the derivative exists and equals zero at that point. Consider now some dynamic mechanism ( ) in which the policy is increasing on some interval, say, In our variation, we marginally move the function toward its conditional mean on this interval and adjust the continuation values to preserve incentive-compatibility. In particular, our variation moves our original policy marginally toward a policy defined by This policy differs from the original policy only on the interval and there the original policy is replaced by the conditional mean of the original policy over the interval. Clearly, the expected inflation under is the same as the expected inflation under the original policy. We let ( ) and denote our variation and the associated utility. 
The policy in our variation is a convex combination of the policy and the original policy and is defined by for (For a graph of , see Figure 1.) Clearly, the expected inflation in our variation equals that of the original allocation for all The delicate part of the variation is to construct the continuation value so as to satisfy the feasibility constraint for all in addition to incentive-compatibility. It turns out that we can ensure feasibility if we use one of two ways to adjust continuation values. In the up variation, we leave the continuation values unchanged below and pass up any changes induced by our variation in the policy to higher types by suitably adjusting the continuation values to maintain incentive-compatibility. In the down variation, we leave the continuation values unchanged above and pass down any changes induced by our variation in the policy to lower types by suitably adjusting the continuation values to maintain incentive-compatibility. In the up variation, we determine the continuation values by substituting ( ) into (18) to get that is defined by In the down variation, we use (19) in a similar way to get that is defined by By construction, these variations are incentive-compatible. In the following lemma, we show that, if either variation is feasible, it improves welfare. LEMMA 1: Assume (A1) and (A2), and let ( )be an allocation in which is increasing on some interval Then the up variation and the down variation both improve welfare by increasing the objective function ( PROOF: To see that the up variation improves welfare, use (21) to write the value of the objective function under this variation as To evaluate the effect on welfare of a marginal change of this type, take the derivative of and evaluate it at to get which, with the form of reduces to If we divide (31) by the positive constant , then we can interpret (31) to be the expectation of the product of two functions, namely, defined as and defined as , where is the density of over the interval By assumption (A2a), we know that the function is strictly decreasing. Because the function is increasing on the interval , the function is decreasing on this interval in the sense that is weakly decreasing and lies strictly below its conditional mean for and strictly above its conditional mean for By the definition of a covariance, we know that cov , where the expectation is taken with respect to the density By the construction of in (24), we know that so that cov, which is clearly positive because is strictly decreasing and is decreasing on the interval . Thus, (31) is strictly positive, and the variation improves welfare. The down variation also improves welfare. The value of the objective function under this variation is by arguments similar to those given before. Q.E.D. To gain some intuition for how these variations improve welfare, we begin by emphasizing a critical insight: changing the inflation for any given type not only has direct effects on the welfare of that type, but also has indirect effects on the welfare of other types through the incentive constraints. For example, making a given type better off not only helps that type, but also makes that type less tempted to mimic higher types. Thus, the continuation values of those higher types can then be increased, if that is feasible, as in the up variation. 
In that variation, the term measures the importance of higher types relative to the rate at which changing affects expected inflation as measured by When continuation values are adjusted for types below a given type (as in the down variation) the term measures the importance of lower types relative to . In each variation, the term relates to the rate at which changing inflation for type relaxes incentive constraints. Using these ideas, let us now focus on the up variation, and consider the effects of increasing as formalized in (31). The variation affects inflation within the interval and the expression inside the integral represents, for each the direct and indirect effects of changing inflation for type We now argue that the flattening of the inflation schedule has a positive effect for a type in the bottom part of the interval, namely, for some , due to an increase in the inflation, which in turn relaxes the incentive constraint for and enables the continuation value to increase. This also creates a positive indirect effect for all types since the increase in continuation values can be passed upward without violating incentive constraints. In contrast, for a type in the top part of the interval, namely, for some the flattening of the inflation schedule has a negative effect, an effect that is passed on through the incentive constraints in the form of lower continuation values for all types Our monotone hazard rate assumption (A2a) ensures that the positive effect outweighs the negative effect: when appropriately normalized, help to lower types is more important than harm to higher types, because relative to type exerts greater indirect effects on types above . More formally, let us derive expressions for the impact of the flattening of the policy on the current payoffs of the directly affected types on as well as the continuation values of directly and indirectly affected types. The impact of increasing on the current payoff for type is while the impact on is zero outside In the up variation, the impact of increasing on the continuation value for a type is Hence, the impact on the utility of type is simply the sum of these pieces, or Notice from (34) that any change in the policy for some particular type has an indirect effect (through the incentive constraints) on the utility of all types above . Thus, each term in the integral (30) can be thought of as the sum of the change in welfare for all types and above resulting from the change in the inflation schedule for the type . Under our single-crossing assumption, ( ) so the impact of changing the policy at depends on the sign of On the interval , where is the conditional mean on this interval. By definition of the type on the interval , , and on the interval Under assumption (A2a), it is more beneficial to help lower types and hurt higher types once the cross-type externalities generated by the incentive constraints are accounted for. In the down variation, the intuition for the derivative (32) is the same as that for (31), except that, in this variation, a change in the inflation rate chosen by type affects the continuation value of all types below . Making a type at the top of the interval worse off (by flattening the inflation schedule) leaves nearby types less tempted to mimic thus, the continuation value for can be increased without inducing mimicry, and this increase can be passed on to all types . 
Making a type at the bottom of the interval better off necessitates a lower continuation value for in order to deter mimicry by nearby types, and again this decrease is passed on to types . Condition (A2b) ensures that, when weighted by the effects on average inflation, the indirect effect generated by dominates that generated by , so that flattening the schedule increases expected welfare. The following lemma proves that if is not a step function, then is increasing on some interval, and there is a feasible variation that flattens and improves welfare. LEMMA 2: Under (A1) and (A2), in the optimal mechanism, the continuation value function is a step function. PROOF: Since by assumption is piecewise-differentiable, we know from (16) that is too. By way of contradiction, assume that is not a step function. Then there is an interval over which exists and does not equal zero. Clearly, then, there is a subinterval over which is either strictly positive or strictly negative, and for some From local incentive-compatibility, we know that so regardless of the sign of we have that on this interval. Hence, is increasing on in the sense defined above. From Lemma 1, we know that if the up and down variations are feasible, then they both improve welfare. To complete the proof, we show that either the up variation or the down variation is always feasible. Under the up variation, ( and (27) imply that equals for and for where Figure 2 is a graph of in the up variation This graph illustrates several features of : it coincides with for it differs from by the constant for and it jumps at both and This last feature follows from (17) and the fact that jumps at these points. Notice in the graph that for Under the down variation, ( and (28) imply that equals for and for . Figure 3 is a graph of in the down variation. To ensure that the continuation value satisfies feasibility, we use the up variation when the term and the down variation when that term is positive. By doing so, we ensure that outside the interval the continuation value under this variation is no larger than the original continuation value , which, by assumption, is feasible. We know that inside the interval , Since is continuous in , we can choose small enough to ensure that In the next lemma, we show that and are continuous. Since we know from Lemma 2 that is a step function, we conclude that is a constant. Optimality implies that this constant is LEMMA 3: Under (A1) and (A2), and are continuous. In Appendix A, we prove that is continuous by contradiction. We show that if jumps at some point , then the same up variation and down variation we used in Lemma 1 will improve welfare. The only difficult part of the proof is showing that when the appropriate interval is selected that contains the jump point the associated continuation values are feasible. Here it may turn out that the feasibility constraint binds inside the interval in that the original allocation has for some in Thus, we cannot simply shrink the size of the weight in the variation to ensure feasibility on , as we did in the proof of Lemma 2. Instead we show that the variation is feasible inside the interval with arguments that we relegate to Appendix A. Together Lemmas 2 and establish Proposition 1, that under our assumptions, the optimal mechanism is static. Our characterization of optimal policy relied on the monotone hazard condition (A2). Under this condition, we showed that the dynamic mechanism design problem has a static solution. 
In Appendix B, we give three simple examples in which the monotone hazard condition (A2) is violated, and the dynamic mechanism design problem does not have a static solution. In the first two examples, (A2) fails because [ ] is not monotone; in the third, (A2) fails because is increasing at a sufficiently rapid rate. 3 The Optimal Degree of Discretion So far we have demonstrated that the optimal mechanism is static. Now we describe three key implications of an optimal static mechanism for monetary policy: The optimal policy has either bounded discretion or no discretion; the optimal policy can be implemented by society setting an upper limit, or cap, on the inflation rate that the monetary authority is allowed to choose; and the optimal degree of discretion is decreasing the more severe is the time inconsistency problem and the less important is private information. A Characterizing the Optimal Policy In the optimal static mechanism, the monetary policy maximizes subject to the constraints that and ( ) ( ) for all We say that a monetary policy has bounded discretion if it takes the form where is the static best response given wages Thus, for the monetary authority chooses the static best response, and for the monetary authority chooses the upper limit A policy has if for some constant so that regardless of the monetary authority chooses the same growth rate Clearly, the best policy with no discretion is the expected Ramsey policy.^8 We now show that the optimal policy has either bounded discretion or no discretion. Here, as before, we can replace the global incentive constraint in (38) with the local incentive constraints, with the restriction that In particular, Lemma 3 implies that is continuous, while (16), the condition that implies that for all is either flat or equal to the static best response. Clearly, if is flat everywhere, it is a constant; hence, it equals the expected Ramsey policy, which by definition is the best constant policy. If is not flat everywhere, then it must be of the following form for some and : where In words, the policy must be constant up to some point and equal to the static best response of type ; it must be equal to the static best response of type with ; and then it must be constant and equal to the static best response of type In the following proposition, we show that if the optimal policy is not the expected Ramsey policy, then it must be of the form ( with equal to , so that the policy's form reduces to the bounded discretion form (39). Proposition 2: Under assumptions(A1) and (A2), the optimal policy has either bounded discretion or no discretion. Proof: We have argued that if the optimal policy is constant, then it must be an expected Ramsey policy, which has no discretion. If the optimal policy is not constant, then it must be of the form ( But having the form (40) with cannot be optimal. To see this, observe that an alternative policy of the same form would exist with and We illustrate this alternative policy in Figure 4. This alternative policy would be closer to wherever it differs from and would satisfy Hence, this alternative policy would be strictly preferred to ; the change from to directly improves welfare for all types with held fixed. The change also reduces which by (4) contributes to improving total welfare. More formally, observe that the marginal impact on welfare of a marginal reduction in is given by equal to whichis positive since ( ), , , and (4). Q.E.D. 
B Implementing Optimal Policy with an Inflation Cap or a Range of Inflation Rates We have characterized the solution to a dynamic mechanism design problem. We now imagine implementing the resulting outcome with an inflation cap, a highest allowable level of inflation We imagine that society legislates this highest allowable level and that doing so restricts the monetary authority's choices to be If this cap is appropriately set and agents simply play the repeated one-shot equilibrium of the resulting game with this inflation cap, then the monetary authority will optimally choose the outcome of the mechanism design problem. In this sense, the repeated one-shot game with an inflation cap implements the policy that solves the best payoff problem. The intuition for this result--that a policy with either bounded discretion or no discretion can be implemented by setting an upper limit on permissible inflation rates--is simple. In our environment, the only potentially beneficial deviations from either type of policy are ones that raise inflation. Under bounded discretion, the types in are choosing their static best response to wages and, hence, have no incentive to deviate, whereas the types in have an incentive to deviate to a higher rate than Similarly, from Proposition 3 (stated and proved below), we know that if the expected Ramsey policy is optimal, then at this policy all types have an incentive to deviate to higher rates of inflation. Hence, an inflation cap of implements such a policy. (For completeness, we formalize this argument in Appendix C.) Clearly, we can also implement the optimal policy with a range of inflation rates denoted The top end of such a range is the inflation cap, just discussed. The bottom end of the range, , is simply the optimal policy chosen by the lowest type in the optimal static mechanism. Under a policy of bounded discretion, while under a policy of no discretion, . C Linking Discretion With Time Inconsistency and Private Information So far we have shown that the optimal policy has either bounded discretion or no discretion and discussed how to implement such a policy. Here we link the optimal degree of discretion to the severity of the time inconsistency problem and the importance of private information. We show that the optimal degree of discretion shrinks as the time inconsistency problem becomes more severe and private information becomes less important. The literature using general equilibrium models to study optimal monetary policies suggests a qualitative way to measure the severity of the time inconsistency problem. In most of this literature, the time inconsistency problem is extremely severe, in that the static Nash equilibrium is always at the highest feasible inflation rate This result follows because the static best response of the monetary authority to any given level of expected inflation is always above that level; thus, the monetary authority is always tempted to generate a monetary surprise. Examples of the models with the more severe problems are those of Ireland (1997); Chari, Christiano, and Eichenbaum (1998); and Sleet (2001). In the rest of the literature, the problem is less severe, in that the static Nash equilibrium is interior. Examples of the models with the less severe problems are those of Chang (1998), Nicolini (1998), and Albanesi, Chari, and Christiano (2003). 
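A minimal simulation of this implementation, again under the assumed quadratic example, is sketched below. Given a legislated cap, wages are the fixed point at which expected (capped) inflation equals wage growth, and the monetary authority plays its static best response truncated at the cap each period; with the illustrative numbers used here the cap should bind only for high realizations of theta, so the equilibrium policy displays the bounded-discretion form. All parameter values are assumptions.

    import numpy as np

    Ustar, A, cap = 0.1, 1.0, 0.4         # assumed: mild time inconsistency, important private information
    thetas = np.linspace(-1.0, 1.0, 401)
    probs = np.full_like(thetas, 1.0 / len(thetas))

    def capped_best_response(x):
        # static best response under the assumed quadratic payoff, truncated at the cap
        return np.minimum((Ustar + x + A * thetas) / 2.0, cap)

    x = 0.0
    for _ in range(200):                  # wages = expected inflation under the cap (a contraction)
        x = probs @ capped_best_response(x)

    mu = capped_best_response(x)
    binds = mu >= cap - 1e-9
    print("equilibrium wage growth x:", round(float(x), 4))
    print("lowest state at which the cap binds:", round(float(thetas[binds][0]), 3) if binds.any() else None)
    print("share of states constrained by the cap:", round(float(binds.mean()), 3))
    print("policy nondecreasing in theta:", bool(np.all(np.diff(mu) >= -1e-12)))

Lowering the cap in this simulation shrinks the set of unconstrained states, which is the sense in which a tighter cap means less discretion.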
In our reduced-form model, we can mimic the general equilibrium models with the more severe problems by choosing a payoff function for which for all That is, in response to any choice of wages the monetary authority wants to choose inflation higher than , regardless of its type. Under (A1), this condition is equivalent to requiring that the static best response function satisfies for all , . We show in the next proposition that this condition implies that the optimal policy has no discretion. We can mimic the general equilibrium models with less severe problems by choosing a payoff function for which the static Nash equilibrium best response is interior. For such a payoff function, the optimal policy will typically depend on parameters. When the time inconsistency problem is sufficiently mild, however, we can show a general result: that optimal policy must have bounded discretion. Here, by mild, we mean that when wages are set at the expected Ramsey level, the lowest type wants to set inflation at some level lower than the expected Ramsey level. Technically, we can state this condition as that the static best response satisfies or, equivalently, that the payoff function satisfies We summarize this discussion in a proposition: PROPOSITION 3: Assume (A1) and (A2). Two cases follow: (i) if the static best response satisfies for all, then the optimal policy has no discretion, and (ii) if the static best response satisfies , then the optimal policy has bounded discretion. PROOF: Under (A1) and (A2), the optimal mechanism is static. To prove (i), note that in any equilibrium with bounded discretion, Under A1), is strictly increasing in whenever Thus, for all , implies that whenever the right side of (41) is greater than the left side for any The only feasible policies of the bounded discretion form must have or and, hence, reduce to policies with no discretion. The optimal policy with no discretion, the expected Ramsey policy, by definition yields higher welfare. We prove (ii) by contradiction. Assume that but that the optimal policy has no discretion. The variation used in Proposition 2 immediately implies that such a policy cannot be optimal. Thus, the optimal policy must have bounded discretion. In Proposition we have characterized the form of the optimal policy for two cases for which this can be done independently of parameters. To characterize the optimal policy in the remaining case (iii ) in which but there exists an such that we return to our benchmark example (1). In general, the choice of the optimal inflation cap depends on the importance of private information relative to the severity of the time inconsistency problem. In our benchmark example, the parameter indexes the importance of private information, and the parameter indexes the severity of the time inconsistency problem. To see why indexes the importance of private information, note that the Ramsey policy is so that the slope of the policy increases with . Hence, as increases, the Ramsey policy responds more to the private information , and the gap in welfare between the Ramsey policy and the expected Ramsey policy grows. To see why indexes the severity of the time inconsistency problem, note that the Nash inflation rate is , and the Nash policies are The Ramsey inflation rate is and the Ramsey policies are Thus, for each type the Nash policies are simply the Ramsey policies shifted up by As gets smaller, the Nash policies converge to the Ramsey policies. When is zero, the Nash and Ramsey policies coincide. 
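The roles of the two parameters can be seen directly in the assumed quadratic example. Under that specification (again, an assumption standing in for the paper's equation (1)), the full-information Ramsey policy responds to theta with a slope that grows with the private-information weight, while the static Nash policy is the Ramsey policy shifted up by a constant that grows with the natural-rate parameter, so the shift measures the severity of the time inconsistency problem. A short sketch:

    import numpy as np

    thetas = np.linspace(-1.0, 1.0, 5)

    def ramsey_policy(theta, Ustar, A):
        return A * theta / 2.0            # commitment solution of the assumed quadratic example

    def nash_policy(theta, Ustar, A):
        return Ustar + A * theta / 2.0    # one-shot Nash of the same example: Ramsey shifted up by Ustar

    for Ustar, A in [(1.0, 0.5), (0.2, 0.5), (0.0, 0.5)]:
        gap = nash_policy(thetas, Ustar, A) - ramsey_policy(thetas, Ustar, A)
        print(f"Ustar={Ustar}: Nash minus Ramsey is constant at {gap.mean():.2f} (inflation bias)")

As the natural-rate parameter shrinks to zero in this sketch, the Nash and Ramsey policies coincide, mirroring the discussion above.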
When the objective function satisfies (1), the condition in Proposition 3 reduces to , where is a negative number. Proposition 3 thus implies that bounded discretion is optimal when private information is important relative to the severity of the time inconsistency problem. We characterize the optimal mechanism in the benchmark case more fully in the next proposition, to get a more precise link between the severity of the time inconsistency problem and the optimal degree of discretion. For policies of the bounded discretion form (39), we think of as indexing the degree of discretion. If then all types are on their static best responses; hence, we say there is complete discretion. As decreases, fewer types are on their static best responses; hence, we say there is less discretion. We then have this proposition: PROPOSITION 4: Assume (1), (A1), and (A2a). If then the optimal policy has complete discretion. If then that policy has bounded discretion with The optimal degree of discretion is decreasing in As approaches, the cutoff approaches . If then the optimal policy is the expected Ramsey policy with no discretion. We prove this proposition in Appendix D. Figure 5 illustrates the proposition for two economies with different degrees of relative importance of private information and severity of time inconsistency problems, . In these two economies, we denote the optimal policies by indexed by and indexed by , along with the inflation caps and 4 Comparison to the Literature Our result on the optimality of a static mechanism is quite different from what is typically found in dynamic contracting problems, that static mechanisms are not optimal. Using a recursive approach, we have shown how our dynamic mechanism design problem reduces to a simple quasi-linear mechanism design problem. Our result is thus also directly comparable to the large literature on mechanism design with broad applications, including those in industrial organization, public finance, and auctions. (See Fudenberg and Tirole's 1991 book for an introduction to mechanism design and its applications.) In this comparison, the continuation values in our framework correspond to the contractual compensation to the agent in the mechanism design literature. Our result that the optimal mechanism is static, so that the continuation values do not vary with type, stands in contrast to the standard result in the mechanism design literature that under the optimal contract, the compensation to the agent varies with the agent's type. In this sense, our result is also quite different from what is found in the mechanism design literature. The key feature of our model that distinguishes it from much of the dynamic incentive literature is the feasibility constraint The implication of this constraint is that in our model the continuation values of one type cannot be traded off against other types as they can be in many other models. To highlight the importance of this constraint, we consider a highly stylized example in Appendix E that replaces the constraint with and show that the resulting optimal value of then differs radically from our result: the optimal value of then varies with In providing incentives under (43), a low continuation value for one type can be traded off against a high continuation value for another. 
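Proposition 4's comparative statics can be illustrated numerically by sweeping over caps, computing the one-shot equilibrium under each cap, and reporting the welfare-maximizing cap for economies that differ in the severity of the time inconsistency problem relative to the importance of private information. As before, the quadratic payoff and all parameter values are illustrative assumptions; the sweep should show the optimal cap binding for a larger share of states (less discretion) as the natural-rate parameter rises relative to the private-information weight.

    import numpy as np

    thetas = np.linspace(-1.0, 1.0, 201)
    probs = np.full_like(thetas, 1.0 / len(thetas))

    def equilibrium_under_cap(cap, Ustar, A):
        x = 0.0
        for _ in range(200):                              # wages = expected (capped) inflation
            mu = np.minimum((Ustar + x + A * thetas) / 2.0, cap)
            x = probs @ mu
        U = Ustar - (mu - x)                              # assumed Phillips curve
        welfare = probs @ (-0.5 * (U ** 2 + (mu - A * thetas) ** 2))
        return welfare, float(np.mean(mu >= cap - 1e-9))

    caps = np.linspace(-0.25, 1.25, 151)
    for Ustar, A in [(0.05, 1.0), (0.3, 1.0), (0.3, 0.2)]:  # increasingly severe bias relative to information
        welfare, share = zip(*(equilibrium_under_cap(c, Ustar, A) for c in caps))
        best = int(np.argmax(welfare))
        print(f"Ustar={Ustar}, A={A}: best cap ~ {caps[best]:.2f}, "
              f"share of states at the cap ~ {share[best]:.2f}")

In the last configuration, where the time inconsistency problem is severe relative to the private information, the best cap in this sketch should bind in essentially every state, which is the no-discretion (expected Ramsey) case of the proposition.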
This feature is common in a wide variety of incentive problems, and in them, the optimal incentive scheme has varying with the type In contrast, when providing incentives under (42), this tradeoff cannot be made: a low value of for one type does not let us raise the value of for some other type. Hence, under (42), using to provide incentives is akin to burning money. A large class of dynamic incentive models include a feature like (43); they might usefully be thought of as debt models. Early versions of these include the private debt models of Green (1987), Thomas and Worrall (1990), Atkeson (1991), and Atkeson and Lucas (1992, 1995) while later versions include the government debt models of Sleet and Yeltekin (2003) and Sleet (2004). All of these models share the feature that optimal contracts are dynamic because in each of these settings a low continuation for one type can be traded off against a high continuation value for another type. In this sense, the debt models share many of the features of models with constraints of the form (43) rather than those with constraints of the form (42). Having a constraint like (42) rather than (43) is important for our result that the optimal mechanism is static, but it is not sufficient, for at least two reasons. First, even in our model, we have given examples in which the optimal mechanism is dynamic when our monotone hazard condition is violated. Second, the information structure also matters. In our model, private agents receive no direct information about the state of the economy. If private agents receive a noisy signal about the state before the monetary authority takes its action, then our result goes through pretty much unchanged; the noisy signal is just a publicly observed variable upon which the inflation cap is conditioned. If, however, private agents receive a noisy signal about the information the monetary authority received after the monetary authority takes its action, then dynamic mechanisms in which continuation values vary with this signal may be optimal. Sleet (2001) considers such an information structure and shows that the optimality of the dynamic mechanism depends on the parameters governing the noise. He finds that when the public signal about the monetary authority's information is sufficiently noisy, having the monetary authority's action depend on its private information is not optimal; hence, the optimal mechanism is static. In contrast, when this public signal is sufficiently precise, the optimal mechanism is dynamic. The logic of why a dynamic mechanism is optimal is roughly similar to that in the literature of industrial organization which follows Green and Porter (1984) on optimal collusive agreements that are supported by periodic reversion to price wars, even though these price wars lower all firms' profits. Our work here is also related to some of the repeated game literature in industrial organization about supporting collusion in oligopolies. Athey and Bagwell (2001) and Athey, Bagwell, and Sanchirico (2004) solve for the best trigger strategy-type equilibria in games with hidden information about cost types. Athey and Bagwell (2001) show that, in general, the best equilibrium is dynamic (nonstationary). In this equilibrium, a firm which sets low prices gets a lower discounted value of profits from then on. 
Athey, Bagwell, and Sanchirico (2004) show that when strategies are restricted to be strongly symmetric, so that all firms receive the same continuation values even though they take observably different actions, a different result emerges. In particular, under some conditions, the best equilibrium is stationary and entails pooling of all cost types. When those conditions fail, and when firms are sufficiently patient, there may be a set of stationary and nonstationary equilibria that yield the same payoffs. (The latter result relies heavily on the Revenue Equivalence Theorem from auction theory.) 5 Conclusion What is the optimal degree of discretion in monetary policy? For economies in which private information is not important and time inconsistency problems are severe, the optimal degree of discretion is zero. For economies in which private information is important and time inconsistency problems are less severe, it is not zero, but bounded. More generally, the optimal degree of discretion is decreasing the more severe is the time inconsistency problem and the less important is private information. For all of these economies, the optimal policy can be implemented by legislating and enforcing a simple inflation cap. In our simple model, the optimal inflation cap is a single number because there is no publicly observed state. If the model were extended to have a publicly observed state, then the optimal policy would respond to this state, but not to the private information. To implement optimal policy, therefore, society would need to specify a rule for setting the inflation cap, where the cap would vary with public information. Equivalently, society could specify a rule for setting ranges for acceptable inflation, where these ranges would vary with public information. We interpret these rules as a type of inflation targeting that is broadly similar to the types actually practiced by a fair number of countries. (For a discussion of inflation targeting in practice, see Bernanke and Mishkin To keep our theoretical model simple, we have abstracted from exotic events which are both unforeseeable and unquantifiable. Anyone interpreting the implications of our results for an actual society, therefore, should keep in mind that to handle such exotic events, the optimal policy rule would need to be adapted to deal with them, perhaps by the addition of some type of escape clauses. Appendix A: Proof of Lemma 3 Here we prove Lemma 3, that under (A1) and (A2), the optimal allocation is continuous. The proof is by contradiction. PROOF. In Lemma 2, we showed that in an optimal allocation must be a step function. Thus, two types of potential discontinuities in the allocation must be ruled out. In the first type, and, potentially, jump at some point and are both constant in some intervals and on either side of the jump point In the second type of discontinuity, and both jump at the point , and is equal to the static best response in some interval or on either side of the jump point Consider now the first type of discontinuity, when and are constant on some intervals and on either side of the point of discontinuity Let denote the allocation on and denote the allocation on . By the continuity of , we can choose the interval small enough so that if is strictly positive, then so is , and if is strictly negative, then so is Under these assumptions, is increasing on the interval We next show that if, for the chosen interval the term , defined in (36), is negative for small then the up variation is feasible. 
That this variation is feasible outside the interval is clear from the proof of Lemma 2. What needs to be proved is that this variation is also feasible inside the interval Using essentially the same argument, we show that if is positive for small then the down variation is feasible. Hence, by the same logic as in the proof of Lemma 2, the optimal allocation cannot have this first type of discontinuity. Suppose that for the chosen interval the term is negative for small Since this implies that Using the form of on the interval , we have that To show that the up variation is feasible inside the interval , we show that on for small We do so by showing that either or for and, similarly, either or for . To show that, we differentiate (27) to obtain that is given by Using we can rewrite these expressions as Consider first (46). By construction and so if then so is , and we have that for Alternatively, if since is strictly concave, then it must be true that , and hence, Consider next (47). Note that we can rewrite (44) as Compare this expression for to the right side of (47) to see that (47) is negative if is positive. Since by construction, (47) is less than zero if is, because then is also negative. Alternatively, if is nonnegative, since is strictly concave it must be true that Hence, from (17), we know that These arguments establish that if is negative for small then on for small If the term is positive for small , we use the down variation and an analogous argument to the one above to establish the same result that on for small Now consider the second type of discontinuity, when is constant on one side of and equal to the static best response on the other side of . Suppose, for example, that equals the static best response for on some interval . Clearly, is increasing on the interval Since jumps up at it must be true that ( ) ( ) Hence, from condition (17) in local incentive-compatibility, we know that Thus, for . Hence, either the up variation or the down variation can be applied to this allocation in the interval as in the proof of Lemma 2, and thus, such an allocation cannot be optimal. With an analogous argument, we can rule out the case in which equals the static best response for on the other side of the jump point, on some interval . Appendix B: Optimal Policy without Monotone Hazards Here we give three examples in which our monotone hazard condition (A2) is violated and in which the optimal mechanism is dynamic. In the first two examples, we assume that the hazard is decreasing in at all points except the point , where the hazard jumps up. We also assume that is increasing throughout. In the third example, we shed light on the role of in (A2) by assuming that the hazard is decreasing throughout but that is not. For the first two examples, assume that at the point To interpret this inequality, note that the left side is the conditional mean of the function over the interval while the right side is the conditional mean of this function over the interval Clearly, for any distribution for which is decreasing throughout , this inequality is reversed. 
It is easy to show that a two-piece uniform distribution with if and if will satisfy (48) if is chosen to be sufficiently small relative to In this case, illustrated in Figure 6, the function will jump up sufficiently at so that the conditional mean of this function over the higher interval is larger than the conditional mean over the lower interval In the first example, the linear example, we make the calculations trivial by assuming that with . In the second example, which is the benchmark example of (1), we assume that In the third example, the discrete example, with an increasing nonlinear function. All three of these examples satisfy the single-crossing property (A1). In the first two examples, , so that the condition (A2) reduces to the standard monotone hazard condition. Note that for the first two examples, any distribution that satisfies ( is inconsistent with the monotone hazard condition (A2a). The Linear Example Any solution to the mechanism design problem must have the two-piece form This follows because the arguments used in Lemmas 1 and 2 can be applied separately to the intervals and and because for any the static best response to any in the interval is a constant, namely, the upper limit Since this policy must satisfy the incentive constraint the monotonicity condition implies that Thus, we know that and that the constraint will be automatically satisfied by any monotonic The mechanism design problem then reduces to the linear problem of choosing , , and to maximize subject to the constraints that and If ( holds and if the lower and upper limits include the expected Ramsey policy, then the optimal policy will have either or To see this, consider spreading out the policy by decreasing by and increasing by , so that the change in expected inflation is zero. The associated welfare change can be written as where the inequality follows from ( . Hence, the solution must have ,and from the incentive constraint, we then know that Thus, the solution to the mechanism design problem is necessarily dynamic. The Benchmark Example Now assume that the policy which solves the static mechanism design problem, has bounded discretion and that so that the jump point in the hazard occurs on the flat portion of that policy. (We can construct a numerical example in which this assumption holds.) We will show that there is a dynamic mechanism that improves on the optimal static mechanism. The basic idea is to use a variation that spreads out the inflation schedule as a function of type instead of flattens it as did the variation in Lemmas 1 and 2. This variation is similar to the one in the linear example. Consider an alternative policy that lowers inflation for types at or below raises it for types above , and keeps expected inflation with and so that expected inflation is constant. Note that this alternative policy is monotonically increasing, since must be. Our variation is a marginal shift from toward defined as for each Welfare is given by The impact of this variation on welfare is given by Since has bounded discretion, ( ) In our quadratic example, ( ) ; hence, (52) reduces to (51), which we know from (48) is positive It is straightforward, but somewhat tedious, to show that the associated continuation values defined by have for all and for . To show this, we use the facts that ( ) and , so that for These results imply that this variation both improves welfare and is feasible. 
Thus, the optimal mechanism must be Note that if has no discretion, then we need a different condition on the distribution to show that the static mechanism is not optimal. This is because when has no discretion, we can have ( ), and the above argument that for all does not go through. When has no discretion, the analog of the condition (48) is that at there exists a such that With this condition, the optimal mechanism is dynamic rather than static. Note that, in our linear example, this distinction did not come up because there our utility function is such that ( ) with no discretion. The Discrete Example Now let the types be for with associated probabilities , and let Then it is easy to show that under the discrete analog of (A1), the only relevant incentive constraints are for and . The discrete analog of (A2) for types and is which here reduces to We now give an example in which the hazard ( is monotone but is so convex that (54) is violated, and the optimal policy is dynamic. Suppose that is part of a candidate optimal policy. Consider the variation of decreasing and by and increasing by ( , so that expected inflation is constant. We can maintain incentives by keeping and unchanged and lowering by This variation leads to a change in welfare of With a uniform distribution, , and with , this variation is welfare-improving as long as In Sum In each of the three examples, we have shown that welfare could be improved relative to a static policy by raising inflation for high types and lowering inflation for low types so as to keep expected inflation constant. In the first two examples, this improved welfare because there were sufficiently few high types relative to low types; we could raise inflation a lot for the types who valued it more and lower it only a little for the types who valued it less. In the third example, even though the distribution of types is uniform, the high types valued inflation so much more than the low types that raising inflation for the high types and lowering it for the low types still improved welfare. Appendix C: Implementation with an Inflation Cap Here we prove that the equilibrium outcome in an economy with an inflation cap is the optimal outcome of the mechanism design problem. We show this result formally using a one-shot game in which we drop time subscripts. With an inflation cap of in the current period, the problem of the monetary authority at a given is, given aggregate wages , to choose money growth for the state to maximize subject to The private agents' decisions on wages are summarized by An equilibrium of this one-shot game consists of aggregate wages and a money growth policy such that (i) with given, satisfies , and (ii) We denote the optimal choice of the monetary authority as This notation reflects the fact that the monetary authority is choosing a static best response to given that its choice set is restricted by , which we call the inflation cap. To implement the best equilibrium in the dynamic game, we choose as follows. Whenever the expected Ramsey policy is optimal, we choose the inflation cap to be Whenever bounded discretion is optimal, we choose the cap to be the money growth rate chosen by the cutoff type : where is the equilibrium inflation rate with this level of bounded discretion. PROPOSITION 5: Assume (A1), (A2), and that the inflation cap is set according to (55) and (56). Then the equilibrium outcome of the one-shot game with the inflation cap for each period coincides with the optimal equilibrium outcome of the dynamic game. 
PROOF: We establish this result in two steps. We first show that the monetary authority will choose the upper bound when the expected Ramsey policy is optimal in the dynamic game. Note that Proposition 3 implies that whenever the expected Ramsey policy is optimal, Also, recall that the single-crossing assumption (A1) implies that the best response is strictly increasing in . Thus, for all Hence, at the expected Ramsey policy and the associated inflation rate, all types want to deviate by increasing their inflation above ; hence, the constraint binds, and all types choose the expected Ramsey level. We next show that if bounded discretion is optimal in the dynamic game, then in the associated static game with the inflation cap, all types choose the bounded discretion policies. For all types the policies under bounded discretion are simply the static best responses, and these clearly coincide with those in the static game. For all types above the policies under bounded discretion are the static best responses of the type, namely, , where is the equilibrium expected inflation rate under bounded discretion. Under assumption (A1), the static best responses are increasing in the type, so that the best response of any type is above Thus, in the one-shot game with the inflation cap, the constraint (56) binds for such types. Thus, the equilibrium outcomes of the two games coincide. Appendix D: Proof of Proposition 4 Here we prove Proposition 4, which links monetary policy discretion to both time inconsistency and private information. PROOF: The optimal policy with bounded discretion is found as the solution to the problem of choosing and to maximize Let be the Lagrange multiplier on (57); then the first-order conditions for and imply that the derivative of the objective function with respect to is Using our functional forms and , we can simplify this derivative to We can show that, under (A2a), this derivative is strictly decreasing in as follows. Integration by parts gives that so that (58) is equivalent to and this expression is clearly strictly decreasing in under (A2a). The fact that (58) is strictly decreasing in implies that three possible cases characterize the optimal policy with bounded discretion, all of which depend on the value of In one case, the derivative (58) is positive for all , and the solution is Since the first term of (58) equals zero when this case occurs only when As is clear, in this case, there is no time inconsistency problem, and the Ramsey policy is incentive-compatible. In a second case, the derivative (58) is negative for all , and the solution is Since the derivative (58) evaluated at reduces to this case occurs when Note that in this case, the optimal policy with bounded discretion specifies a constant inflation rate and, hence, is dominated, at least weakly, by the expected Ramsey policy with no discretion. Hence, we say that in this case, the optimal policy has no discretion. In the third case, there is an interior that sets the derivative (58) to zero. This case occurs when Clearly, in this case, the value of characterizing the optimal degree of discretion is decreasing in Finally, to complete the proof of Proposition 4, we must show that when the optimal policy with bounded discretion dominates the expected Ramsey policy. To do so, we use part (ii) of Proposition 3. Note that when , we have that The result then follows directly from Proposition 3. 
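Two routine ingredients of the argument in Appendix D can be made concrete, with the caveat that the paper's own expression (58) and the exact statement of (A2a) do not survive in this text-only version, so what follows is only an illustration of the generic steps, not a restatement of the paper's equations. The integration-by-parts identity typically used to rewrite terms involving the distribution, and a check that a uniform distribution of types satisfies a monotone hazard condition of the usual form, are:
$$
\int_{\theta^{*}}^{\bar{\theta}} \bigl(1 - F(\theta)\bigr)\, d\theta
\;=\; \int_{\theta^{*}}^{\bar{\theta}} \bigl(\theta - \theta^{*}\bigr)\, f(\theta)\, d\theta ,
$$
and, for $\theta$ uniform on $[\underline{\theta}, \bar{\theta}]$,
$$
f(\theta) = \frac{1}{\bar{\theta} - \underline{\theta}}, \qquad
\frac{1 - F(\theta)}{f(\theta)} = \bar{\theta} - \theta ,
$$
which is strictly decreasing in $\theta$ (equivalently, the hazard rate $f/(1-F) = 1/(\bar{\theta} - \theta)$ is strictly increasing). Under a condition of this kind the derivative of the objective with respect to the cutoff crosses zero at most once, which is what delivers the three mutually exclusive cases described above.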
Appendix E: The Role of Our feasibility Constraint Here we develop a highly stylized example (about traffic congestion) that illustrates the importance of the feasibility constraint in generating our result that the optimal policy is static. In the example, we replace this constraint with the constraint and show that the resulting optimal mechanism differs radically from ours. To be concrete, consider a mechanism design problem of choosing and to solve [( )] and One interpretation of this problem is as follows. A large number of people want to share a road. Each person differs from the others in their desire to use the road, as indexed by the privately observed Let denote the time that type is allowed to drive. Let denote the average traffic on the road, as denoted by (62). Because of congestion, people dislike higher average traffic ( is decreasing in . Let denote the toll to drive Constraint (60) is a budget constraint on tolls, where is the money needed to operate the road, possibly zero. It is easy to see that here the optimal varies with Specifically, can be chosen in such a way as to support the first best. (Here we are assuming (A1), so that the first best schedule for is upward-sloping. To see this result, drop the incentive constraint (61) and solve for the first best ; then use the local incentive-compatibility condition to construct the function, up to the constant , that makes incentive-compatible. Finally, choose the constant to satisfy (60).) Clearly, the answer to this problem is very different from the answer to our problem; here the optimal varies with while in ours it does not and Note that the result that the first best is incentive-compatible is special to this functional form in which payoffs are linear in If instead we had [( )()] with concave, then we would have the standard tradeoff between insurance (or redistribution) and incentives. How could we interpret our model and results in this road congestion context? Suppose that using tolls is not feasible, and the only way to ration road use is to make people wait to get on the road. Let be the amount of time someone has to wait to drive and let be the associated utility from waiting Then is, of course, equivalent to In this context, we get a very different answer than when using tolls is feasible. Under (A1) and (A2), the optimal scheme is to have no one wait ( and let everyone drive as much as they like, subject to a cap, ABREU, D., D. PEARCE, and E. STACCHETTI (1990): `` Toward a Theory of Discounted Repeated Games with Imperfect Monitoring,'' Econometrica, 58, 1041-1063. ALBANESI, S., V. CHARI, and L. CHRISTIANO (2003): `` Expectation Traps and Monetary Policy,'' Review of Economic Studies, 70, 715-741. ALBANESI, S., and C. SLEET (2002): `` Optimal Policy with Endogenous Fiscal Constitutions,'' Manuscript, Fuqua School of Business, Duke University. AMADOR, M., I. WERNING, and G. ANGELETOS (2004): `` Commitment vs. Flexibility,'' Manuscript, Massachusetts Institute of Technology. ANGELETOS, G., C. HELLWIG, and A. PAVAN (2003): `` Coordination and Policy Traps,'' NBER Working Paper 9767. ATHEY, S., and K. BAGWELL (2001): `` Optimal Collusion with Private Information,'' RAND Journal of Economics, 32, 428-465. ATHEY, S., K. BAGWELL, and C. SANCHIRICO (2004): `` Collusion and Price Rigidity,'' Review of Economic Studies, 71, 317-349. ATKESON, A. (1991): `` International Lending with Moral Hazard and Risk of Repudiation,'' Econometrica, 59, 1069-1089. ATKESON, A., and R. 
LUCAS (1992): `` On Efficient Distribution with Private Information,'' Review of Economic Studies, 59, 427-453. ----- (1995): `` Efficiency and Equality in a Simple Model of Efficient Unemployment Insurance,'' Journal of Economic Theory, 66, 64-88. BACKUS, D., and J. DRIFFILL (1985): `` Inflation and Reputation,'' American Economic Review, 75, 530-538. BARRO, R., and D. GORDON (1983): `` Rules, Discretion and Reputation in a Model of Monetary Policy,'' Journal of Monetary Economics, 12, 101-121. BERNANKE, B., and F. MISHKIN (1997): `` Inflation Targeting: A New Framework for Monetary Policy?'' Journal of Economic Perspectives, 11, 97-116. BERNANKE, B., and M. WOODFORD (1997): `` Inflation Forecasts and Monetary Policy,'' Journal of Money, Credit, and Banking, 39, 653-684. CANZONERI, M. (1985) `` Monetary Policy Games and the Role of Private Information,'' American Economic Review, 75, 1056-1070. CHANG, R. (1998): `` Credible Monetary Policy in an Infinite Horizon Model: Recursive Approaches,'' Journal of Economic Theory, 81, 431-461. CHARI, V., L. CHRISTIANO, and M. EICHENBAUM (1998): `` Expectation Traps and Discretion,'' Journal of Economic Theory, 81, 462-492. CHARI, V., and P. KEHOE (1990): `` Sustainable Plans,'' Journal of Political Economy, 98, 783-802. CUKIERMAN, A., and A. MELTZER (1986): `` A Theory of Ambiguity, Credibility, and Inflation under Discretion and Asymmetric Information,'' Econometrica, 54, 1099-1128. DA COSTA, C., and I. WERNING (2002): `` On the Optimality of the Friedman Rule with Heterogeneous Agents and Non-Linear Income Taxation,'' Manuscript, Massachusetts Institute of Technology. FAUST, J., and L. SVENSSON (2001): `` Transparency and Credibility: Monetary Policy with Unobservable Goals,'' International Economic Review, 42, 369-397. FUDENBERG, D., and J. TIROLE (1991): Game Theory. Cambridge, Mass.: MIT Press. GREEN, E. (1987): `` Lending and the Smoothing of Uninsurable Income,'' in Contractual Arrangements for Intertemporal Trade. Minneapolis: University of Minnesota Press. GREEN, E., and R. PORTER (1984): `` Noncooperative Collusion under Imperfect Price Information,'' Econometrica, 52, 87-100. IRELAND, P. (1997): `` Sustainable Monetary Policies,'' Journal of Economic Dynamics and Control, 22, 87-108. ----- (2000): `` Expectations, Credibility, and Time-Consistent Monetary Policy,'' Macroeconomic Dynamics, 4, 448-466. KOCHERLAKOTA, N. (1996): `` Implications of Efficient Risk Sharing without Commitment,'' Review of Economic Studies, 63, 595-609. KYDLAND, F., and E. PRESCOTT (1977): `` Rules Rather Than Discretion: The Inconsistency of Optimal Plans,'' Journal of Political Economy, 85, 473-491. NICOLINI, J. (1998): `` More on the Time Consistency of Monetary Policy,'' Journal of Monetary Economics, 41, 333-350. PERSSON, T., and G. TABELLINI (1993): `` Designing Institutions for Monetary Stability,'' Carnegie-Rochester Conference Series on Public Policy,'' 39, 53-84. PHELAN, C., and E. STACCHETTI (2001): `` Sequential Equilibria in a Ramsey Tax Model,'' Econometrica, 69, 1491-1518. RAMPINI, A. (Forthcoming): `` Default and Aggregate Income,'' Journal of Economic Theory. ROMER, C., and D. ROMER (2000): `` Federal Reserve Information and the Behavior of Interest Rates,'' American Economic Review, 90, 429-457. SLEET, C. (2001): `` On Credible Monetary Policy and Private Government Information,'' Journal of Economic Theory, 99, 338-376. ----- (2004): `` Optimal Taxation with Private Government Information,'' Review of Economic Studies, 71, 1217-1239. 
SLEET, C., and S. YELTEKIN (2003): ``Credible Monetary Policy with Private Government Preferences,'' Manuscript, Kellogg School of Management, Northwestern University. STOKEY, N. (2003): ```Rules versus Discretion' After Twenty-Five Years,'' in NBER Macroeconomics Annual 2002, vol. 17, ed. by M. Gertler and K. Rogoff. Cambridge, Mass.: MIT Press. TAYLOR, J. (1983): ``Rules, Discretion and Reputation in a Model of Monetary Policy: Comments,'' Journal of Monetary Economics, 12, 123-125. THOMAS, J., and T. WORRALL (1990): ``Income Fluctuation and Asymmetric Information: An Example of a Repeated Principal-Agent Problem,'' Journal of Economic Theory, 51, 367-390. WALSH, C. (1995): ``Optimal Contracts for Central Bankers,'' American Economic Review, 85, 150-167.
Figures 1-6: graphics not reproduced in this text-only version.
Footnote: Also, for simplicity, our formulation abstracts from direct costs due to future inflation. One interpretation of this feature is that it captures what happens in the cashless limit of a sticky price model.
{"url":"http://www.federalreserve.gov/pubs/ifdp/2004/801/ifdp801.htm","timestamp":"2014-04-16T22:33:03Z","content_type":null,"content_length":"335450","record_id":"<urn:uuid:9b3489d6-bcc2-4cc7-b2c8-fbb2ed900a0b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Independence of Square Roots of Primes Date: 11/07/96 at 07:17:49 From: Mats Oldin Subject: Linear independence of square roots of primes I need to find a way of proving that the square roots of a finite set of different primes are linearly independent over the field of rationals. I've tried to solve the problem using elementary algebra and also using the theory of field extensions, without success. To prove linear independence of two primes is easy but then my problems arise. I would be very thankful for an answer to this question. Best regards, Mats Oldin Date: 11/11/96 at 14:37:15 From: Doctor Rob Subject: Re: Linear independence of square roots of primes Actually, something more general is true. You need only assume that the integers you are taking the square roots of are >1, squarefree and pairwise relatively prime. One proof goes as follows. We will prove that as a vector space, the dimension of the field extension gotten from Q, the field of rational numbers, by adjoining the square roots of s integers > 1 which are squarefree and pairwise relatively prime, is 2^s. Proceed by induction on the number of such integers adjoined. We know it works for s = 1 and s = 2, as you state in your question. Assume it works for 1, 2, ..., s roots. Let n(1), n(2), ... n(s), n(s+1) be such a set of s+1 integers. Let us use the letters E and F to represent the following fields: E = Q(sqrt[n(1)], ..., sqrt[n(s-1)]) F = E(sqrt[n(s)]) By induction, dim(E) = 2^(s-1) and dim(F) = 2^s. We would be done with the proof if we could show that sqrt[n(s+1)] is not an element of F. Assume otherwise, that sqrt[n(s+1)] is in F. Then for some a, b in E, we can write: sqrt[n(s+1)] = a + b*sqrt[n(s)] Squaring both sides gives: n(s+1) = a^2 + 2*a*b*sqrt[n(s)] + b^2*n(s) 2*a*b*sqrt[n(s)] = n(s+1) - a^2 - b^2*n(s) The righthand side lies in E. Three cases are possible: 1) a = 0. 2) b = 0. 3) sqrt[n(s)] is in E. Case 1: a = 0. Then sqrt[n(s+1)] = b*sqrt[n(s)]. This implies that sqrt[n(s)*n(s+1)] = b*n(s) lies in E. Then the set of s integers n(1), n(2), ..., n(s-1), n(s)*n(s+1) satisfies the induction hypothesis, so the dimension of this field extension must be 2^s. On the other hand, this field extension must be exactly E, whose dimension is 2^(s-1), a clear contradiction. Thus this case is Case 2: b = 0. Then sqrt[n(s+1)] = a lies in E, and the set of s integers n(1), n(2), ..., n(s-1), n(s+1) satisfies the induction hypothesis, so the dimension of this field extension must be 2^s. On the other hand, this field extension must be exactly E, whose dimension is 2^(s-1), again a contradiction. Thus this case is Case 3: sqrt[n(s)] is in E. Then F = E, which is also a contradiction, since the dimension of F is twice that of E. Thus this case, too, is impossible. The conclusion is that sqrt[n(s+1)] cannot lie in F, so we are done. -Doctor Rob, The Math Forum Check out our web site! http://mathforum.org/dr.math/
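As a concrete warm-up for the induction above, here is the s = 2 case that the questioner says is easy, written out in full: the standard argument that $\sqrt{3}$ does not lie in $\mathbb{Q}(\sqrt{2})$. The three cases in Doctor Rob's proof are essentially this computation repeated at the inductive step.

Suppose $\sqrt{3} = a + b\sqrt{2}$ with $a, b \in \mathbb{Q}$. Squaring gives
$$ 3 = a^2 + 2b^2 + 2ab\sqrt{2}, $$
and since $\sqrt{2}$ is irrational this forces $ab = 0$. If $b = 0$, then $\sqrt{3} = a$ is rational, which is impossible because 3 is squarefree and greater than 1. If $a = 0$, then $2b^2 = 3$, which has no rational solution (compare the parity of the exponent of 2 on the two sides). Hence $\sqrt{3} \notin \mathbb{Q}(\sqrt{2})$, so $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ has dimension 4 over $\mathbb{Q}$, and $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$ are linearly independent over the rationals.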
{"url":"http://mathforum.org/library/drmath/view/51638.html","timestamp":"2014-04-17T13:27:02Z","content_type":null,"content_length":"7970","record_id":"<urn:uuid:d1538b06-a841-4d78-a164-57f6bdcd0acf>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Kuratowski closure-complement problem for other mathematical objects? up vote 10 down vote favorite The original Kuratowski closure-complement problem asks: How many distinct sets can be obtained by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space? My question is: what is known about analogous questions in other settings? Here's an example of what I'm thinking of, for rings: How many distinct ideals can be obtained by repeatedly applying the operations of radical and annihilator to a given starting ideal $I$ of a commutative ring $R$? Note that $r(r(I))=r(I)$ and $I\subseteq Ann(Ann(I))=\{x\in R: x\cdot Ann(I)=(0)\}$, which are the best analogs I could think of to $\overline{\overline{S}}=\overline{S}$ and $(S^C)^C=S$. Also: what is the structure necessary to formulate this kind of question called, and where does it occur naturally? It seems like we need at least a poset, but with distinguished idempotent and involution operations to generalize the closure and complement, respectively. The radical or annihilator of an arbitrary subset of a ring is an ideal, so you'd gain exactly one set by changing "ideal" to "arbitrary subset." – Qiaochu Yuan Feb 25 '10 at 4:29 Well, unless you're working in the zero ring, every subset of which is an ideal! – Qiaochu Yuan Feb 25 '10 at 4:31 Atiyah-Macdonald, p.9 says that the radical of an arbitrary subset of a ring is not necessarily an ideal - an example would be, in $\mathbb{Z}$, $rad({2})={2}$. – Zev Chonoles Feb 25 '10 at 5:05 Ah. I was thinking of defining the radical of a subset as the intersection of the prime ideals containing it. This is the analogue of defining the closure of a subset as the intersection of the closed sets containing it. – Qiaochu Yuan Feb 25 '10 at 5:05 Good point - that might make more sense. Well, let's just see what people come up with for the case of ideals, first. – Zev Chonoles Feb 25 '10 at 5:29 show 2 more comments 1 Answer active oldest votes Here's a paper that might be of interest: D. Peleg, A generalized closure and complement phenomenon, Discrete Math., v.50 (1984) pp.285-293. Other than what's found in the above paper I do not know of any general theory or framework specifically aimed at organizing results similar to the Kuratowski closure-complement problem, i.e., those which involve starting with a seed object (or objects) and repeatedly applying operations to generate further objects of the same type in a given space. up vote 5 Here's a general sub-question I thought of recently, that might be interesting to study: down vote accepted "What's the minimum possible cardinality of a seed set that generates the maximum number of sets via the given operations?" A few years ago I proposed a challenging Monthly problem (11059) that essentially asks this question for the operations of closure, complement, and union in a topological space. It does turn out there's a space containing a singleton that generates infinitely many sets under the three operations, but it's a bit tricky to find. I haven't looked into the question yet for other operations. As far as I know it hasn't been discussed yet in the literature (apart from the specific case addressed by my problem proposal). Thanks for the reference and intriguing subquestion! – Zev Chonoles Feb 27 '10 at 1:55 add comment Not the answer you're looking for? Browse other questions tagged order-theory or ask your own question.
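To make the ring-theoretic variant of the question concrete, here is a small worked example of my own (not taken from the discussion above), in the ring $R = \mathbb{Z}/12\mathbb{Z}$ with starting ideal $I = (4) = \{0, 4, 8\}$:
$$ \operatorname{Ann}\bigl((4)\bigr) = \{x : 4x = 0\} = (3), \qquad r\bigl((4)\bigr) = (2), $$
$$ \operatorname{Ann}\bigl((3)\bigr) = (4), \qquad r\bigl((3)\bigr) = (3), $$
$$ \operatorname{Ann}\bigl((2)\bigr) = (6), \qquad r\bigl((2)\bigr) = (2), $$
$$ \operatorname{Ann}\bigl((6)\bigr) = (2), \qquad r\bigl((6)\bigr) = (6). $$
Repeatedly applying $r$ and $\operatorname{Ann}$ to $(4)$ therefore produces exactly four distinct ideals, $(4), (2), (3), (6)$, after which the process closes up. Finding the maximum number of ideals obtainable this way, over all starting ideals and all commutative rings, is the analogue of Kuratowski's 14-set bound asked about above.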
{"url":"http://mathoverflow.net/questions/16363/kuratowski-closure-complement-problem-for-other-mathematical-objects?sort=votes","timestamp":"2014-04-23T20:26:13Z","content_type":null,"content_length":"59412","record_id":"<urn:uuid:a14fb122-7a27-4d4c-96d9-0ae6f0246805>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
Applying Occam’s razor in modeling cognition: A Bayesian approach Results 1 - 10 of 55 - Psychological Review , 2002 "... The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to ..." Cited by 74 (4 self) Add to MetaCart The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method of selecting among mathematical models of cognition known as minimum description length, which provides an intuitive and theoretically well-grounded understanding of why one model should be chosen. A central but elusive concept in model selection, complexity, can also be derived with the method. The adequacy of the method is demonstrated in 3 areas of cognitive modeling: psychophysics, information integration, and categorization. How should one choose among competing theoretical explanations of data? This question is at the heart of the scientific enterprise, regardless of whether verbal models are being tested in an experimental setting or computational models are being evaluated in simulations. A number of criteria have been proposed to assist in this endeavor, summarized nicely by Jacobs and Grainger - Psychonomic Bulletin & Review , 2004 "... An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates a ..." Cited by 31 (2 self) Add to MetaCart An evidence accumulation model of forced-choice decision making is proposed to unify the fast and frugal take the best (TTB) model and the alternative rational (RAT) model with which it is usually contrasted. The basic idea is to treat the TTB model as a sequential-sampling process that terminates as soon as any evidence in favor of a decision is found and the rational approach as a sequential-sampling process that terminates only when all available information has been assessed. The unified TTB and RAT models were tested in an experiment in which participants learned to make correct judgments for a set of real-world stimuli on the basis of feedback, and were then asked to make additional judgments without feedback for cases in which the TTB and the rational models made different predictions. The results show that, in both experiments, there was strong intraparticipant consistency in the use of either the TTB or the rational model but large interparticipant differences in which model was used. The unified model is shown to be able to capture the differences in decision making across participants in an interpretable way and is preferred by the minimum description length model selection criterion. A simple but pervasive type of decision requires choosing which of two alternatives has the greater (or the lesser) value on some variable of interest. Examples of - Psychological Review , 2007 "... A model of memory retrieval is described. 
The model embodies 4 main claims: (a) temporal memory— traces of items are represented in memory partly in terms of their temporal distance from the present; (b) scale-similarity—similar mechanisms govern retrieval from memory over many different timescales; ..." Cited by 29 (2 self) Add to MetaCart A model of memory retrieval is described. The model embodies 4 main claims: (a) temporal memory— traces of items are represented in memory partly in terms of their temporal distance from the present; (b) scale-similarity—similar mechanisms govern retrieval from memory over many different timescales; (c) local distinctiveness—performance on a range of memory tasks is determined by interference from near psychological neighbors; and (d) interference-based forgetting—all memory loss is due to interference and not trace decay. The model is applied to data on free recall and serial recall. The account emphasizes qualitative similarity in the retrieval principles involved in memory performance at all timescales, contrary to models that emphasize distinctions between short-term and long-term - Psychological Review , 2006 "... A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probab ..." Cited by 26 (7 self) Add to MetaCart A scheme is described for locally Bayesian parameter updating in models structured as successions of component functions. The essential idea is to back-propagate the target data to interior modules, such that an interior component’s target is the input to the next component that maximizes the probability of the next component’s target. Each layer then does locally Bayesian learning. The approach assumes online trial-by-trial learning. The resulting parameter updating is not globally Bayesian but can better capture human behavior. The approach is implemented for an associative learning model that first maps inputs to attentionally filtered inputs and then maps attentionally filtered inputs to outputs. The Bayesian updating allows the associative model to exhibit retrospective revaluation effects such as backward blocking and unovershadowing, which have been challenging for associative learning models. The back-propagation of target values to attention allows the model to show trial-order effects, including highlighting and differences in magnitude of forward and backward blocking, which have been challenging for Bayesian learning models. - Journal of Experimental Psychology: Learning, Memory, and Cognition , 2002 "... Exemplar theory was motivated by research that often used D. L. Medin and M. M. Schaffer’s (1978) 5/4 stimulus set. The exemplar model has seemed to fit categorization data from this stimulus set better than a prototype model can. Moreover, the exemplar model alone predicts a qualitative aspect of p ..." Cited by 24 (1 self) Add to MetaCart Exemplar theory was motivated by research that often used D. L. Medin and M. M. Schaffer’s (1978) 5/4 stimulus set. The exemplar model has seemed to fit categorization data from this stimulus set better than a prototype model can. Moreover, the exemplar model alone predicts a qualitative aspect of performance that participants sometimes show. In 2 experiments, the authors reexamined these findings. 
In both experiments, a prototype model fit participants ’ performance profiles better than an exemplar model did when comparable prototype and exemplar models were used. Moreover, even when participants showed the qualitative aspect of performance, the exemplar model explained it by making implausible assumptions about human attention and effort in categorization tasks. An independent assay of participants’ attentional strategies suggested that the description the exemplar model offers in such cases is incorrect. A review of 30 uses of the 5/4 stimulus set in the literature reinforces this suggestion. Humans ’ categorization processes are a central topic in cognitive psychology. One prominent theory—prototype theory—assumes that categories are represented by a central tendency that is abstracted from a person’s experience with a category’s exemplars "... For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational a ..." Cited by 23 (1 self) Add to MetaCart For over 200 years, philosophers and mathematicians have been using probability theory to describe human cognition. While the theory of probabilities was first developed as a means of analyzing games of chance, it quickly took on a larger and deeper significance as a formal account of how rational agents should reason in situations of uncertainty - Perception & Psychophysics , 2001 "... Four observers completed perceptual matching, identification, and categorization tasks using separable-dimension stimuli. A unified quantitative approach relating perceptual matching, identification, and categorization was proposed and tested. The approach derives from general recognition theory (As ..." Cited by 21 (17 self) Add to MetaCart Four observers completed perceptual matching, identification, and categorization tasks using separable-dimension stimuli. A unified quantitative approach relating perceptual matching, identification, and categorization was proposed and tested. The approach derives from general recognition theory (Ashby & Townsend, 1986) and provides a powerful method for quantifying the separate influences of perceptual processes and decisional processes within and across tasks. Good accounts of the identification data were obtained from an initial perceptual representation derived from perceptual matching. The same perceptual representation provided a good account of the categorization data, except when selective attention to one stimulus dimension was required. Selective attention altered the perceptual representation by decreasing the perceptual variance along the attended dimension. These findings suggest that a complete understanding of identification and categorization performance requires an understanding of perceptual and decisional processes. Implications for other psychological tasks are discussed. An important goal of psychological inquiry is to understand how behavior is influenced by the environmental stimulation and the task at hand. Information about the environment - Journal of Mathematical Psychology , 2001 "... It has been argued by Shepard that there is a robust psychological law that relates the distance between a pair of items in psychological space and the probability that they will be confused with each other. 
Specifically, the probability of confusion is a negative exponential function of the dista ..." Cited by 19 (4 self) Add to MetaCart It has been argued by Shepard that there is a robust psychological law that relates the distance between a pair of items in psychological space and the probability that they will be confused with each other. Specifically, the probability of confusion is a negative exponential function of the distance between the pair of items. In experimental contexts, distance is typically defined in terms of a multidimensional Euclidean space---but this assumption seems unlikely to hold for complex stimuli. We show that, nonetheless, the Universal Law of Generalization can be derived in the more complex setting of arbitrary stimuli, using a much more universal measure of distance. This universal distance is defined as the length of the shortest program that transforms the representations of the two items of interest into one another: the algorithmic information distance. It is universal in the sense that it minorizes every computable distance: it is the smallest computable distance. We show ... - Journal of Mathematical Psychology , 2004 "... We present a general sampling procedure to quantify model mimicry, defined as the ability of a model to account for data generated by a competing model. This sampling procedure, called the parametric bootstrap cross-fitting method (PBCM; cf. Williams (J. R. Statist. Soc. B 32 (1970) 350; Biometrics ..." Cited by 19 (3 self) Add to MetaCart We present a general sampling procedure to quantify model mimicry, defined as the ability of a model to account for data generated by a competing model. This sampling procedure, called the parametric bootstrap cross-fitting method (PBCM; cf. Williams (J. R. Statist. Soc. B 32 (1970) 350; Biometrics 26 (1970) 23)), generates distributions of differences in goodness-of-fit expected under each of the competing models. In the data informed version of the PBCM, the generating models have specific parameter values obtained by fitting the experimental data under consideration. The data informed difference distributions can be compared to the observed difference in goodness-of-fit to allow a quantification of model adequacy. In the data uninformed version of the PBCM, the generating models have a relatively broad range of parameter values based on prior knowledge. Application of both the data informed and the data uninformed PBCM is illustrated with several examples. r 2003 Elsevier Inc. All rights reserved. 1.
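For reference, the two quantities that the Journal of Mathematical Psychology abstract above describes verbally are commonly written as follows (the notation and the rate constant are mine, and additive logarithmic terms in the complexity measure are suppressed):
$$ P(\text{confuse } x \text{ and } y) \;\propto\; e^{-\lambda\, d(x, y)}, $$
the exponential generalization law, and
$$ E(x, y) \;=\; \max\{\, K(x \mid y),\; K(y \mid x) \,\}, $$
the algorithmic information distance: the length of the shortest program that transforms the representation of either item into that of the other, with $K(\cdot \mid \cdot)$ denoting conditional Kolmogorov complexity.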
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=157086","timestamp":"2014-04-20T21:27:27Z","content_type":null,"content_length":"40195","record_id":"<urn:uuid:72840f02-c908-495c-b882-9b1ea0874202>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
New to C programming and need help. 02-04-2009 #1 Registered User Join Date Feb 2009 New to C programming and need help. Ok so I am trying to write a basic program that will take a set of numbers, and determine the 2 largest numbers in the set. The stipulation is that I can use a while loop and some if statements. Oh and all the numbers have to be different (cannot have 2 equal numbers). I came up with the following code: int counter, num=0, largest=0, ndlargest=0, pnum=0; while (counter != 10) printf("Please enter a number (cannot be equal to any other number):"); scanf("%d", &num); if (num>largest){ \*this takes care of the largest number*\ largest = num; if (num>ndlargest) \*this takes care of the 2nd largest number but only if descending*\ ndlargest = num;} if (largest>pnum) \*should test the values for all cases other than the previous if*\ pnum = largest; Am I doing something wrong? Because when I consider the logic I think I have all the possibilities covered. However, when I compile and use 1,2,3 (counter != 3, for faster testing) I get 3 for largest value and 1 for 2nd largest. When I run the program with 3,2,1 as the input, I get the correct results. When I use 3,1,2, I get 3 as largest and 1 as 2nd largest. Any help would be appreciated Oh and I forgot to mention I use DMC as my compiler in DOS on a WindowsXP platform. You're missing braces after the while. Thusfar the while will run 10 times but the code after counter++; runs once. Also do you include negative numbers in "what's the largest"? If so then your logic is broken. Last edited by zacs7; 02-04-2009 at 03:42 AM. 1. you used var "counter" without being initialized. you sould initialized it before usuing it in the loop. while (counter != 10) This does nothing just increase the counter, witch however is not initialized. I recomend you to use "for" instead of "while" and don't forget the "{}". here's some more info about why stuff is recommended. if you don't set a value the variable has whatever value is at that address. it's probably not going to be 10, so not initializing will probably work, but it's safer to set the value before using a for loop is better because it's a counted loop and that's what the code needs. a while loop works too but most programmers don't read a while as a counted loop. you'd confuse your peers. curly braces are only needed for a block with more than one statement but if you don't know whether to use them or not, it's safer to use them. I apologize, I actually had the counter set to 0 in my code. I had to retype it from memory because the MWR (morale, welfare, recreation) at this base does not allow us to bring our own computers. However, as I was coming up with the code from memory I made the solution...When I got back I realized I had as the last statement so therefore the value was being replaced with a smaller digit and therefore giving me incorrect readings. You're missing braces after the while. Thusfar the while will run 10 times but the code after counter++; runs once. Also do you include negative numbers in "what's the largest"? If so then your logic is broken. I did have brackets in the while loop I just forgot to add them here. P.S. the code works fine now. But Zacs why is the logic wrong when I use negatives? 
This is the revised code: int counter=0, num=0, largest=0, ndlargest=0, pnum=0; while (counter != 10){ printf("Please enter a number (cannot be equal to any other number):"); scanf("%d", &num); if (num>largest){ \*this takes care of the largest number*\ largest = num; if (num>ndlargest) \*this takes care of the 2nd largest number but only if descending*\ ndlargest = num;} if (largest>pnum) \*should test the values for all cases other than the previous if*\ pnum = largest;} this code will not compile - your usage of braces is horrible while (counter != 10) printf("Please enter a number (cannot be equal to any other number):"); scanf("%d", &num); if (num>largest) { \*this takes care of the largest number*\ largest = num; if (num>ndlargest) \*this takes care of the 2nd largest number but only if descending*\ ndlargest = num; if (largest>pnum) \*should test the values for all cases other than the previous if*\ pnum = largest; The first 90% of a project takes 90% of the time, the last 10% takes the other 90% of the time. > But Zacs why is the logic wrong when I use negatives? Largest and second largest start off at 0. Given that and you enter -1, -5, -6. What do you think your program will say the second and largest are? Remember that 0 > -1. 02-04-2009 #2 02-04-2009 #3 02-04-2009 #4 Registered User Join Date Feb 2009 02-04-2009 #5 Registered User Join Date Feb 2009 02-04-2009 #6 02-05-2009 #7
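Pulling the thread's advice together (initialize the counter, brace the whole loop body, and do not seed the running maxima with 0, since 0 > -1 breaks the logic on all-negative input), one possible corrected version is sketched below. It keeps the original while-loop structure, assumes, as the assignment states, that all inputs are distinct, and uses INT_MIN as the initial value so negative numbers are handled; reading the first two inputs directly into the maxima would be an alternative.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int counter = 0, num;
    int largest = INT_MIN;      /* running maximum              */
    int ndlargest = INT_MIN;    /* running second-largest value */

    while (counter != 10) {
        printf("Please enter a number (cannot be equal to any other number): ");
        if (scanf("%d", &num) != 1)
            break;                        /* stop on bad input           */

        if (num > largest) {              /* new maximum: old one drops  */
            ndlargest = largest;          /* down to second place        */
            largest = num;
        } else if (num > ndlargest) {     /* between 2nd and the maximum */
            ndlargest = num;
        }
        counter++;
    }

    printf("Largest: %d\nSecond largest: %d\n", largest, ndlargest);
    return 0;
}

The key difference from the posted code is that a new maximum pushes the previous maximum down into ndlargest, so the order of entry (1, 2, 3 versus 3, 1, 2) no longer matters.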
{"url":"http://cboard.cprogramming.com/c-programming/111859-new-c-programming-need-help.html","timestamp":"2014-04-19T23:50:23Z","content_type":null,"content_length":"63903","record_id":"<urn:uuid:6804e089-4634-42dd-a9f5-c4082676e4df>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Thomas Industrial Library 8.2 DYNAMIC MODELS It is often convenient in dynamic analysis to create a simplified model of a complicated part. These models are sometimes considered to be a collection of point masses connected by massless rods, referred to as a lumped-parameter model, or just lumped model. For a lumped model of a rigid body to be dynamically equivalent to the original body, three things must be true: 1 The mass of the model must equal that of the original body. 2 The center of gravity must be in the same location as that of the original body. 3 The mass moment of inertia must equal that of the original body. 8.3 MASS Mass is not weight. Mass is an invariant property of a rigid body. The weight of the same body varies depending on the gravitational system in which it sits. We will assume the mass of our parts to be constant over time in our calculations. When designing cam-follower systems (or any machinery), we must first do a complete kinematic analysis of our design in order to obtain information about the rigid body accelerations of the moving parts. We can then use Newton’s second law to calculate the dynamic forces. But to do so, we need to know the masses of all the moving parts that have these known accelerations. If we have a design of the follower train done in a CAD program that will calculate masses and mass moments of inertia, then we are in good shape, as the data needed for dynamic calculations are available. Lacking that luxury, we will have to calculate estimates of the mass properties of the follower train to do the dynamics calculations. Absent a solids-modeler representation of your design, a first estimate of your parts’ masses can be obtained by assuming some reasonable shapes and sizes for all the parts and choosing appropriate materials. Then calculate the volume of each part and multiply its volume by the material’s mass density (not weight density) to obtain a first approximation of its mass. These mass values can then be used in Newton’s equation to estimate the dynamic forces. [Portions of this chapter were adapted from R. L. Norton, Design of Machinery 2ed, McGraw-Hill, 2001, with permission.] How will we know whether our chosen sizes and shapes of links are even acceptable, let alone optimal? Unfortunately, we will not know until we have carried the computations all the way through a complete stress and deflection analysis of the parts. It is often the case, especially with long, thin elements such as shafts or slender links, that the deflections of the parts under their dynamic loads will limit the design even at low stress levels. In other cases the stresses at design loads will be excessive. If we discover that the parts fail or deflect excessively under the dynamic forces, then we will have to go back to our original assumptions about the shapes, sizes, and materials of these parts, redesign them, and repeat the force, stress, and deflection analyses. Design is, unavoidably, an iterative process . We need the dynamic forces to do the stress analyses on our parts. (Stress analysis is addressed in a later chapter.) It is also worth noting that, unlike a static force situation in which a failed design might be fixed by adding more mass to the part to strengthen it, to do so in a dynamic-force situation can have a deleterious effect. More mass with the same acceleration will generate even higher forces and thus higher stresses. 
The machine designer often needs to remove mass (in the right places) from parts in order to reduce the stresses and deflections due to F = m a. The designer needs to have a good understanding of both material properties and stress and deflection analysis to properly shape and size parts for minimum mass while maximizing strength and stiffness to withstand dynamic forces. 8.4 MASS MOMENT AND CENTER OF GRAVITY When the mass of an object is distributed over some dimensions, it will possess a moment with respect to any axis of choice. Figure 8-1 shows a mass of general shape in an xyz axis system. A differential element of mass is also shown. The mass moment (first moment of mass) of the differential element is equal to the product of its mass and its distance along the axis of interest. With respect to the x , y, and z axes these are: dM x = r x 1 dm (8.2a) dM y = r y 1 dm (8.2b) dM z = r z 1 dm (8.2c) The radius from the axis of interest to the differential element is shown with an exponent of 1 to emphasize the reason for this property being called the first moment of mass. To obtain the mass moment of the entire body we integrate each of these expressions. M x = ∫ r x dm (8.3a) M y = ∫ r y dm (8.3b) M z = ∫ r z dm (8.3c) If the mass moment with respect to a particular axis is numerically zero, then that axis passes through the center of mass ( CM ) of the object, which for earthbound systems is coincident with its center of gravity ( CG ) . By definition, the summation of first moments about all axes through the center of gravity is zero. We will need to locate the CG of all moving bodies in our designs because the linear acceleration component of each body is calculated as acting at that point. It is often convenient to model a complicated shape as several interconnected simple shapes whose individual geometries allow easy computation of their masses and the locations of their local CGs . The global CG can then be found from the summation of the first moments of these simple shapes set equal to zero. Appendix C contains formulas for the volumes and locations of centers of gravity of some common shapes. Of course, if the system is designed in a solids modeling CAD package, then the mass and other properties can be automatically calculated. Figure 8-2 shows a simple model of a mallet broken into two cylindrical parts, the handle and the head, which have masses m h and m d , respectively. The individual centers of gravity of the two parts are at l d and l h /2, respectively, with respect to the axis ZZ . We want to find the location of the composite center of gravity of the mallet with respect to ZZ . Summing the first moments of the individual components about ZZ and setting them equal to the moment of the entire mass about ZZ gives. This equation can be solved for the distance d along the x axis, which, in this symmetrical example, is the only dimension of the composite CG not discernible by inspection. The y and z components of the composite CG are both zero. Dynamic models, composite center of gravity, and radius of gyration of a mallet
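The balance that the mallet example refers to (the handbook's displayed equation does not survive in this text-only excerpt) is the standard first-moment sum; with the variables defined above it reads
$$ m_h \frac{l_h}{2} + m_d\, l_d = (m_h + m_d)\, d, \qquad\text{so}\qquad d = \frac{m_h\, l_h / 2 + m_d\, l_d}{m_h + m_d}. $$
As a purely illustrative numerical check (the masses and lengths here are made up, not taken from the handbook): a 0.5-kg handle of length $l_h = 300$ mm with a 2-kg head whose own CG sits at $l_d = 280$ mm gives $d = (0.5 \cdot 150 + 2 \cdot 280)/2.5 = 254$ mm from the axis ZZ.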
{"url":"http://www.thomasglobal.com/library/UI/Cams/Cam%20Design%20and%20Manufacturing%20Handbook/Dynamics%20of%20Cam%20Systems_2/2/default.aspx","timestamp":"2014-04-21T02:28:27Z","content_type":null,"content_length":"159136","record_id":"<urn:uuid:a64a8757-5b6f-4f8e-ade0-6186d45ff58b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinate system
From Wiki.GIS.com

A coordinate system is a system for specifying points using coordinates measured in some specified way. The simplest coordinate system consists of coordinate axes oriented perpendicularly to each other, known as Cartesian coordinates. Depending on the type of problem under consideration, coordinate systems possessing special properties may allow particularly simple solution.^[1]

In mathematics and its applications, a coordinate system is a system for assigning an n-tuple of numbers or scalars to each point in an n-dimensional space. This concept is part of the theory of manifolds.^[2] "Scalars" in many cases means real numbers, but, depending on context, can mean complex numbers or elements of some other commutative ring. For complicated spaces, it is often not possible to provide one consistent practical coordinate system for the entire space. In this case, a collection of coordinate systems, called charts, are put together to form an atlas covering the whole space. A simple example (which motivates the terminology) is the surface of the earth.

In informal usage, coordinate systems can have singularities: these are points where one or more of the coordinates is not well-defined. For example, the origin in the polar coordinate system (r, θ) on the plane is singular, because although the radial coordinate has a well-defined value (r = 0) at the origin, θ can be any angle, and so is not well-defined at the origin.

Examples

The prototypical example of a coordinate system is the Cartesian coordinate system, which describes the position of a point P in the Euclidean space R^n by an n-tuple P = (r_1, ..., r_n) of real numbers r_1, ..., r_n. These numbers r_1, ..., r_n are called the coordinates of the point P.

If a subset S of a Euclidean space is mapped continuously onto another topological space, this defines coordinates in the image of S. That can be called a parametrization of the image, since it assigns numbers to points. That correspondence is unique only if the mapping is bijective.

The system of assigning longitude and latitude to geographical locations is a coordinate system. In this case the parametrization fails to be unique at the north and south poles.

Defining a coordinate system based on another one

In geometry and kinematics, coordinate systems are used not only to describe the (linear) position of points, but also to describe the angular position of axes, planes, and rigid bodies. In the latter case, the orientation of a second (typically referred to as "local") coordinate system, fixed to the node, is defined based on the first (typically referred to as "global" or "world" coordinate system). For instance, the orientation of a rigid body can be represented by an orientation matrix, which includes, in its three columns, the Cartesian coordinates of three points. These points are used to define the orientation of the axes of the local system; they are the tips of three unit vectors aligned with those axes.

Transformations

A coordinate transformation is a conversion from one system to another, to describe the same space.
With every bijection from the space to itself two coordinate transformations can be associated:

• such that the new coordinates of the image of each point are the same as the old coordinates of the original point (the formulas for the mapping are the inverse of those for the coordinate transformation)
• such that the old coordinates of the image of each point are the same as the new coordinates of the original point (the formulas for the mapping are the same as those for the coordinate transformation)

For example, in one dimension, if the mapping is a translation of 3 to the right, the first moves the origin from 0 to 3, so that the coordinate of each point becomes 3 less, while the second moves the origin from 0 to -3, so that the coordinate of each point becomes 3 more.

Systems commonly used

Some coordinate systems are the following:

• The Cartesian coordinate system (also called the "rectangular coordinate system"), which, for three-dimensional flat space, uses three numbers representing distances.
• Curvilinear coordinates are a generalization of coordinate systems; the system is based on the intersection of curves.
• The polar coordinate systems:
  □ Circular coordinate system (commonly referred to as the polar coordinate system) represents a point in the plane by an angle and a distance from the origin.
  □ Cylindrical coordinate system represents a point in space by an angle, a distance from the origin and a height.
  □ Spherical coordinate system represents a point in space with two angles and a distance from the origin.
• Plücker coordinates are a way of representing lines in 3D Euclidean space using a six-tuple of numbers as homogeneous coordinates.
• Generalized coordinates are used in the Lagrangian treatment of mechanics.
• Canonical coordinates are used in the Hamiltonian treatment of mechanics.
• Parallel coordinates visualise a point in n-dimensional space as a polyline connecting points on n vertical lines.

Geographic Coordinate Systems

Coordinate systems can be projected onto the earth to create a geographic coordinate system, which describes every location on the globe using three values, usually in a spherical coordinate system. Most commonly, the first two numbers in the coordinate are latitude and longitude. Latitude is measured in degrees north and south of the equator, while longitude is measured in degrees east and west of the Prime Meridian. The third value in the coordinate is the elevation of the given point. Coordinates can be described using different frames of reference, or datums, that are designed to be more accurate for different areas of the earth. The most common datum encountered is the World Geodetic System 1984, which is used in GPS applications. Geographic coordinate systems vary from projected coordinate systems in that they reference the earth as a 3D object measured in degrees, rather than using a 2D projection of the earth's surface to measure it in meters or feet.

See also

• Active and passive transformation
• Galilean transformation
• Coordinate-free approach
• Nomogram, graphical representations of different coordinate systems

References and notes

1. Weisstein, Eric W. Coordinate System. From MathWorld — A Wolfram Web Resource. Accessed 11 May 2010.
2. Shigeyuki Morita, Teruko Nagase, Katsumi Nomizu; Geometry of Differential Forms. American Mathematical Society Bookstore, 2001.

External links

Look up coordinate in Wiktionary, the free dictionary.
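Because the polar, cylindrical, and spherical systems above are described only in words, a short Python sketch of the standard conversions may help. It is a generic illustration with one common angle convention assumed, not code from Wiki.GIS.com.

import math

def polar_to_cartesian(r, theta):
    """Plane polar (r, theta) -> Cartesian (x, y); theta in radians."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta). At the origin r = 0 and theta is undefined;
    atan2 simply returns 0 there, which illustrates the singularity mentioned above."""
    return math.hypot(x, y), math.atan2(y, x)

def spherical_to_cartesian(rho, theta, phi):
    """Spherical (radius rho, azimuth theta, polar angle phi) -> Cartesian (x, y, z).
    Conventions for the two angles differ between texts; this is one common choice."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

# Round trip on a sample point
x, y = polar_to_cartesian(2.0, math.pi / 3)
print(cartesian_to_polar(x, y))   # ~ (2.0, 1.0471975...)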
{"url":"http://wiki.gis.com/wiki/index.php/Coordinate_system","timestamp":"2014-04-16T20:12:33Z","content_type":null,"content_length":"34242","record_id":"<urn:uuid:26e67f7c-4c74-468d-bd6d-d84941a4c6a6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Venice, CA Math Tutor
Find a Venice, CA Math Tutor

...From my experience, I have found many creative ways of explaining common problems. I love getting to the point when the student finally understands the concept and tells me that they want to finish the problem on their own. I look forward to helping you with your academic needs.
14 Subjects: including SAT math, algebra 1, algebra 2, calculus

...I'm capable of reducing the material into digestible chunks without losing sight of the big picture. I have been happily tutoring Statistics for the last 3 years and look forward to many more fruitful tutoring sessions. My credentials include the following 11 credit units: Intro to Probability and Statistics, Statistics with Computer Application, and Graduate level Advanced
8 Subjects: including SPSS, Microsoft Excel, prealgebra, statistics

...I specialize in tutoring math because it is my passion and my expertise, and I hope I can help you enjoy it too. Math is a daunting subject for many students because it is cumulative. If you missed a past topic or simply forgot it, new material can quickly become confusing or meaningless.
14 Subjects: including geometry, trigonometry, statistics, SAT math

...Everyone reads. Right? Yes, but how many really know HOW to read properly?
39 Subjects: including calculus, Microsoft Excel, organic chemistry, physics

...For students with dyslexia, I always provide a card with the correct directions of the letters and numbers they have difficulty with on their desks and on the board for reference. I quietly correct them when they make reversal mistakes and do not count it against them when they reverse during te...
13 Subjects: including prealgebra, reading, biology, English

Related Venice, CA Tutors: Accounting, ACT, Algebra, Algebra 2, Calculus, Geometry, Math, Prealgebra, Precalculus, SAT, SAT Math, Science, Statistics, Trigonometry
{"url":"http://www.purplemath.com/Venice_CA_Math_tutors.php","timestamp":"2014-04-16T16:25:25Z","content_type":null,"content_length":"23687","record_id":"<urn:uuid:2ea2a90e-ecd2-4cf4-a0db-9f093e444c56>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
The getConditionalProbability function you've developed operates on counts and frequencies rather than on probabilities. In reading the literature on Bayesian reasoning, you will notice that the enumeration method for computing P(A | B) is only briefly discussed. Most authors quickly move on to describing how P(A | B) can be formulated using terms denoting probability values rather than frequency counts. For example, you can recast the formula for computing P(A | B) using such probability terms as:

P(A | B) = P(A & B) / P(B)

The advantage of recasting the formula using terms denoting probabilities instead of frequency counts arises because in practice, you often don't have access to a data set you can use to derive conditional probability estimates through an enumeration-of-cases method. Instead, you often have access to higher-level summary information from past studies in the form of percentages and probabilities. With the available information, the challenge then becomes finding a way to use these probability estimates instead to compute the conditional probabilities you are interested in. Recasting the conditional probability formula in terms of probabilities allows you to make inferences based on related probability information that is more readily accessible.

The enumeration method might still be regarded as the most basic and intuitive method for computing a conditional probability. In Thomas Bayes' "Essay on the Doctrine of Chances," he uses enumeration to arrive at the conclusion that P( 2nd Event = b | 1st Event = a ) is equal to [P / N] / [ a / N], which is equal to P / a, which one can also denote as {a & b} / {a}:

Figure 1. Graphical representation of relations

Another reason why it is important to be aware of frequency versus probability format issues is that it has been demonstrated by Gerd Gigerenzer (and others) that people are better at reasoning in accordance with prescriptive Bayesian rules of inference when background information is presented in terms of frequencies of cases (1 in 10 cases) rather than probabilities (10 percent probability). A practical application of this research is that medical students are now being taught to communicate risk information in terms of frequencies of cases instead of probabilities, making it easier for patients to make better informed judgements about what actions are warranted given the test results.

Joint probability

The most basic method for computing a conditional probability using a probability format is:

P(A | B) = P(A & B) / P(B)

This probability format is identical to the frequency format, except for the probability operator P( ) surrounding the numerator and denominator terms. The P(A & B) term denotes the joint probability of A and B occurring together. To understand how the joint probability P(A & B) can be computed from cross-tabulated data, consider the following hypothetical data (taken from pp. 147-48 of Grinstead and Snell's online textbook):

│          │ -Smokes │ +Smokes │ Totals │
│ -Cancer  │   40    │   10    │   50   │
│ +Cancer  │    7    │    3    │   10   │
│ Totals   │   47    │   13    │   60   │

To convert this table of frequencies to a table of probabilities, you divide each cell frequency by the total frequency (60). Note that dividing by the total frequency also ensures that the Cancer x Smokes cell probabilities sum to 1 and permits you to refer to the interior (shaded) cells of the table below as the joint probability distribution of Cancer and Smoking.
│          │ -Smokes │ +Smokes │ Totals │
│ -Cancer  │  40/60  │  10/60  │  50/60 │
│ +Cancer  │   7/60  │   3/60  │  10/60 │
│ Totals   │  47/60  │  13/60  │  60/60 │

To compute the probability of cancer given that a person smokes, P(+Cancer | +Smokes), you can simply substitute the values from this table into the above formula as follows:

P(+Cancer | +Smokes) = (3 / 60) / (13 / 60) = 0.05 / 0.217 = 0.23

Note that you could have derived this value from the table of frequencies as well:

P(+Cancer | +Smokes) = 3 / 13 = 0.23

How do you interpret this result? Using the recommended approach of communicating risk in terms of frequencies, you might say that of the next 100 smokers you encounter, you can expect 23 of them to experience cancer in their lifetime.

What is the probability of getting cancer if you do not smoke?

P(+Cancer | -Smokes) = (7 / 60) / (47 / 60) = 0.117 / 0.783 = 0.15

So it appears that you are more likely to get cancer if you smoke than if you do not smoke, even though the tallies appearing in the table might not have initially given you that impression. It is interesting to speculate on what the true conditional probabilities might be for various types of cancer given various criteria for defining someone as a smoker. A "cohort" research methodology would also require you to equate smokers and non-smokers on other variables like age, gender, and weight so that smoking, and not these other covariates, can be isolated as the root cause of the different cancer rates.

To summarize, you can compute a conditional probability P(+Cancer | +Smokes) from joint distribution data by dividing the relevant joint probability P(+Cancer & +Smokes) by the relevant marginal probability P(+Smokes). As you might imagine, it is often easier and more feasible to derive estimates of a conditional probability from summary tables like this, rather than expecting to apply more data-intensive enumeration methods.
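The original article implements this in PHP; the short Python sketch below (illustrative only, with made-up function names) re-derives the same two conditional probabilities directly from the frequency table, using P(A | B) = P(A & B) / P(B).

# Conditional probability from a frequency table, mirroring the smoking/cancer example.
counts = {
    ("-Cancer", "-Smokes"): 40, ("-Cancer", "+Smokes"): 10,
    ("+Cancer", "-Smokes"): 7,  ("+Cancer", "+Smokes"): 3,
}
total = sum(counts.values())  # 60

def p_joint(a, b):
    """Joint probability P(A & B) estimated as a relative frequency."""
    return counts[(a, b)] / total

def p_marginal_smokes(b):
    """Marginal probability P(B) for the smoking variable."""
    return sum(counts[(a2, b)] for a2 in ("-Cancer", "+Cancer")) / total

def p_conditional(a, b):
    """P(A | B) = P(A & B) / P(B)."""
    return p_joint(a, b) / p_marginal_smokes(b)

print(round(p_conditional("+Cancer", "+Smokes"), 2))  # 0.23
print(round(p_conditional("+Cancer", "-Smokes"), 2))  # 0.15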
{"url":"http://www.devshed.com/c/a/PHP/Implement-Bayesian-inference-using-PHP-Part-1/4/","timestamp":"2014-04-16T22:17:56Z","content_type":null,"content_length":"43119","record_id":"<urn:uuid:c6f141b0-9c1a-4137-92d7-3af5b0c93425>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
book-stacking problem

How much of an overhang can be achieved by stacking books on a table? Assume each book is one unit long. To balance one book on a table, the center of gravity of the book must be somewhere over the table; to achieve the maximum overhang, the center of gravity should be just over the table's edge. The maximum overhang with one book is obviously 1/2 unit.

For two books, the center of gravity of the first should be directly over the edge of the second, and the center of gravity of the stack of two books should be directly above the edge of the table. The center of gravity of the stack of two books is at the midpoint of the books' overlap, or (1 + 1/2)/2, which is 3/4 unit from the far end of the top book.

It turns out that the overhangs are related to the harmonic numbers H_n (see harmonic sequence), which are defined as 1 + 1/2 + 1/3 + ... + 1/n: the maximum overhang possible for n books is H_n/2. With four books, the overhang (1 + 1/2 + 1/3 + 1/4)/2 exceeds 1, so that no part of the top book is directly over the table. With 31 books, the overhang is 2.0136 book lengths.

Related category: GAMES AND PUZZLES
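A few lines of Python make the H_n/2 claim easy to verify numerically; this sketch is an illustration added here, not part of the encyclopedia entry.

from fractions import Fraction

def max_overhang(n):
    """Maximum overhang (in book lengths) for n books: half the n-th harmonic number."""
    harmonic = sum(Fraction(1, k) for k in range(1, n + 1))
    return harmonic / 2

print(max_overhang(2))                    # 3/4
print(float(max_overhang(4)))             # ~1.0417, so no part of the top book is over the table
print(round(float(max_overhang(31)), 4))  # 2.0136, matching the figure quoted above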
{"url":"http://www.daviddarling.info/encyclopedia/B/book-stacking_problem.html","timestamp":"2014-04-17T18:59:10Z","content_type":null,"content_length":"6904","record_id":"<urn:uuid:622f7679-92e0-41ab-9f95-a884800869c0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
The Galactic Question Center
Wednesday, January 11, 2006

Pearls before swine

Is it possible to win this game?

Yes! I did it by taking 2 from the top, then 3 from the bottom, then 1 from the top, then two from the bottom.

The only chance for winning is if you start. There are 3 rows (3, 4 and 5 pearls). Any move you make will lead to having one of the following combinations:
1 - 2 - 3
3 - 3
4 - 4
1 - 1 - 1
Then you lose.

Not that I can tell. As long as you're only allowed to pick from an even number of balls, you'll always lose. Plus the computer has too much control. If there was a set number of balls that he had to take every time, you'd be able to win.

The end-game scenario always has 4 pearls. There's no way for the player that goes first to win.

I played a trick similar to this on folks at parties (I used dollar bills with "winner takes all" as the hook). It was amazing how angry some people became. It was equally amazing how quickly competition short-circuited their brain. Only a very small number of people actually said, "You go first this time". Of that small number, a large number were female.

yes and no. since you always have to go first you will always lose. After you get rid of your pearls, count the number of pearls the guy holding them removes. He always makes it an even number total between yours and his (until the last move). Try playing with someone where THEY go first and use this rule. You will always win!

Actually I realized later (before reading later comments) that there was a way of solving it. Except that I wasn't able to solve it. Not persistent enough. Bummer.

Alex got it.. also if you don't take any balls your opponent will be left with all the balls, including the last one.

Hey guys, this is actually a form of an ancient game called NIM, and it is completely deterministic. If you know the strategy, you can win every time, even his more advanced games II & III (I located the strategy and beat him continuously up to level 10 in game III and got bored). Those of you that are interested, and know how to convert the number of balls in a row to Base 2, take a look here: http://www.answers.com/topic/nim-2 . Good Luck! My email is dbynoe@qcsit.com if you want to contact me; the user name that I wanted is in use, and I decided not to bother :)

you guys need to use ur head, you can win every single time if u go first. it starts off 3,4,5. if u make it 1,4,5... there is no possible way for him to win. he would usually make it 1,3,5. you easily change that to 1,3,2. He makes that.. 2,2,1. you change it to 2,2. There is no possible move for him to make to win.

Yes! I did it by taking 2 from the top, then 3 from the bottom, then 1 from the top, then two from the bottom. — certified, this really works, but how it works, still I can't tell
To win, the bits representing the Nim sum must be set to zeros on your turn. If you can't make it zero that means you've made a mistake on a previous move and are in danger of losing unless your opponent makes a mistake. Pull the pearls from a single row that will allow you to set the bits representing the Nim sum to all zeros and don't disturb existing bits already at zero! Once you set the bits on the pearl row to make the Nim sum zero, the sum of the bits on the pearl row will tell you how many pearls to leave on that row to end your turn. If the game starts out with the NIM sum already at zero make your opponent goes first (to disturb the zero). He will then automatically lose unless you make a mistake. If you fail to make him go first in this situation Juan will always win. There's a little more to the strategy but this info will get you about a 90% win. It can get tricky sometimes times to know which row to pull from but this is rare. A little observation can help here. There's math for that situation too. Hmmmmm....... what if i win it the loser way? Open 2 windows and just follow his move in one of the windows. You'll definitely win at one of the two games. Heck, he's playing with himself, but oh wells, i don't really care. \(-__-)/ u also can win if take 3 balls from middle, then take all from bottom,so the ball has left 1 from middle and 3 from top..computer will be stupid because not take all from top.. it take only 1 from top.. so u can take all from top... Well I have tried this method on my app Pearls Before Swine, and it does not work! I will have to check to see if the computer is any different! :( Post a Comment << Home
{"url":"http://galquest.blogspot.com/2006/01/pearls-before-swine.html","timestamp":"2014-04-16T07:32:08Z","content_type":null,"content_length":"39697","record_id":"<urn:uuid:06f3c598-9cf7-41d6-b471-0c1e6b41c51c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
vector lies in the plane

October 10th 2011, 11:58 AM
vector lies in the plane
What does it mean to say a vector lies in a plane in R^3? Does it mean its initial point lies on the plane?

October 10th 2011, 12:07 PM
Re: vector lies in the plane
Usually it means that the vector can be written as a linear combination of two other vectors you know beforehand. One vector determines a line, two vectors determine a plane. There are two different ways of thinking of a vector: one is simply as a point. Another way is as an "arrow" going from one point in space to another. In this view, all "arrows" with the same length and direction are considered "equal". The bridge between these two views is to identify a point in space with the arrow that goes from the origin to that point.

October 10th 2011, 12:18 PM
Re: vector lies in the plane
I was thinking how to visualize it.

October 10th 2011, 12:25 PM
Re: vector lies in the plane
The wording of this question is problematic. In one sense all vectors have the same initial point: $(0,0,0).$ Vectors are really just equivalence classes, defined by length and direction. Thus, as reply #2 suggests, any two vectors can be thought of as being in some plane, though that may not be a unique plane. It is true that any three non-colinear points determine a unique plane.

October 10th 2011, 12:32 PM
Re: vector lies in the plane
The plane is unique if you also say it must pass through (0,0,0) (which goes along with thinking of all vectors as having initial point (0,0,0)). If one regards each vector as "unique", then one has the counter-intuitive result that vectors do not form a vector space (vector addition not being defined unless the tail of one is at the head of another). Put more technically, an affine plane isn't quite the same thing as a linear plane. In the ordinary low dimensions that most people are familiar with: the line y = ax + b is not a linear function.

October 10th 2011, 12:38 PM
Re: vector lies in the plane

October 10th 2011, 12:58 PM
Re: vector lies in the plane
Valid point. The vectors must be linearly independent, or we have a "degenerate plane" (a line).
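A concrete way to test the "linear combination" answer numerically: a vector v lies in the plane through the origin spanned by u and w exactly when v is perpendicular to u × w. The Python sketch below illustrates that criterion; it is an added example, not part of the thread.

def cross(u, w):
    """Cross product of two 3-vectors."""
    return (u[1]*w[2] - u[2]*w[1],
            u[2]*w[0] - u[0]*w[2],
            u[0]*w[1] - u[1]*w[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def lies_in_span(v, u, w, tol=1e-9):
    """True if v is a linear combination of u and w, i.e. v is perpendicular to u x w.
    If u and w are linearly dependent, the 'plane' degenerates to a line, as noted above."""
    n = cross(u, w)
    if all(abs(c) < tol for c in n):
        raise ValueError("u and w are linearly dependent; they span a line, not a plane")
    return abs(dot(v, n)) < tol

print(lies_in_span((5, 7, 0), (1, 0, 0), (0, 1, 0)))   # True: v = 5u + 7w
print(lies_in_span((1, 1, 1), (1, 0, 0), (0, 1, 0)))   # False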
{"url":"http://mathhelpforum.com/advanced-algebra/190003-vector-lies-plane-print.html","timestamp":"2014-04-16T14:01:44Z","content_type":null,"content_length":"7965","record_id":"<urn:uuid:b7e77210-5a65-429c-9610-3f925f0f568b>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00186-ip-10-147-4-33.ec2.internal.warc.gz"}