Circular Pipes A surveyor places a metre stick against the base of a large circular underground pipe and finds that the midpoint of the stick is 8 cm from the pipe wall. Use this information to find the inside diameter of the pipe. Problem ID: 36 (Feb 2001) Difficulty: 2 Star
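For the curious, the standard chord-sagitta relation settles the puzzle; here is a quick numeric check (the function name is mine, not from the problem page):

```python
# Chord-sagitta relation: for a chord of length c whose midpoint sits a
# distance h (the sagitta) from the arc, the radius r satisfies
#   r**2 = (c/2)**2 + (r - h)**2   =>   r = c**2 / (8*h) + h/2.

def diameter_from_chord(c, h):
    """Inside diameter of a circle given chord length c and sagitta h."""
    r = c ** 2 / (8 * h) + h / 2
    return 2 * r

# Metre stick (100 cm chord), midpoint 8 cm from the pipe wall.
print(diameter_from_chord(100, 8))  # 320.5 cm
```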
The name means "fishing" and the game uses two sets of Chinese dominoes (64 tiles in all) with two or three players.

The Deal

The tiles are arranged in a woodpile, four tiles high. Four stacks (16 tiles) are taken from one end and laid face up on the table. Each player then draws his hand: with three players, each draws two stacks (8 tiles); with two players, each draws three stacks (12 tiles).

The Play

The idea is to match a tile in your hand to a tile that is face up on the table. Two tiles match when their pips have the same total, regardless of how the pips are arranged. You can also match the [2-1] and [4-2] (Gee Joon or "supreme pair") together. If a player is dealt a pair of [6-6], he may lay them down from his hand immediately. On his turn, each player matches one of his tiles to one of the exposed tiles and collects the pair in front of him. Whether or not he was able to make a match, he draws a single tile from the top stack at the end of the woodpile. If this new tile matches an exposed tile, the player collects this pair immediately. If it does not match an exposed tile, he leaves it face up on the table. If there are two identical tiles on the table, a player who has a third identical tile can place it with the first two, and later a player can capture these three with the fourth identical tile. This is clearly only possible with the civil series tiles - the doubles, [3-1], [5-1], [6-1], [6-4] and [6-5] - since these are the only tiles of which there are four identical copies in the set. Play continues around the table until the woodpile is empty.

Scoring

The little fish are tiles with fewer than eight pips. They score one point for every red spot they have, and the score is then raised to the nearest multiple of ten; for example, 3 red spots become 10 points. The big fish are tiles with eight or more pips. They score two points for every pip they have, regardless of color. Each player adds these two scores together to get his final score.
The winner is the player with the highest score, and each of the other players pays him the difference between his final score and their own. Other Pages An earlier version of these rules of Tiu U was published in the Game Cabinet.
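The scoring rule above is mechanical enough to sketch in code. This is a rough illustration of the stated rule only: since the rules here don't say which spots on each tile are red, the function takes the pip total and the red-spot count as inputs supplied by the caller.

```python
import math

def tile_score(total_pips, red_spots):
    """Score one captured tile under the rules described above.

    Little fish (fewer than eight pips): one point per red spot,
    raised to the nearest multiple of ten.
    Big fish (eight or more pips): two points per pip, any color.
    """
    if total_pips < 8:
        return math.ceil(red_spots / 10) * 10
    return 2 * total_pips

# A little fish with 3 red spots scores 10; a [6-6] tile scores 2 * 12 = 24.
print(tile_score(3, 3), tile_score(12, 0))
```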
Stochastic Analysis Common - Leonid Petrov (Northeastern University) - Random 3D surfaces and their asymptotic behavior
Mathematics - Lecture/Discussion
Monday, December 9, 2013, 4:00 PM-5:00 PM
ABSTRACT: I will discuss the probabilistic model of randomly tiling a hexagon drawn on the regular triangular lattice by lozenges of three types (equivalent formulations: dimer models on the honeycomb lattice, or random 3D stepped surfaces glued out of 1x1x1 boxes). This model has received significant attention over the past 20 years (the first results - the computation of the partition function - date back to P. MacMahon, 100+ years ago). Kenyon, Okounkov, and their co-authors (1998-2007) proved the law of large numbers: when the polygon is fixed and the mesh of the lattice goes to zero, the random 3D surface concentrates around a deterministic limit shape, which is algebraic. I will discuss finer asymptotics: local geometry, behavior of interfaces between phases (which manifests the Kardar-Parisi-Zhang universality), and global fluctuations of random surfaces (described by the Gaussian Free Field), as well as dynamical models associated with random tilings.
Suggested Audiences: Adult, College
E-mail: ma-chair@wpi.edu
Last Modified: November 26, 2013 at 2:56 PM
How Quantum Field Theory Becomes “Effective”

Ken Wilson, Nobel Laureate and deep thinker about quantum field theory, died last week. He was a true giant of theoretical physics, although not someone with a lot of public name recognition. John Preskill wrote a great post about Wilson’s achievements, to which there’s not much I can add. But it might be fun to just do a general discussion of the idea of “effective field theory,” which is crucial to modern physics and owes a lot of its present form to Wilson’s work. (If you want something more technical, you could do worse than Joe Polchinski’s lectures.)

So: quantum field theory comes from starting with a theory of fields, and applying the rules of quantum mechanics. A field is simply a mathematical object that is defined by its value at every point in space and time. (As opposed to a particle, which has one position and no reality anywhere else.) For simplicity let’s think about a “scalar” field, which is one that simply has a value, rather than also having a direction (like the electric field) or any other structure. The Higgs boson is a particle associated with a scalar field. Following the example of every quantum field theory textbook ever written, let’s denote our scalar field φ(x, t).

What happens when you do quantum mechanics to such a field? Remarkably, it turns into a collection of particles. That is, we can express the quantum state of the field as a superposition of different possibilities: no particles, one particle (with certain momentum), two particles, etc. (The collection of all these possibilities is known as “Fock space.”) It’s much like an electron orbiting an atomic nucleus, which classically could be anywhere, but in quantum mechanics takes on certain discrete energy levels. Classically the field has a value everywhere, but quantum-mechanically the field can be thought of as a way of keeping track of an arbitrary collection of particles, including their appearance and disappearance and interaction.
So one way of describing what the field does is to talk about these particle interactions. That’s where Feynman diagrams come in. The quantum field describes the amplitude (which we would square to get the probability) that there is one particle, two particles, whatever. And one such state can evolve into another state; e.g., a particle can decay, as when a neutron decays to a proton, electron, and an anti-neutrino. The particles associated with our scalar field φ will be spinless bosons, like the Higgs. So we might be interested, for example, in a process by which one boson decays into two bosons. That’s represented by this Feynman diagram: Think of the picture, with time running left to right, as representing one particle converting into two. Crucially, it’s not simply a reminder that this process can happen; the rules of quantum field theory give explicit instructions for associating every such diagram with a number, which we can use to calculate the probability that this process actually occurs. (Admittedly, it will never happen that one boson decays into two bosons of exactly the same type; that would violate energy conservation. But one heavy particle can decay into different, lighter particles. We are just keeping things simple by only working with one kind of particle in our examples.) Note also that we can rotate the legs of the diagram in different ways to get other allowed processes, like two particles combining into one. This diagram, sadly, doesn’t give us the complete answer to our question of how often one particle converts into two; it can be thought of as the first (and hopefully largest) term in an infinite series expansion. But the whole expansion can be built up in terms of Feynman diagrams, and each diagram can be constructed by starting with the basic “vertices” like the picture just shown and gluing them together in different ways. The vertex in this case is very simple: three lines meeting at a point. 
We can take three such vertices and glue them together to make a different diagram, but still with one particle coming in and two coming out. This is called a “loop diagram,” for what are hopefully obvious reasons. The lines inside the diagram, which move around the loop rather than entering or exiting at the left and right, correspond to virtual particles (or, even better, quantum fluctuations in the underlying field). At each vertex, momentum is conserved; the momentum coming in from the left must equal the momentum going out toward the right. In a loop diagram, unlike the single vertex, that leaves us with some ambiguity; different amounts of momentum can move along the lower part of the loop vs. the upper part, as long as they all recombine at the end to give the same answer we started with. Therefore, to calculate the quantum amplitude associated with this diagram, we need to do an integral over all the possible ways the momentum can be split up. That’s why loop diagrams are generally more difficult to calculate, and diagrams with many loops are notoriously nasty beasts. This process never ends; here is a two-loop diagram constructed from five copies of our basic vertex: The only reason this procedure might be useful is if each more complicated diagram gives a successively smaller contribution to the overall result, and indeed that can be the case. (It is the case, for example, in quantum electrodynamics, which is why we can calculate things to exquisite accuracy in that theory.) Remember that our original vertex came associated with a number; that number is just the coupling constant for our theory, which tells us how strongly the particle is interacting (in this case, with itself). In our more complicated diagrams, the vertex appears multiple times, and the resulting quantum amplitude is proportional to the coupling constant raised to the power of the number of vertices. 
So, if the coupling constant is less than one, that number gets smaller and smaller as the diagrams become more and more complicated. In practice, you can often get very accurate results from just the simplest Feynman diagrams. (In electrodynamics, that’s because the fine structure constant is a small number.) When that happens, we say the theory is “perturbative,” because we’re really doing perturbation theory — starting with the idea that particles usually just travel along without interacting, then adding simple interactions, then successively more complicated ones. When the coupling constant is greater than one, the theory is “strongly coupled” or non-perturbative, and we have to be more clever. So far, so good. Now for the bad news. In many cases of interest, when we actually do the integral over momentum in the loop diagrams, we get an answer that is not at all small, even when multiplied by appropriate powers of the coupling constant — in fact, the answer can be infinite! Generally a sign that something has gone terribly wrong. The great contribution of Feynman, Schwinger, Tomonaga, and Dyson was to show that we didn’t necessarily have to despair at this apparent disaster: certain quantum field theories can be “renormalized” to get sensible answers. Renormalization has gained a reputation as being somewhat mysterious, perhaps even disreputable, but it’s really not a big deal: it’s just a matter of taking a limit in a careful way so that we get finite answers for perfectly reasonable physical questions. One of Wilson’s great contributions was to make the physical meaning of renormalization more clear. Let’s think a bit about why the loop diagrams are giving infinite answers — why, as we say, they are divergent. It’s because we were integrating (summing) over all the possible ways that momentum could move through the loops. And in particular, on closer inspection, the divergences arise from allowing the momentum to get arbitrarily large. 
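To make the g &lt; 1 versus g &gt; 1 contrast concrete, here is a toy numeric illustration, with every diagram's coefficient crudely set to 1 (a made-up series, not any actual theory): the partial sums settle down quickly when the coupling is small, and run away when it is large.

```python
def partial_sums(g, n_max=8):
    """Partial sums of a toy perturbation series sum_n g**n, one term per
    diagram order, with every coefficient set to 1 (pure illustration)."""
    total, out = 0.0, []
    for n in range(1, n_max + 1):
        total += g ** n
        out.append(total)
    return out

print(partial_sums(0.1))  # settles quickly: perturbation theory works
print(partial_sums(3.0))  # each order makes it worse: strongly coupled
```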
Momentum is a vector, so even if a finite amount comes into the loop, we can always divide it into an arbitrarily large amount going one way and a similarly large amount pointing in the opposite direction, so the sum is constant. But if you think about it, “large momentum” corresponds to “high energies” or (in quantum mechanics) “short distances.” And it’s at high energies/short distances where all sorts of funny things can be lurking — from new kinds of particles we haven’t yet discovered (and therefore don’t include in our Feynman diagrams) to a breakdown of spacetime itself. So maybe we shouldn’t expect this sum over arbitrarily large momenta to give any sensible answers.

Is it possible to do useful work in quantum field theory without worrying about the “ultraviolet” (high-energy, short-distance) regime? Yes it is, says Ken Wilson: we can talk about an “effective field theory” that is only valid below some energy scale, in which case what goes on at high energies is simply irrelevant. And thus he made quantum field theory safe for the world.

Let me explain what this means, although I’ll do it in a somewhat non-Wilsonian way. (I’m going to talk about diagrams; Wilson would take a path integral over all the quantum fields and divide things up into high energies and low energies.) Remember that what we are using our quantum field theory for is to calculate the rate at which particles interact with each other in various ways. For example, we’ve been looking at one particle decaying into two. That’s a sum of an infinite number of Feynman diagrams, including loops with arbitrarily large momenta inside. But let’s forget about the details, and just think about the final answer. We can express that answer (whatever it turns out to be) as a “blob diagram,” which morally represents the sum of all the real Feynman diagrams. Likewise for other processes we might be interested in.
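To see the large-momentum divergence concretely, here is a one-dimensional caricature: the radial part of a typical one-loop momentum integral in four dimensions. The specific integrand is schematic (chosen for its large-k behavior, not taken from any particular diagram), and the mass and cutoff values are arbitrary.

```python
import math

def loop_integral(cutoff, m=1.0, steps=100_000):
    """Compute I(Λ) = ∫_0^Λ k^3 / (k^2 + m^2)^2 dk by the trapezoid rule:
    the radial part of a schematic one-loop momentum integral in 4D.
    Analytically I(Λ) = (1/2) (log(1 + Λ²/m²) + m²/(Λ² + m²) − 1),
    which grows like log Λ without bound as the cutoff is removed."""
    h = cutoff / steps
    f = lambda k: k ** 3 / (k ** 2 + m ** 2) ** 2
    return h * (0.5 * (f(0.0) + f(cutoff)) + sum(f(i * h) for i in range(1, steps)))

for lam in (10.0, 100.0, 1000.0):
    print(lam, round(loop_integral(lam), 3))  # grows ~ log(lam): no finite limit
```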
For example, our theory will allow two bosons to scatter off each other into two more bosons, which we can represent diagrammatically as well. So far this is just re-writing our ignorance in a convenient way. What Wilson says is that we don’t need to know what is actually going on at arbitrarily high energies, which would correspond to the high-momentum contributions from the diagrams on the right. Instead, there is a different theory — the effective theory — that simply encapsulates the blob diagrams on the left. If we knew the true theory, we could derive the effective theory by actually integrating over all those loops with large momenta moving through them. But the beauty is that we don’t need to know the true theory — we are welcome to work with the effective theory in its own right. Indeed, there can be many possible “true” theories — many “ultraviolet completions” — that would give you exactly the same low-energy effective theory!

The good news is, we can usefully do quantum field theory without knowing absolutely everything about nature. The bad news is, it can be very hard to figure out what nature is actually doing at very high energies, since it’s all bundled up in an effective field theory. That’s why it’s hard to test something like string theory at the LHC. As Polchinski says, “Nobody ever promised you a rose garden.”

The nice thing about all this is that effective field theories are really quite “effective.” That is, they are not arbitrarily complicated; it’s generally quite simple to figure out which processes are important and which ones are less so. To see this all we need (he says, chuckling maniacally) is a bit of dimensional analysis. We’ll use natural units, in which Planck’s constant ħ and the speed of light c are both set equal to unity. In natural units, everything can be expressed as different powers of a single kind of quantity.
We will choose “energy” as our measuring stick, in which case we have: Mass = Energy; Length = 1/Energy; Time = 1/Energy. (If you’ve never seen this before, and don’t mind a bit of arithmetic, it’s worth checking that these follow from setting ħ = c = 1.)

So what are the units of our field φ? Well, vibrations in the field carry energy. We talk about the “kinetic energy density,” which is just the amount of kinetic energy the field carries in any specified volume of space. If the “velocity” of the field is its time derivative dφ/dt, the kinetic energy density is

$\displaystyle{\frac{\rm kinetic\ energy}{\rm spatial\ volume} = \frac{1}{2}\left(\frac{d\phi}{dt}\right)^2.}$

If you’ve never seen this formula before, I’m just pulling it out of thin air; but notice that it bears a family resemblance to the kinetic energy of a particle, (1/2)mv^2, since the velocity is the time derivative of the position. The left-hand side here has units of energy/volume; volume has units of length^3; and length has units of 1/energy. So the left-hand side has units of energy^4, and therefore so does the right-hand side. The time derivative d/dt has units of 1/time, which is the same as energy; and it’s squared, so that’s energy^2. All that’s left (since 1/2 is a dimensionless number) is the field, which is also squared. To make left and right match, the field must have units of energy: [φ] = energy. Awesome.

Now, in quantum field theory, each of the blob diagrams above corresponds to a quantum “operator”; there’s an operator that connects three particles (e.g. by taking one particle into two), one that connects four particles (e.g. by taking one into three, or two into two, etc.), and so on. Each operator has a dimension, which we can figure out by dimensional analysis. (Technical note: in addition to the field itself, operators can also involve derivatives of the field. Derivatives have units of 1/length or 1/time, i.e. units of energy. We’re just ignoring this possibility.)
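The same bookkeeping can be done mechanically, tracking each quantity as its power of energy (a sketch; the encoding is mine, not anything standard):

```python
from fractions import Fraction

# In natural units (ħ = c = 1), every quantity is some power of energy.
# Represent each quantity by that power; multiplying quantities adds powers.
ENERGY = Fraction(1)
LENGTH = -ENERGY   # length has units of 1/energy
TIME = -ENERGY     # so does time

# Left side of the formula: kinetic energy / spatial volume
#   -> energy^1 / length^3 = energy^4
lhs = ENERGY - 3 * LENGTH

# Right side: (dφ/dt)^2 -> 2 * ([φ] - [time]); claim [φ] = energy^1
phi = Fraction(1)
rhs = 2 * (phi - TIME)

assert lhs == rhs == 4   # both sides are energy^4, confirming [φ] = energy
print(lhs, rhs)
```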
The three-particle operator (corresponding to the diagram with three lines coming into the blob) must have the dimensions of our original three-particle vertex (since that’s one of the terms that sums to get the blob). If you dig into the equations of field theory, that’s just the field itself raised to the third power. (Likewise a four-particle vertex would be the field to the fourth power, etc. — each appearance of the field in the expression from which the vertex derives corresponds to one line coming into or out of the vertex.) So the dimension of that three-point blob is equal to that of φ^3, which is of course just energy^3. Every operator (every blob diagram) has units of energy to some power, so all that matters is what that power is. If the power is three, we say we have a “dimension-three operator.” Our four-particle blob is a dimension-four operator, and so on. (Yes, I know, a lot of work just to say “the dimension of the blob is the number of lines coming in/out.” In other theories it would be more complicated. The electron field, for example, is a fermion rather than a boson, and it turns out to have dimensions of energy^3/2, so a diagram with four electron lines is dimension-6.)

Honestly there is a reason we’re going through all this. Each of those blob diagrams represents something that could happen at some point in spacetime. The chance that it happens anywhere is obtained by integrating all over spacetime. This quantity (really we’re talking about terms in the effective action) therefore has units of spacetime volume times the units of our operator. And spacetime volume has units of 1/energy^4 (sticking with good old four-dimensional spacetime). So if we have an operator of dimension N, the thing we really care about — the integral over all of spacetime of our operator, from which we can derive the quantum probability amplitude — has units of

[spacetime integral of operator of dimension N] = energy^(N-4).

Why do we care about that?
Because, once again following Wilson’s logic, the interactions in our effective theory change as we change our definition of “high energy” (the part we’re bundling up) vs. “low energy” (the part we’re explicitly describing in our theory). As we change this “cutoff,” we are including or excluding different processes, thereby altering the effective coupling constants. That change is known as the renormalization group. It spells immediate doom for many crackpotty attempts at unification that try to derive, for example, the fine-structure constant in terms of π and e and the author’s birthday. The coupling constants of an effective field theory are not really constant; they depend on the energy at which you measure them.

This can have dramatic consequences. In quantum chromodynamics, the theory of quarks and gluons, the coupling constant is small at high energies and everything is perturbative. But at low energies the coupling becomes strong, and the theory changes character completely — the new effective field theory is one of light bound states (pions), not a theory of quarks and gluons at all. And this innocent-looking formula, coming from a bit of dimensional analysis, tells us roughly how that change goes. The importance of an operator of dimension N (where N is just the number of particles involved in the blob diagram, in our simple scalar-field-theory example) grows at high energy if its spacetime integral goes as a positive power of energy; i.e., if N > 4. But we don’t care about high energies! We are trying to construct an effective theory at low energies, so we care about the terms for which N ≤ 4 — those are the ones that dominate at low energies. In fact, we have lingo to encapsulate this importance.
When talking about operators with units of energy^N in four spacetime dimensions, we refer to them as:

• N < 4: relevant
• N = 4: marginal
• N > 4: irrelevant

The labels relevant/marginal/irrelevant are telling us how important such operators are to a low-energy effective field theory. (Strictly speaking, even “irrelevant” operators can be important. In the Fermi theory of the weak interactions, the lowest-order operator you can construct that gives rise to any interaction at all is dimension 6. So you have to keep that interaction to have anything interesting happen — but we say that the resulting theory is “non-renormalizable.”) (And while we’re speaking strictly, this dimensional analysis gives the leading behavior, but not the whole story. In QCD, for example, the coupling is marginal, but it doesn’t remain exactly constant with energy; rather, it changes slowly [logarithmically]. If all of your couplings are exactly constant, you have a conformal field theory.)

For those few of you who have made it this far, please appreciate how wonderful this is! Above we were drawing Feynman diagrams representing processes in an effective field theory, and we argued that diagrams with N scalar particles coming in and out would have dimension N. And now we’ve seen that, at low energies, the only relevant (and marginally relevant) processes are those with N ≤ 4. But if N is the number of particles involved, it’s going to be a positive integer. And there aren’t that many positive integers less than or equal to four! In fact, there are only four of them. A one-particle operator would represent a particle disappearing into, or appearing out of, empty space. We don’t think that can happen (energy conservation), so that’s not very important. A two-particle operator just has one particle going in and one particle coming out — i.e., it’s just a particle propagating through space. Indeed, we call it the propagator.
And then there are the three-particle and four-particle processes we mentioned above. And that’s it! Those pieces give you the important low-energy description of any theory of a single scalar field, no matter what new particles and crazy nonsense might be going on at higher energies. Of course we don’t work at strictly zero energy, so the “irrelevant” parts might also be interesting and useful, but Wilsonian effective field theory gives you a systematic way of dealing with them and estimating their importance. If you have more than one kind of particle/field running around in your theory, there will naturally be more operators to deal with — but still a manageable number of relevant/marginal ones. The effective field theory philosophy tells you to write down all of the relevant and marginal operators consistent with the underlying symmetries of your theory. You can then measure their coefficients in some kind of experiment, and use that data to predict the answer for any other kind of experiment you might want to do. In fact, you’re not allowed to only write down some of the operators consistent with your symmetries; generically we would expect all such operators to be generated by the higher-energy processes we’re ignoring. Wilson’s viewpoint, although it took some time to sink in, led to a deep shift in the way people thought about quantum field theory. Pre-Wilson, it was all about finding theories that are renormalizable, which are very few in number. (The old-school idea that a theory is “renormalizable” maps onto the new-fangled idea that all the operators are either relevant or marginal — every single operator is dimension 4 or less.) Nowadays we know you can start with just about anything, and at low energies the effective theory will look renormalizable. Which is useful, if you want to calculate processes in low-energy physics; disappointing, if you’d like to use low-energy data to learn what is happening at higher energies. 
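The classification rule above is simple enough to state as a couple of lines of code (a restatement of the relevant/marginal/irrelevant table, with the electron-line example from earlier thrown in):

```python
def classify(dim, spacetime_dim=4):
    """Wilsonian label for an operator whose units are energy**dim."""
    if dim < spacetime_dim:
        return "relevant"
    if dim == spacetime_dim:
        return "marginal"
    return "irrelevant"

# In the scalar example, a blob with N legs is a dimension-N operator:
print({n: classify(n) for n in range(1, 6)})

# The four-electron (Fermi-theory) operator: 4 * 3/2 = 6 -> irrelevant,
# which is the modern reading of "non-renormalizable."
print(classify(4 * 1.5))
```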
Chances are, if you go to energies that are high enough, spacetime itself becomes ill-defined, and you don’t have a quantum field theory at all. But in labs here on Earth, we have no better way to describe how the world works.

39 Responses to How Quantum Field Theory Becomes “Effective”

1. This was awesome. Thank you!
2. You—and Matt Strassler—do a wonderful public service in your talks and writings. Thank you.
3. I’d like to compliment your article without disparaging someone else, except I won’t… where’s the fun in that? Thank you for doing in 10 minutes what an unnamed UofC physics professor could not do in 1 quarter.
4. Thank you!
5. Not for the first time, you set a standard for us other science bloggers to aspire to.
6. Good stuff Sean. Nice to see mention of the fine structure constant being a running constant. But you know how you said “momentum is a vector, so even if a finite amount comes into the loop” along with “the electron field… turns out to have dimensions energy^3/2”? That’s because the momentum is in a loop, something like this BEC spinor. Look out for blue toruses on TQFT websites. Methinks there’s going to be some even more effective quantum field theory coming soon.
7. “A one-particle operator would represent a particle disappearing into, or appearing out of, empty space. We don’t think that can happen (energy conservation), so that’s not very important.” I guess I’ll be reading up on Ken Wilson and effective field theory for the rest of my break.
8. Sean, this is so good!
9. Thank you for taking the time to write this, very interesting!
10. Glad people like it!
11. Thank you very much for this. I’ve wondered what renormalization is for a while; this makes it as clear as it will be until I get to this point in school.
And will probably help me a lot at that.
12. Sean – any chance you will ever write a QFT book, at roughly the level of “Spacetime + Geometry”?
13. Bob, you never know. But it’s not a high priority right now, and writing a textbook is one of those things that had better be a very high priority or it doesn’t get done. So I would say, not in the next ten years. Also, there are dozens of QFT textbooks already written, by people who know the subject much better than I do.
14. Knowing the subject much better than you does not necessarily qualify someone to write a good QFT book. Communication skills are much more important!
15. I think this summarizes the entire second semester of my graduate quantum field theory course.
16. Whoa! You made a blog post longer than a paragraph and without any videos! I am impressed.
17. “Also, there are dozens of QFT textbooks already written, by people who know the subject much better than I do.” Care to recommend any?
18. Peskin & Schroeder is by now the “standard” textbook, and quite a good one. Zee is also good, but for inspirational purposes more than calculational ones. Srednicki is also very good. (All the good QFT textbooks these days come from California.)
19. Thanks for this Sean, it’s one of the clearest “non-technical” explanations I’ve ever read of such a deep idea in Physics.
20. This is a very nice explanation of Wilson’s work. But my feeling (most likely naïve) is that if you want to calculate some process at low or high energy and compare with experiments, you still have to go through an old-fashioned field theory calculation (whether it is renormalizable or not). Also, does Wilson’s method give the same answers (exact or approximate) for renormalizable theories where you would include N > 4 diagrams? Anyone care to make comments?
21.
After reading this pristine explanation, I think some professors should be forbidden eternally from teaching QFT.
22. In your very fine article, you mentioned the decay of the neutron into a proton, electron, and antineutrino. From what I understand, the neutron is a composite particle. Also, putting neutrons in a particle accelerator and bouncing electrons off them results in scattering events. Analyzing these scattering events shows that there are three different scattering centers inside the neutron. Therefore, the neutron is not composed of the antineutrino, as its mass is too small for scattering. The idea, as I understand it, is that the neutron is composed of 1 up and 2 down quarks. The proton is composed of 2 up and 1 down quark. Is it also true that the proton is not seen as a composite particle and does not show three scattering centers? The up and down quarks all have the same mass (360 MeV from what I have read), and the electron and antineutrino contain no quarks. However, the neutron has a larger mass than the proton. The question then is: what is the nature of the energy difference between the proton and the neutron, if the quarks that compose each particle have the same mass and the neutron has a larger mass? You did use the neutron as your example. :)
23. Thomas: the up and down quarks do not have the same mass. When discussing the mass of the proton and neutron, it also matters which mass you are considering. The up quark has a “bare mass” of about 2 MeV, while the down quark is about 5 MeV. These numbers are approximate, because we never observe bare quarks: confinement means they are always bound into hadron states, including baryons like the proton and neutron. When quarks are in these bound states, we can talk about their “effective” or “dressed” masses, which take into account the binding energy from gluons. I think that is the source of the “360 MeV” number that you remembered.
The difference in masses between the proton and neutron is related to the difference in bare masses of the up and down quark. However, to my knowledge there is not yet a calculation which explains the difference completely. This is a problem in low-energy quantum chromodynamics, and as Sean mentioned in this blog post, QCD becomes non-perturbative at low energies, making calculations very difficult.
24. Kevin: thank you for your answer; it’s appreciated. I want to ask you, or anyone, if anyone knows of any experiment that did a direct measurement of the gravitational attraction between two (frozen in place?) lithium atoms? I think I remember reading of such an experiment, but I find no reference. Of course, I could be wrong. The thought is: can the gravitational attraction between two neutral hydrogen atoms be measured by any conceivable experiment? Why might I think that neutral hydrogen is gravitationally unique among the elements, and want an atom-to-atom experimental measurement (as opposed to a calculation from general laws)? Read between the lines in the following article. (…the musts and can nots?)
25. Never seen this subject explained so clearly.
26. Dr. Carroll, I read the article about the Unitarity Method, written by Zvi Bern, Lance Dixon, and David Kosower, in the May 2012 issue of Scientific American. Naturally I found it rather interesting in that it seems to hold a great deal of promise. A couple of years ago, while researching Information Physics, I came across the work of Kevin Knuth and have been a huge fan ever since. Mr. Knuth has greatly refined and developed the seminal work of Richard Cox, generalizing a logic to a calculus in a manner which consistently maintains the symmetries inherent in the logic, resulting in what could be called the “Cox-Knuth Method.” To quote Mr.
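For scale, here is a back-of-the-envelope version of Kevin's point, using rough textbook values (approximate numbers of the sort listed by the Particle Data Group, not taken from this thread):

```python
# Approximate values, MeV (illustrative only):
m_p = 938.272  # proton mass
m_n = 939.565  # neutron mass
m_u = 2.2      # up-quark "bare" mass
m_d = 4.7      # down-quark "bare" mass

# The neutron (udd) vs. proton (uud) difference swaps one up for one down:
quark_contribution = m_d - m_u   # ~2.5 MeV from bare quark masses
observed_difference = m_n - m_p  # ~1.3 MeV observed

# The bare-quark swap overshoots the observed splitting; electromagnetic
# self-energy partly compensates, and most of either baryon's ~938 MeV mass
# (versus ~9 MeV of bare quarks) is QCD binding energy, which is exactly
# the non-perturbative regime Kevin describes.
print(quark_contribution, observed_difference)
```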
Knuth: “Cox showed that it was possible to systematically generalize Boole’s logic by quantifying over the space of propositions in such a way as to remain faithful to the symmetries of the logic, thereby formalizing the process of inductive reasoning (that is, reasoning on the basis of incomplete information). […] Thus, Cox showed that probability theory can be understood as a calculus that systematically generalizes the Boolean logic of propositions, and that probability could be interpreted as an agent’s degree of belief in a proposition on some given evidence. Very importantly, this view of probability recognizes that, from the outset, all probability statements are conditional in nature—one always speaks of the probability of a proposition given some other proposition—which greatly encourages explicit statement of the assumptions that, in application of Kolmogorov’s formulation, are oftentimes left implicit.” The paper which the above was taken from, Quantum Theory and Probability Theory: Their Relationship and Origin in Symmetry (http://www.mdpi.com/2073-8994/3/2/171), was quite recently brought to my attention. In this paper and in an additional paper referenced, Mr. Knuth and his colleagues use the “Cox-Knuth Method” to derive Feynman’s rules of quantum theory and would seem to have gained a good bit of insight in the process; while reading the paper I couldn’t help but wonder if a similar approach couldn’t be employed to derive the Unitarity Method quite possibly yielding a good bit of insight as well. Of course, being a simple lay-person, I could be completely off my rocker . . . 
Regarding the “God” question: you know, the perennial philosophers, which is to say, the contemplatives throughout history who bothered with transcribing a record of their experience and the knowledge they gained, and who, it would seem, are/were much more qualified to speak of “God” than theologians, philosophers, or scientists, rarely, if ever, speak/spoke of “God” except in the context of an Infinite Living Mind. With this in mind, I find this paper, A Consistent Set of Infinite-Order Probabilities (http://philsci-archive.pitt.edu/9707/), from the realm of approximate reasoning, rather interesting, especially when taken in conjunction with the above paper . . . It would seem there is no classical mind . . . I mean, world . . . With regards, Wes Hansen

27. http://vixra.org/abs/1306.0024

28. Um…wow! Thanks!

29. If you take the math out of it, then you can easily see that division and multiplication are complete bullshit.

30. Just take my word for it and don’t look any deeper than what I’m telling you.

31. Also, the 2007 Honda Fit is the most luxurious car in the world, often times selling for $300,000+. But I’ll sell you mine for the mere price of $50,000.

32. Dear Prof Carroll, can you please elaborate on the difference between the Wilson cutoff you described and the old cutoff used in the 1930s? Joe Edwards

33. Joe– I’m not an expert on what was done in the 1930s. The Wilsonian cutoff is a limit in momentum space, and is considered a real part of how we define the effective field theory. It basically says “here’s where we expect new physics to kick in.”

34. Pingback: Rapidinhas | Not A Science Blog

35.
I still fail to see how quantum field theory can be seen to be “effective” if it could not predict the existence of a square wave in a circuit that does not have an inductor or propagator of a magnetic field that travels close to the speed of light. The electron seems to be too slow to have a quick change in voltage that is not generated from a changing magnetic field, which would have to be created from an inductor in a circuit. I have heard you mention in videos that the speed of an electron cannot be faster because electrons would fly off of the atoms. Do you think you could better explain some of the reasoning behind this?

36. Dear Sean, thanks again for the nice post. However, your discussion of the effective theory using Feynman diagrams and loops can be misread as a 1PI generating functional, although you specify that the integration is only down to some energy scale and not to zero. My point is that even though one works with an effective theory, one still has to deal with loops. All quantum corrections still proceed, but now the integrals go only up to some cut-off scale, where the new physics appears, and only with the low-energy degrees of freedom running in the loops. I hope this helps make the point clearer; if I’m wrong, I thank anyone who corrects me.

37. Amazing read! Myself, I have been very interested in Quantum Field Theory (QFT). I think it is the answer to the universe. QFT cleared up many things. In my opinion, Quantum Mechanics makes things more complicated, and sometimes the best answer is simple. This was my first introduction to QFT (quantum-field-theory.net/fields-of-color).
The quote from this book got me thinking a lot lately: “everything is fields, even the bodies; reality consists only of fields and interactions between fields.” This explained how it is possible for forces such as gravity to happen through “action at a distance.” Even simple laws like Newton’s can be explained by QFT! Asher Kirschbaum

This entry was posted in Science.
Math Forum Discussions - User Profile for: redmon_@_iu.edu
Math Forum | Ask Dr. Math | Internet Newsletter | Teacher Exchange | Search All of the Math Forum
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

User Profile for: redmon_@_iu.edu
UserID: 710273
Name: Don Redmond
Registered: 5/5/11
Total Posts: 48

Recent Messages:
1. Re: New numerical methods. The true story. Wrong assertion from math teachers and authors. (sci.math.independent, Jan 8, 2014 2:23 PM)
2. Re: To Be Published the Disproof of the Current Pi and a Math Question for the smart ones Prfessors all (sci.math.independent, Nov 4, 2013 6:01 PM)
3. Re: floor sums (sci.math.independent, Oct 15, 2013 1:21 PM)
4. Re: floor sums (sci.math.independent, Oct 14, 2013 1:18 PM)
5. Re: Looking For Published FLT3 Proofs that do NOT Use Infinite Descent (sci.math.independent, Sep 30, 2013 5:56 PM)
6. Re: Long division of 1/(1+u) (sci.math.independent, Sep 23, 2013 8:45 AM)
7. Re: The first new theorem Primes (sci.math.independent, Sep 18, 2013 7:31 PM)
8. Re: A good paper/book that outlines how British (calculus) mathematics fell behind mainland Europe because of notational difference? (sci.math.independent, Aug 8, 2013 9:08 AM)
9. Re: Michael's conjecture (sci.math.independent, Aug 7, 2013 8:52 AM)
10. Re: Michael's conjecture (sci.math.independent, Aug 6, 2013 1:54 PM)
Show all user messages
Field MTBF Calculator

This tool computes the lower one-sided MTBF at a given confidence limit based on the number of unit-hours accumulated and the total number of field failures. The calculation assumes that units have a constant failure rate and fail in accordance with the exponential distribution. For purposes of this calculation it is assumed that the "field test" is time truncated, thus making use of equation 2 shown for this tool. One common use of the tool is to estimate the current MTBF for a population of field units when no failures have occurred, which is typically calculated at a 60% confidence level for r=0.

1. MIL-HDBK-338, Electronic Reliability Design Handbook.
2. United States Air Force Rome Laboratory Reliability Engineer's Toolkit (1993).

Copyright © 2010 - 2014 Reliability Analytics Corporation. All content and materials on this site are provided "as is". Reliability Analytics makes no warranty, express or implied, including the warranties of merchantability and fitness for a particular purpose; nor assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed; nor represents that its use would not infringe privately owned rights.
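For the common zero-failure case mentioned above, the usual chi-squared bound for a time-truncated test, MTBF_lower = 2T / χ²(CL; 2r+2), reduces to a closed form needing only a logarithm. The sketch below assumes that standard relationship; the function name is mine, not the tool's:

```python
import math

def mtbf_lower_zero_failures(unit_hours, confidence=0.60):
    """Lower one-sided MTBF bound for a time-truncated test with r = 0 failures.

    For zero failures, 2T / chi2(CL; 2) simplifies to T / -ln(1 - CL),
    so no chi-squared tables are needed. (For r > 0 failures, the
    chi-squared quantile with 2r + 2 degrees of freedom is required.)
    """
    return unit_hours / (-math.log(1.0 - confidence))

# Example: 10,000 unit-hours with no failures, at 60% confidence
mtbf = mtbf_lower_zero_failures(10_000, 0.60)   # about 10,913 hours
```

At 60% confidence the divisor is -ln(0.4) ≈ 0.916, so the bound is only slightly above the raw unit-hours; tightening the confidence level lowers the bound.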
The Colony SAT Math Tutor ...I am currently a senior mathematics major at UT Dallas with a cumulative GPA of 3.78. I completed advanced math courses in high school, culminating with AP Calculus and AP Statistics. I achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam. 7 Subjects: including SAT math, algebra 1, prealgebra, algebra 2 ...I am currently teaching geometry. I love geometry because it requires logical reasoning. It is a very visual subject and I am a very visual learner and teacher. 10 Subjects: including SAT math, geometry, ASVAB, algebra 1 ...I received my degree from Texas Woman's University and am currently pursuing my Masters in Biophysical Chemistry. After my masters, I plan to pursue my Ph.D. in Biochemistry or in Naturopathic Medicine. I graduated with a 3.76 GPA in all of my Chemistry and Math courses. 17 Subjects: including SAT math, chemistry, geometry, biology ...I now excel in subjects with which I once struggled tremendously because I was fortunate enough to have had a few teachers and tutors who embodied this same philosophy. I'm very patient in my approach with students, and try not to always force the issue. I realize not every student really wants to learn. 41 Subjects: including SAT math, chemistry, French, calculus ...I usually allow at least 6 weeks of preparation so there is less stress. Of course, there are always special exceptions. Please let me know if I can help you prepare for this entrance exam. 29 Subjects: including SAT math, reading, Spanish, GRE
Tunable Imaging Filters - J. Bland-Hawthorn

3.2. Gap-scanning Filters

Fabry-Perot filter: The air-gap etalon, or Fabry-Perot filter, was introduced in the previous section. The etalon comprises two plates of glass kept parallel over a small separation, where the inner surfaces are mirrors coated for high reflectivity; µl is the optical gap, with µ the refractive index of the medium in the gap and l the plate spacing. The condition for peaks in transmission is given in eqn. 1. Note that the bandpass can be scanned by varying either µ (pressure scanning) or l (gap scanning). Both tilt and pressure scanning suffer from serious drawbacks which limit their dynamic range. With the advent of servo-controlled capacitance micrometry, the performance of gap-scanning etalons surpasses other techniques. These employ piezo-electric transducers that undergo dimensional changes in an applied electric field, or develop an electric field when strained mechanically. Queensgate Instruments, Ltd. have shown that it is possible to maintain plate parallelism to very high accuracy. Fabry-Perot filters have been made with 15 cm apertures and physical scan ranges up to 3 cm. The etalon is ultimately limited by the finite coating thickness between the mirrors, so it really only achieves the lowest interference orders (m < 5) at infrared wavelengths.

Solid etalon filter: These are single-cavity Fabry-Perot devices with a transparent piezo-electric spacer, e.g., lithium niobate. The thickness and, to a lesser extent, the refractive index can be modified by a voltage applied to both faces. For low-voltage systems, tilt and temperature can be used to fine-tune the bandpass. High-quality spacers with thicknesses less than a few hundred microns are difficult to manufacture, so that etalon filters are normally operated at high orders of interference. The largest devices we have seen are 5 cm in clear aperture.

Michelson filter: In the Fourier Transform or Michelson filter, the collimated beam is split into two paths at the front surface of the beam-splitter.
The separate beams then undergo different path lengths by reflections off separate mirrors before being imaged by the camera lens at the detector. The device shown in Fig. 2 uses only 50% of the available light. As Maillard has demonstrated at the Canada France Hawaii Telescope, it is possible to recover this light, but the layout is more involved.

Figure 2. Schematic of a two-beam Michelson (Fourier Transform) interferometer.

The output signal is a function of path difference between the mirrors. At zero path difference (or arm displacement), the waves for all frequencies interact coherently. As the movable mirror is scanned, each input wavelength generates a series of transmission maxima. Commercially available devices usually allow the mirror to be scanned continuously at constant speed, or to be stepped at equal increments. At a sufficiently large arm displacement, the beams lose their mutual coherence. The filter is scanned from zero path length (x = y = 0) to a maximum path length y = L set by twice the maximum mirror spacing (x = L / 2).

The superposition of two coherent beams with amplitudes b_1 and b_2 is, in complex notation, b_1 + b_2 e^{i 2πσy}, where y is the total path difference and σ the wavenumber; the resulting intensity is 4b^2 cos^2(πσy), where b = b_1 = b_2. The combined beams generate a series of intensity fringes at the detector. If it were possible to scan over an infinite mirror spacing at infinitesimally small steps of the mirror, the superposition would be represented by an ideal Fourier Transform pair, such as (up to an overall normalization)

    b(y) - (1/2) b(0) = ∫_{-∞}^{∞} B(σ) cos(2πσy) dσ,
    B(σ) = ∫_{-∞}^{∞} [b(y) - (1/2) b(0)] cos(2πσy) dy,

where b(y) is the output signal as a function of pathlength y and B(σ) is the spectrum as a function of wavenumber σ. Strictly, b(y) and B(σ) are undefined for y < 0 and σ < 0: we include the negative limits for convenience. The quantity b(y) - (1/2) b(0) is usually referred to as the interferogram, although this term is sometimes used for b(y) itself. The spectrum B(σ) is recovered from the measured interferogram by the inverse cosine transform. The Michelson does not suffer the coating-thickness problems of the Fabry-Perot filter, and therefore reaches the lowest orders even at optical wavelengths.
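As a rough numerical illustration of the cosine-transform pair above, the sketch below builds the interferogram of a single monochromatic line and recovers its wavenumber with a discrete cosine transform. The sampling grid, units, and line position are all invented for the example:

```python
import math

def interferogram(sigma0, ys):
    # Two-beam fringes for a monochromatic line at wavenumber sigma0:
    # b(y) proportional to cos^2(pi*sigma0*y) = (1 + cos(2*pi*sigma0*y)) / 2.
    return [0.5 * (1.0 + math.cos(2.0 * math.pi * sigma0 * y)) for y in ys]

def recover_spectrum(b, ys, sigmas):
    # Discrete cosine transform of b(y) - b(0)/2, approximating B(sigma)
    # up to an overall normalization.
    dy = ys[1] - ys[0]
    base = 0.5 * b[0]
    return [sum((bi - base) * math.cos(2.0 * math.pi * s * y)
                for bi, y in zip(b, ys)) * dy
            for s in sigmas]

ys = [0.01 * i for i in range(401)]            # scan y over [0, 4]
b = interferogram(5.0, ys)                     # line at sigma0 = 5
sigmas = [0.5 * s for s in range(21)]          # trial wavenumbers 0 .. 10
spectrum = recover_spectrum(b, ys, sigmas)
peak = sigmas[max(range(len(sigmas)), key=spectrum.__getitem__)]
# 'peak' recovers the input wavenumber, 5.0
```

The transform is largest at the true wavenumber and near zero elsewhere; a finite scan range L simply broadens each recovered line by roughly 1/L in wavenumber.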
Why aren't electrons considered "black holes"?

That site looks fishy to me. We have physics that works very well in the energy region accessible to us, even at the most powerful colliders we have been able to build. However, it is known that there are problems with extending this physics too far into high energies, or equivalently, to distances that are too close. (It takes a very high energy to get electrons that close to each other.) This is an area that I'm not as familiar with as I would like, but the Wikipedia article on the topic points out that there are problems even in QED, at high enough energies. So what we have currently is known as an "effective field theory". This is perhaps more obvious for gravity; see, for instance, the treatment of quantum gravity as such an effective field theory. But gravity is (apparently, anyway) not alone in being in the "effective field theory" boat.
How long to build a base?

Graymane -> How long to build a base? (8/15/2011 2:08:37 PM)

Airfield and Port Construction Times

Factors involved in construction times:
1. 1 Eng Vehicle = 5 Engineers, and 1 build point = 1 engineer.
2. Supply Consumption Rate. Engineers in combat mode consume roughly the same amount of supply whether they are constructing bases or not.
3. Supply Consumption. Engineers in combat mode do not all consume supplies at the same rates. I ran tests with 30 engineers, 60 engineers and 120 engineers, both constructing and not constructing. The rates within each size were the same (60 constructing and 60 in combat mode not constructing consumed roughly the same amount of supplies). Consumption rates per turn for 30, 60 and 120 engineers were all different: the larger the number of engineers, the fewer supplies consumed per capita. Notably, 2 supplies per engineer does not seem to hold.
4. Engineer usage. An odd number of engineers is treated as if it has 1 more engineer (5 engineers act like 6, for example).
5. Each size of a port or airfield costs the same amount regardless of the SPS, as long as the SPS is greater than 0 and the port or airfield is smaller than or equal to the SPS size. In other words, a level 1 airfield on a 0(1), 0(2), through 0(9) costs the same amount of engineer points. I call this value the standard cost.
6. A port or airfield 1 size larger than the SPS costs twice the standard cost of the size you are building to.
7. A port or airfield 2 or 3 sizes larger than the SPS costs four times the standard cost of the size you are building to.
8. A 0(0) dot base costs 20 times the standard cost for a size 1 base.
9. A 1(0) or 2(0) costs 40 times the standard cost for building a size 2 or 3 base.
10. The total SPS of the base (port SPS + airfield SPS) determines the total number of engineers allowed to work on the base, up to a maximum SPS of 9.
11. Engineers work 12 hours per day.
First-turn and second-turn construction times are the same, so engineers seem to work during the day turn and not at night.
12. The type of island (6,000, 60,000 or unlimited) doesn't matter for construction speeds.

Rules of Thumb:
1. Trying to build up 0-total-SPS bases (0(0), 0(0)) is generally a bad idea. You can only use a few engineers, and that number goes down as the size goes up.
2. However, building up a 0(0) base with an SPS greater than 0 (say a 0(3), 0(0)) is much easier. I don't have the values in the table, but you can use more engineers, which has the effect of a faster build time.

Assumptions that were not tested:
1. All engineers are equal with respect to construction time and supply cost.
2. Landlocked airbases behave the same as island ports/airfields with respect to SPS calculations.
3. Anything to do with forts.

Notes:
1. I ignored disabled support/engineers during testing, as it didn't seem to materially affect the results.
2. The actual standard costs are numbers like 394 and 296. In all cases, I've rounded up to the nearest hundred. This has the effect of overestimating engineers and costs; the actual values are somewhat smaller, but not by much.

(Testing done with 1108p7)
(Testing done with a modified scenario 11 (Marianas 1943))
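The standard-cost rules (items 5-9 in the factor list) can be summarized as a multiplier on the standard cost. The function below is my own condensation of the post's rules, with the zero-SPS dot-base cases handled per items 8 and 9; the name is invented:

```python
def build_cost_multiplier(target_size, sps):
    """Multiplier on the 'standard cost' when building a port/airfield
    to target_size on a hex with the given SPS (rules 5-9 above)."""
    if sps == 0:
        # Dot-base special cases: 20x for a size-1 base, 40x for size 2-3.
        return 20 if target_size == 1 else 40
    if target_size <= sps:
        return 1            # at or below the SPS: standard cost
    if target_size == sps + 1:
        return 2            # one size above the SPS: double
    return 4                # two or three sizes above the SPS: quadruple

# Example: building a size-4 airfield on an SPS-3 hex costs 2x the standard cost.
```

The multiplier times the (rounded) standard cost for the target size gives the engineer points required, which rule 1 converts to engineer-turns.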
Fundamental Algorithms
Alan Siegel

Lecture: M 5:10-7:00, Room 102
Recitation: Th 5:10-6:00, Room 109 (please note that the recitation is in a different room)
Office Hours: Th 6-7, Th 2-3, and by appointment
Phone: 998-3122

This course covers the design and analysis of combinatorial algorithms. The curriculum is concept-based and emphasizes the art of problem-solving. The class features weekly exercises designed to strengthen conceptual understanding and problem-solving skills. Students are presumed to have adequate programming skills and to have a solid understanding of basic data structures and their implementation in the programming languages of their choice. Although some mathematical sophistication is very helpful for this course, the necessary mathematics is contained within the curriculum. In some recitation sessions, sophisticated problems will be solved by the class in closely supervised individual, collaborative, and group efforts. Other recitation sessions will be used for additional lecture.

Required Text: An Introduction to Algorithms: their Methods and ssendaM, by A.R. Siegel. A new edition of the text will be used for this course, and will be available before the first day of class. The details for acquiring the text will be posted here as soon as it becomes available.
Course Topics

Algorithmic Design Paradigms
• The Greedy Method
• Dynamic Programming
• Sorting- and Selection-based processing
• Algorithm Redesign and Adaptation
• Problem Transformations

The Analysis of Algorithmic Performance
• Asymptotic Growth
• Recurrence Equations
• The Recursion Tree Solution Method
• Probabilistic Analysis
• Structural Analysis
• Lower Bounds

Managing Data for Efficient Processing
• Lists, Stacks, Queues, Priority Queues, Trees and Graphs
• Tarjan's Categorization of Data Structures
• Search Trees and their Enhancement
• Sorting, Selection, and Hashing

Selected Representative Algorithms/Problems
• Topological Sort
• Connected Components
• Biconnected Components and Strong Components
• Representative styles of Dynamic Programming and their applications
• Standard Sorting and Selection Algorithms
• Selected topics in Hashing
• Minimum Spanning Trees
• Shortest Path Problems

This list will change as the lectures and organization are adapted to fit the current time constraints.

• There will be approximately 11 written homework assignments that contain 10 to 15 multi-part exercises. Perhaps one-third to one-half of these problems will be extremely challenging. That is, the necessary concepts will have already been taught, but significant thought will be needed to figure out how to apply these techniques to solve the more challenging exercises.
• Students are not required to solve even the majority of the difficult problems, but they are expected to write down what ideas/methods they used, and where their solution method seems to have broken down.
• Students are also expected to compare their own answers with the solution handouts to identify the concepts and techniques that were overlooked. In particular, students are required to keep a copy of every homework assignment, and submit a self-grading of that copy on the first class day that follows the due date for the assignment.
• Because more than a third of the course is embedded in the exercises, students are expected to study the answers as a vehicle for mastering the material.
• Homework will receive two grades: overall performance, and quality of the self-grading effort. Incorrect and even fragmentary incorrect answers can receive full credit for the QoE self-grading.

Course Grading Policy
• 20% Midterm Exam
• 10% Overall homework performance grade
• 20% Overall QoE homework grade
• 44% Final Exam
• 6% Classroom participation

The homework and exam grading policy allocates significantly more credit for incorrect and partial solutions that include an explanation of what is wrong than for comparable answers that lack any such comments.
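The weighted-average computation implied by the grading percentages above can be sketched as follows (the function name and the 0-100 score scale are assumptions, not part of the syllabus):

```python
def course_grade(midterm, hw_performance, hw_qoe, final_exam, participation):
    """Weighted final grade per the posted policy (all inputs on a 0-100 scale)."""
    return (0.20 * midterm
            + 0.10 * hw_performance
            + 0.20 * hw_qoe
            + 0.44 * final_exam
            + 0.06 * participation)

# The weights sum to 1.0, so perfect scores in every component yield 100.
```

Note that the two homework grades together (30%) outweigh the midterm, which matches the syllabus's emphasis on the exercises and the self-grading effort.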
jvegaroj's Likes In Interest Mathematics Cheat Sheets | StudentHacks.org Here's a list of some helpful cheat sheets from around the web . . . Algebra Notes 23 pages filled with every formula and rule you need to know for Math Ability Starts in Infancy, Study Suggests Babies who have a strong primitive sense of numbers at 6 months of age grow into children with strong math skills at age 3, new research finds. Scientists hope the study will lead to better math infoverse - octomatics what would be the binary code? it should be: 000 . 001 . 010 . 011 . 100 . 101 . 110 . 111 MIT OpenCourseWare | Mathematics The lecture notes were taken by a student in the class. For all of the lecture notes, including a table of contents, download the following file (). Touch Mathematics | Derivatives A collection of free, web browser-based apps that aim to make mathematics intuitive for learners of all ages. touchmat...tics.org 9,087 humble software development - trig demo A trigonometry visualization written in HTML5 / Canvas relating the circle to the sine function. The Batman Equation Reportedly, the equation above plots as the figure below, which is...familiar from somewhere. Can't quite put my finger on it. Turmite -- from Wolfram MathWorld Turmites, also called turning machines, are 2-dimensional in which the "tape" consists of a of spaces that can be written and erased by an active ("head") element that turns at each iteration on the basis of the state of its current grid square. ...
Count on Math 2: Making Your First Million
Grades 4, 5, 6

Professional Commentary
In this second of two lessons on developing number sense, students begin to apprehend the size of one million by figuring out how long a million days is, how long it would take to count to one million on a calculator, how long it would take to write the numbers from 1 to 1,000,000, and other such questions. An activity sheet and lesson extensions are included. This lesson is adapted from an article published in the February 1996 issue of Teaching Children Mathematics. (sw)

Common Core State Standards for Mathematics
Grade 4, Number and Operations in Base Ten: Generalize place value understanding for multi-digit whole numbers.
• Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
Grade 5, Number and Operations in Base Ten: Understand the place value system.
• Recognize that in a multi-digit number, a digit in one place represents 10 times as much as it represents in the place to its right and 1/10 of what it represents in the place to its left.
• Perform operations with multi-digit whole numbers and with decimals to hundredths: fluently multiply multi-digit whole numbers using the standard algorithm.
Grade 6, The Number System: Compute fluently with multi-digit numbers and find common factors and multiples.
• Fluently divide multi-digit numbers using the standard algorithm.
• Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation.

Ohio Mathematics Academic Content Standards (2001)
Number, Number Sense and Operations Standard
Benchmarks (3–4): Use a variety of methods and appropriate tools (mental math, paper and pencil, calculators) for computing with whole numbers.
Benchmarks (5–7): Use a variety of strategies, including proportional reasoning, to estimate, compute, solve and explain solutions to problems involving integers, fractions, decimals and percents.
Grade Level Indicators (Grade 4): Use a variety of methods and appropriate tools for computing with whole numbers; e.g., mental math, paper and pencil, and calculator.
Grade Level Indicators (Grade 6): Use proportional reasoning, ratios and percents to represent problem situations and determine the reasonableness of solutions.

Principles and Standards for School Mathematics
Number and Operations Standard: Compute fluently and make reasonable estimates.
Expectations (3–5): Select appropriate methods and tools for computing with whole numbers from among mental computation, estimation, calculators, and paper and pencil according to the context and nature of the computation, and use the selected method or tools.
Expectations (6–8): Develop, analyze, and explain methods for solving problems involving proportions, such as scaling and finding equivalent ratios.
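A quick back-of-the-envelope check of the kind of questions the lesson opens with (the one-number-per-second counting rate here is an illustrative assumption, not from the lesson):

```python
# "How big is a million?" rough conversions.
SECONDS_PER_DAY = 24 * 60 * 60                 # 86,400 seconds per day

# How long is a million days, in years?
years_in_a_million_days = 1_000_000 / 365.25   # roughly 2,738 years

# Counting to one million at one number per second, without stopping:
days_to_count_to_a_million = 1_000_000 / SECONDS_PER_DAY   # roughly 11.6 days
```

Writing the numbers out takes far longer than counting them, since the average number has six digits; estimates like these are exactly the reasoning the lesson asks students to do.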
Algebra Tutors in Gurnee, IL 60031
A friendly instructor with PhD and MBA and vast experience
...I am qualified to teach several topics in discrete mathematics including set theory, graph theory, probability, number theory, discrete calculus, geometry, game theory and discretization. I have taken several mathematics courses covering these topics during...
Offering 10+ subjects including algebra 1 and algebra 2
South River Prealgebra Tutors ...The students experience the benefits of understanding the math which will reflect in their positive results. Since I have experience of working in public schools I am fully enriched with the content knowledge, skills, applications in projects. Also I am equipped with the Algebra resources like ... 10 Subjects: including prealgebra, calculus, geometry, algebra 1 I am an experienced tutor working with elementary through high school students. I have over 6 years of experience tutoring and 4 years of experience working with middle-school students from minority backgrounds who are struggling with reading, writing and math. I love working with students from all grade levels and helping them get motivated to set learning goals and to succeed in 25 Subjects: including prealgebra, English, reading, statistics ...I am certified elementary, early childhood, and special education teacher. I worked at a summer camp teaching science and helping in the bunk life area for students with disabilities. A large population of these students had Autism and Aspergers. 25 Subjects: including prealgebra, reading, ESL/ESOL, algebra 1 My experience in tutoring spans a wide variety of subjects and disciplines. I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between. 22 Subjects: including prealgebra, reading, English, chemistry My name is Enrique and I have an Engineering degree from NJIT (Bachelor of Science). I am devoted to teaching, I can quickly assess a students need in a mathematical topic, and can tailor my lessons accordingly. I tutor one on one and have the ability to determine what the students strengths and we... 3 Subjects: including prealgebra, calculus, precalculus
Journal of the Brazilian Computer Society Print version ISSN 0104-6500 J. Braz. Comp. Soc. vol.5 n.3 Campinas Feb. 1999 Simulation of controlled queuing systems and its application to optimal resource management in multiservice cellular networks Jinsung Choi LG Information and Communication, KOREA J. A. Silvester Dept. of Electrical Engineering University of Southern California University Park Los Angeles CA 90089 We consider a controlled queuing model and derive the corresponding Markov decision process for a simple G/M/1 call admission controller. As an application of the controlled queuing model, we look into an optimal resource management problem arising in the context of a multiservice cellular network with reuse partitioning. In particular, we consider a channel borrowing scheme between zones in two-zone reuse partitioning in a two-class cellular network and investigate the performance of various channel borrowing policies via simulation. We also introduce the simulator developed for controlled queuing systems. Most importantly, we demonstrate that Markov decision processes are well suited to this kind of optimization problem. Keywords: Controlled queuing, Markov decision process, channel borrowing, stochastic optimization. 1 Introduction A controlled queuing model is a useful model for systems requiring control of arrivals, service mechanism or service discipline. These types of systems arise in many contexts, including manufacturing, distributed computer systems, voice and data networks, and vehicular traffic flow. In this paper, we discuss a system requiring control of arrivals, model it as a Markov Decision Process (MDP) and introduce a simulator developed for this model. We apply the model to manage radio spectrum resources efficiently in multiservice cellular networks with reuse partitioning.
In this application, arrivals correspond to new call requests originating in cells, and the objective of control of arrivals is to maximize revenue in terms of overall radio spectrum utilization while guaranteeing a call-level Quality of Service (QOS) (that is, call blocking probability). Through this application, we show that the controlled queuing model combined with the Markov decision process model is well suited to this kind of resource management application. The study of controlled queuing systems began in the 1970s, and a comprehensive discussion can be found in the literature. Initial attention was directed toward investigations of the operational characteristics of systems under control. These investigations were stimulated by ideas and techniques from the theory of Markov decision processes. It turns out that Markov decision processes are a good model for controlled queuing systems, and some results on optimal admission control problems using Markov decision processes can be found in the literature [1] [11] [13]. Markov decision processes provide the theoretical foundations for sequential decision making tasks [1] [11]. A standard approach in the control of queuing systems consists of formulating the problem as a Markov decision process and deriving a functional equation of stochastic dynamic programming, from which the optimal policy is determined. This paper is organized as follows. We start by introducing a model for queuing systems with control of arrivals in Section II. Then, in Section III and Section IV, the application to be considered is presented together with its corresponding Markov decision process model. Section V is devoted to the developed simulator; numerical results obtained from the simulation are also presented there. 2 A Queuing Model with Control of Arrivals In queuing systems, a typical control is based on admitting or rejecting arriving customers (Figure 1).
Decisions may be based on various information available to the controller and motivated by different incentives. For example, customers may be divided into several classes, according to the reward they yield to the system. These rewards may be very important information to the controller. Also, information about the current state of the system may be available to the controller. Figure 1 - A Queuing System with Control of Arrivals Let us consider the problem of accepting and rejecting customers at a single-server queue in a G/M/1 setting, where interarrival times are i.i.d. random variables with distribution function B(·) and service times are i.i.d. random variables having the exponential distribution with mean 1/μ. A controller regulates the system load by accepting or rejecting arriving customers. If rejected, the arrival leaves the system. This might serve as a simple model for a telephone call admission controller. Let the state space for the natural process denote the number of customers in the system (in service plus in the queue) at any time point. Assume that the system capacity (the size of the Eligible Queue in Figure 1) is infinite. Each arriving customer contributes R units of revenue and the system incurs a holding cost at rate f(j) per unit time whenever there are j customers in the system. Decision epochs are the times immediately following an arrival, and decisions are required only when customers enter the system. If at some decision epoch the system occupies state s ∈ S, where S = {0, 1, ...}, the controller must choose an action a from the action set A[s] = {0, 1} for all s ∈ S, where action 0 denotes rejecting an arrival, while action 1 corresponds to accepting an arrival. As a consequence of choosing action a, the next decision epoch occurs at or before some time t, and the system state changes to another state according to the transition probability. This controlled queuing system can be formulated as a semi-Markov decision process.
A Markov decision process is a controlled stochastic process satisfying the Markov property with costs assigned to state transitions. A Markov decision problem (MDP) is a Markov decision process together with a performance criterion as an objective function. In general, a solution to a Markov decision problem is a policy, mapping states to actions, that determines state transitions to minimize the cost according to the performance criterion [1]. A semi-Markov decision process is the embedded Markov decision process that agrees with the natural process only at decision epochs. We prefer to use the semi-Markov decision process model in a queuing admission control model because what transpires between decision epochs provides no relevant information to the controller. The formulation of a semi-Markov decision process consists of determining five elements: decision epochs, states, actions, rewards and transition probabilities [1]. The definitions of decision epochs, states and actions are already made above, and the definitions of the rest are specified as follows. Let F(t|s,a) denote the probability that the next decision epoch occurs within t time units of the current decision epoch, given that the controller chooses action a from A[s] in state s at the current decision epoch. Also, let the quantity p(j|t,s,a) denote the probability that the natural process occupies state j, t time units after a decision epoch, given that action a was chosen in state s at the current decision epoch and that the next decision epoch has not occurred prior to time t. We use the transition probability p(j|t,s,a) to compute cumulative rewards between decision epochs. Since decisions are made only at arrival times, F(t|s,a) = B(t), where B(·) is the probability distribution function of the arrival process; this is independent of s and a. In between arrivals, the natural state may change because of service completions.
From elementary probability theory, the number of service completions in t units of time follows a Poisson distribution with parameter μt. Consequently, the probabilistic evolution of the state of the natural process can be described by p(j|t,s,a) = e^(-μt) (μt)^(s+a-j) / (s+a-j)! for s + a ≥ j > 0. Letting p(0|t,s,a) collect the remaining probability mass, it follows that p(0|t,s,a) = 1 - Σ_{j=1}^{s+a} e^(-μt) (μt)^(s+a-j) / (s+a-j)!. When the controller chooses action a in state s, he receives a lump sum reward, k(s,a), given by k(s,a) = R·a, that is, R if the arrival is accepted and 0 otherwise. Further, the system accrues a reward at rate c(k,s,a) = -f(k) as long as the natural process occupies state k and action a was chosen in state s at the preceding decision epoch, where k denotes the state of the natural process. Let Q(t,j|s,a) denote the probability that the next decision epoch occurs at or before time t and the system state at that decision epoch equals j, given that the previous state was s and action a was taken at the previous epoch. Additionally, let P(j|s,a) denote the transition probability that the next state is j, given that the previous state was s and action a was taken at the previous epoch. Then, for this model, Q(t,j|s,a) = ∫_0^t p(j|u,s,a) dB(u) and P(j|s,a) = ∫_0^∞ p(j|u,s,a) dB(u). 3 A Resource Management Problem: Optimal Channel Borrowing in Reuse Partitioning in Multiservice Cellular Networks In the previous section, we discussed a controlled queuing system in the context of control of arrivals and formulated it as a semi-Markov decision process. In this section, we introduce a practical problem as an application of controlled queuing system models, and in the next section we formulate this application using the knowledge obtained from Section II. We consider an issue in the area of cellular network management. In cellular networks, it is crucial to be able to use the available radio spectrum as efficiently as possible. At the same time, there is a need to provide a certain quality of service (QOS) for mobile users, which may require a sophisticated resource management scheme. Let us consider a multiservice cellular network system with a fixed channel allocation scheme combined with a reuse partitioning scheme.
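As a concrete check of the service-completion law just described, the sketch below evaluates p(j|t,s,a) numerically, assuming (as in standard treatments of this kind of model) that the post-decision customer count is s + a. The function name and the parameter values are ours, not the paper's.

```python
import math

def p_natural(j, t, s, a, mu):
    """Probability that the natural process is in state j, t time units
    after a decision epoch in state s with action a (0 = reject,
    1 = accept), for exponential(mu) services."""
    n = s + a  # customers present immediately after the decision
    if j > n or j < 0:
        return 0.0
    if j > 0:
        # exactly n - j service completions, which is Poisson(mu * t)
        k = n - j
        return math.exp(-mu * t) * (mu * t) ** k / math.factorial(k)
    # j == 0: the complement of all positive-occupancy states
    return 1.0 - sum(p_natural(i, t, s, a, mu) for i in range(1, n + 1))

# sanity check: the probabilities over j = 0..s+a sum to one
total = sum(p_natural(j, 2.0, 3, 1, mu=0.7) for j in range(0, 5))
```

At t = 0 the mass sits entirely on the post-decision state s + a, which is a quick way to confirm the bookkeeping.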
By multiservice, we consider two classes of call traffic, namely broadband and narrowband, where each class can be described by a set of traffic parameters describing the call arrival process, average call duration and the number of channels required for each call. Reuse partitioning is an effective concept for achieving high spectrum efficiency. In two-zone reuse partitioning, each cell is divided into two concentric subcells or zones, an inner zone and an outer zone, each corresponding to a different reuse factor (Figure 2) [3][5]. The idea behind reuse partitioning is that because the inner zone is closer to the base station, the power level required for a desired Carrier to Interference Ratio (CIR) in the inner zone can be much lower than in the outer zone. As a result, the channel reuse factor of inner zones can be smaller than that of outer zones, thus resulting in higher spectrum efficiency compared to a system without reuse partitioning. For example, in [3], it is shown that the channel capacity can be increased by 30% by using two reuse factors of N[i]=3 for inner zones and N[o]=9 for outer zones over that achieved by a single reuse factor of N=7. Figure 2 - A Cellular Network with Reuse Partitioning Scheme With N[i] = 3 and N[o] = 7 We can increase radio spectrum utilization further by adding an asymmetric channel borrowing scheme to the reuse partitioning. In this additional scheme, the inner zone can borrow free channels from its neighboring outer zone after all nominal channels assigned to the inner zone are used. By asymmetric, we mean that unused channels may be borrowed from outer zones to inner zones but the opposite, from inner zones to outer zones, is not allowed. This is because channel borrowing from inner zones to outer zones causes the so-called channel locking problem, which in turn raises different resource management issues.
In asymmetric channel borrowing between zones, there is a tradeoff between increasing overall channel utilization and providing fair service to users in the same cells in terms of call blocking probability. In other words, it is not possible to increase overall channel utilization by channel sharing without degrading the call blocking probability of the users in the outer zone. Therefore, what we can do is control the channel borrowing in such a way that the constraint on the outer zone call blocking probability is not violated. Otherwise, the channel borrowing may proliferate to such an extent that the outer zone call blocking probability is jeopardized. We call the controller in charge of this task the channel borrowing controller (CBC). In the following section, we formulate the CBC as a Markov decision process. 4 Problem Formulation The scheme described in the previous section can be viewed as a controlled queuing system having multiple servers and no waiting queue, i.e., if no server is available at the moment a new call originates, that call is blocked immediately (Figure 3). This type of queuing system is also known as a loss model. We classify calls into four categories based on the call originating location and the call class, namely: normal narrowband and broadband calls, which originate in the outer zone, and overflow narrowband and broadband calls, which originate in the inner zone. (Notice that the two terms, outer zone call and normal call, mean the same thing, and therefore will be used interchangeably throughout the paper.) More specifically, by overflow calls, we mean inner zone calls making use of channels borrowed from the outer zone, which occurs whenever new calls originate in the inner zone when all channels of the inner zone are busy. All types of calls are modeled as Poisson processes and it is assumed that call durations are exponentially distributed. Each type of call has different traffic parameters.
Figure 3 - Queuing Model for State-based Channel Borrowing Scheme in Reuse Partitioning In reality, it turns out that a so-called Interrupted Poisson Process (IPP) is a more appropriate model for overflow calls in a Markovian loss system [14]. However, it is not necessary to incorporate the IPP into our model here because it can be approximated by a Poisson process under the condition that the traffic load of overflow calls is significantly less than that of normal calls. In this system, channels are assigned in the following way. A newly originating normal call is accepted by the system as long as a server is idle. In contrast, a new overflow call coming from the inner zone is controlled by the CBC regardless of the type of the call. The detailed operations of the CBC will be discussed in the next section. The purpose of the CBC is to maximize the overall channel utilization while guaranteeing the outer zone call blocking probability. With the definitions above, the system ruled by the CBC is now ready to be formulated as a Markov decision process. As mentioned earlier, five elements need to be specified in Markov decision process modeling: states, actions, state transition probabilities, a reward function and decision epochs [1] [11]. These elements are given as follows. The state is defined by the vector s = (s[n], s[b]), where s[n], s[b] represent the number of on-going narrowband calls and broadband calls in the system, respectively. Four kinds of actions are considered in each state, namely: (r, r), meaning rejecting both classes of overflow calls; (a, r), accepting only narrowband overflow calls; (r, a), accepting only broadband overflow calls; and (a, a), accepting both classes of overflow calls. Notice that borrowing decisions are made before the occurrence of an event instead of after. This is necessary to reduce the size of the problem [13].
The state transition probabilities are calculated based on aggregate call arrival or departure rates, and they depend on the state and the action chosen. Notice that since the underlying Markov chain is a continuous-time system, it is required to transform the transition rates to equivalent transition probabilities through a uniformization process [1]. After uniformization, they are given in terms of the following quantities: μ[n], μ[b] represent the call departure rates of narrowband and broadband calls, respectively; λ[n], λ[b], γ[n], γ[b] denote the arrival rates of normal narrowband calls, normal broadband calls, overflow narrowband calls and overflow broadband calls, respectively; and I[n], I[b] are indicator functions whose value is 0 (reject) or 1 (accept) depending on the action taken. The channel utilization rate is used as the reward function. The reward function also needs to be transformed through uniformization. The reward function r(s,a) is defined after uniformization in terms of c[n], c[b], the channel requirements of a narrowband and a broadband call, respectively. In order to complete the formulation, one more element needs to be specified, which is a performance criterion. We use the expected average channel utilization, which is calculated based on the reward function, as the performance criterion. The expected value is preferable over other performance criteria such as total reward or discounted total reward because the controller should make decisions frequently and indefinitely. In the next section, we simulate the application. 5 Simulation Simulation is a useful method to gain insight into the statistical behavior of a system. In this section, we discuss the simulator developed to simulate the controlled queuing system described above, using discrete event simulation techniques for Markovian queuing systems, and present several simulation results for our application, the optimal channel borrowing control problem.
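The uniformization step referred to above can be sketched generically: every transition rate is divided by a uniform rate Λ at least as large as the largest total exit rate, and the leftover probability becomes a self-loop. The sketch below is illustrative only; the rate values are made up and do not correspond to the paper's system.

```python
def uniformize(rates, Lam):
    """Convert a continuous-time transition-rate map {state: {next: rate}}
    into one-step transition probabilities under uniformization constant
    Lam, which must be >= the total exit rate of every state."""
    probs = {}
    for s, out in rates.items():
        exit_rate = sum(out.values())
        assert Lam >= exit_rate, "uniformization constant too small"
        row = {s2: r / Lam for s2, r in out.items()}
        # leftover probability stays in the same state (self-loop)
        row[s] = row.get(s, 0.0) + 1.0 - exit_rate / Lam
        probs[s] = row
    return probs

# toy two-state birth-death chain (rates are illustrative only)
P = uniformize({0: {1: 0.5}, 1: {0: 1.2}}, Lam=2.0)
```

Each row of the result sums to one, so the discrete-time chain is a proper stochastic matrix with the same stationary behavior as the original rates.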
The numerical results show how the QOS statistics change according to the channel borrowing control policy. This section is organized as follows. We first give an overview of the simulator developed and discuss policy representation in Section V-B. We then cover the algorithms of the CBC and present numerical results obtained from the simulation. In the last subsection, a method to find the best policy is described. A. An Overview of the Simulator The underlying structure is based on a typical discrete event simulator, a popular method for simulating Markovian queuing systems. The main components of CQS are the following: • Event Queue (EQ) and Event Handler (EH) on the upper left • Channel Assignment Controller (CAC) and Channel Borrowing Controller (CBC) in the middle • A channel borrowing control policy to be experimented with on the upper right All types of call arrivals, including overflow calls, and departures, i.e., call hang-ups, are treated as events and are put into the EQ together with a time stamp indicating when the event should happen or be processed. Whenever an event is extracted from the head of the EQ and processed by the EH, the next event of the same type is scheduled immediately according to the given probability distribution using a random number generator and is put into the tail of the EQ. If the event extracted from the head of the EQ turns out to be a call arrival, the EH calls the CAC to serve the call request. The basic task of the CAC is to assign available channels to users. As mentioned earlier, there are two types of call requests, normal and overflow calls. If the incoming call is a normal call, then the CAC checks the current state of the system describing the current channel occupancy and admits the call as long as there are available channel resources, or rejects it otherwise. In other words, normal calls are blocked only when the system is full. This task has nothing to do with the channel borrowing control policy.
If it is an overflow call, then the CAC calls its submodule, the CBC, and the CBC takes care of the overflow call request. The CBC, as in the case above, checks the current state of the system and the specified channel borrowing control policy for that state, and accepts or rejects depending on the action associated with that state. In this case, it is possible to reject an overflow call request despite the availability of channels, in order to maximize the overall channel utilization while guaranteeing the specified call blocking probabilities for normal calls. After an event has been processed, control is handed back to the EH to continue processing the next events. This task lasts until the simulation clock reaches the given maximum simulation time. While processing events, all statistical data are collected and updated. Among them, we are particularly interested in the overall channel utilization and the blocking probabilities of normal calls. After the simulation, we compare the resulting blocking probabilities of normal calls with the required ones and see if the given policy made the system meet those QOS measures. In the next section, we discuss how to represent channel borrowing control policies. B. Representation of Channel Borrowing Control Policy In this section, we introduce a simple and intuitive way of representing policies using a matrix and vectors for the simulator. We start with a generic representation of stationary policies for the case of a two-service network with narrowband and broadband calls. Let π be a stationary policy. A policy π specifies the channel borrowing rule to be used throughout the simulation.
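The EQ/EH cycle described above can be sketched as a standard discrete-event loop built on a priority queue. Everything here (the event names, the trivial handler, the parameter values) is illustrative scaffolding, not the simulator's actual code.

```python
import heapq
import random

def simulate(max_time, arrival_rate, seed=0):
    """Minimal discrete-event skeleton: pop the earliest event, handle it,
    and immediately schedule the next event of the same type, as in the
    EQ/EH cycle described in the text."""
    rng = random.Random(seed)
    eq = []  # event queue of (timestamp, event_type) pairs
    heapq.heappush(eq, (rng.expovariate(arrival_rate), "arrival"))
    handled = 0
    while eq:
        t, ev = heapq.heappop(eq)
        if t > max_time:
            break
        handled += 1  # a real handler would dispatch to CAC/CBC here
        if ev == "arrival":
            # schedule the next event of the same type right away
            heapq.heappush(eq, (t + rng.expovariate(arrival_rate), "arrival"))
    return handled

n = simulate(max_time=100.0, arrival_rate=2.0)
```

With an arrival rate of 2 per unit time over 100 time units, the loop handles on the order of 200 events; statistics would be accumulated inside the handler in a full simulator.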
A policy is called stationary if the rule does not change with time t, and it can be represented mathematically as follows. Let the channel borrowing control policy π: S → {a,r}^2 be defined by the vector mapping π(s) = (π[n](s), π[b](s))^T, where π[n or b](s) = a (or r) signifies that the system will accept (or reject) a future overflow narrowband or broadband call when it is in state s. Let Π denote the set of all possible policies. We will discuss the cardinality of Π later. Using this notation, a policy can be represented in a tabular form with two columns, a state column and an action column, as an input form for the simulator. This representation is simple and useful in the case of a small system. In the case of moderate and large systems, however, a simpler form is required. If we restrict the policy to the multidimensional threshold (or coordinate convex) type of policies, which is the case in this paper, the tabular form above can be reduced to a matrix. We call this matrix a policy matrix P; its rows are v[n] and v[b], the control vectors for narrowband overflow calls and broadband overflow calls, respectively, and the entry t^i[k] is the threshold value for type-i calls when the number of on-going broadband calls is k. The number of columns, N[b], is determined by the total number of channels in the system, C, and the required number of channels for a broadband call, c[b], as above. In other words, channel borrowing control policies can be represented by a set of two control vectors, v[n] and v[b]. Vectors can be used because the Markovian system is discrete. If the threshold value t^i[k] is -1, it means rejection of the next type-i overflow call. The Channel Borrowing Controller (CBC) makes decisions based on those threshold values. The threshold value t^i[k] is not necessarily an integer.
If t^i[k] has an explicit decimal point followed by a fractional part, the fractional part will be taken as the probability of acceptance of a new call in the state s = (s[n], s[b]) = (t^i[k] + 1, k). More detail will be discussed in Section IV.4. C. Execution of Channel Borrowing Control When a new overflow call request comes into the system, we first examine the current state of the system and see how many on-going broadband calls, s[b], are in the system. Then we take the (s[b]+1)-th column in the policy matrix P as the threshold vector, T. For instance, in the previous example, say s[b] = 2. Then T would be [2 0]^T, which in turn means that the threshold for overflow narrowband calls, q[n], is 2 and the threshold for overflow broadband calls, q[b], is 0. Then we check the state of the system again and see if the number of on-going narrowband calls, s[n], is less than or equal to the threshold value for the type of the requesting call. If so, the call will be accepted. Otherwise, it will be rejected. A similar procedure is followed for a new broadband overflow call. As mentioned earlier, the thresholds, q[n] and q[b], may be numbers with a fractional part. In that case, the thresholds are decomposed as follows: q = int(q) + fract(q) The fractional part, fract(q), is used as the probability of acceptance of a new overflow call in state s = (s[n], s[b]) = (t^i[k] + 1, k). We call any policy having fractional parts in its policy matrix a randomized policy. D. Simulation Results Figures 4 through 7 show the simulation results for an example with the total number of channels in the system C = 24. In this case the dimension of the policy matrix is 2 × 13. In this example, the traffic load of overflow calls was fixed at 30% of normal calls for both types, and the traffic load of broadband calls was fixed at 30% of narrowband calls.
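The decision rule just described, including the randomized-policy case, can be sketched as follows: look up the threshold column for the current number of broadband calls, accept if the narrowband count is at or below the threshold, and at the boundary state use the fractional part as an acceptance probability. The function name and the example matrix below are ours, chosen only to illustrate the mechanism.

```python
import random

def cbc_decide(s_n, s_b, policy_matrix, call_type, rng=random):
    """Accept (True) or reject (False) an overflow call of `call_type`
    ('n' or 'b') given the state (s_n narrowband, s_b broadband calls).
    policy_matrix has two rows (narrowband and broadband thresholds),
    one column per possible broadband count. A threshold of -1 rejects
    outright; a fractional part is the acceptance probability in the
    boundary state, i.e. a randomized policy."""
    row = 0 if call_type == "n" else 1
    q = policy_matrix[row][s_b]
    whole = int(q)
    frac = q - whole
    if s_n <= whole:
        return True
    if s_n == whole + 1 and frac > 0:
        return rng.random() < frac  # randomized acceptance at the boundary
    return False

# illustrative 2x3 policy matrix (the threshold values are made up)
P = [[2, 1.5, -1],   # narrowband thresholds for s_b = 0, 1, 2
     [1, 0,   -1]]   # broadband thresholds
```

For example, with this matrix a narrowband overflow request in state (1, 1) is accepted deterministically, while one in state (2, 1) is accepted with probability 0.5.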
The channel requirement of narrowband calls, c[n], is one channel per call, and the channel requirement of broadband calls, c[b], is twice c[n]. The average call duration of broadband calls is also twice that of narrowband calls. The policy matrices used are summarized in Table 1. Figure 4 - Simulation results for v[n] = v[1] Again, -1 in matrix P means call rejection. The results show the blocking probabilities of overflow calls, α[no] and α[bo], and the overall channel utilization, u. Among the results, we are particularly interested in the normal call blocking probabilities, α[n] and α[b], because these are the QOS measures that we want to guarantee. We continue the simulation experiments with different channel borrowing policies in Figures 5 through 7. All the control vectors, v[n] and v[b], used in this series of simulation experiments are summarized in Table 1. We used five different vectors for v[n] and six different vectors for v[b], and each chart shows six different combinations of v[n] and v[b]. We denote these six different control vectors by v[1] through v[6] as shown in Table 1. As the threshold values in the vectors imply, vector v[1] is the most generous channel borrowing control vector. On the other hand, v[6] was chosen as the strictest control vector, and the other vectors, v[2] through v[5], were chosen between v[1] and v[6]. In Figure 4, v[n] is fixed at v[1] and the x axis is v[b], changing from v[1] through v[6]. As shown in the figure, the call blocking probabilities of normal calls and narrowband overflow calls tend to decrease with v[b], and so does the overall channel utilization. In contrast, the call blocking probability of broadband overflow calls increases rapidly.
This is apparently because, as we change v[b] from v[1] through v[6], the controller allows fewer channels for broadband overflow call requests, thus increasing their blocking probability; the other calls, normal calls and narrowband overflow calls, take advantage of this, and therefore their call blocking probabilities decrease. The overall channel utilization, u, decreases because it is a monotonically increasing function of the overall traffic load, and this traffic load decreases as v[b] varies from v[1] through v[6]. Figure 5 shows similar results. The difference between Figures 4 and 5 is v[n]. In Figure 5, a different v[n] is used, namely v[2]. All QOS measures except the blocking probability of narrowband overflow calls change only slightly, whereas the blocking probability of narrowband overflow calls is shifted up. This happens because v[2] is stricter than v[1] and thus blocks narrowband overflow calls more often. Again, all other calls benefit through decreased blocking probabilities. Figure 5 - Simulation results for v[n] = v[2] Figure 6 - Simulation results for v[n] = v[3] Figure 7 - Simulation results for v[n] = v[4] Figures 6 and 7 display similar behavior for v[n] = v[3] and v[4], respectively. It is evident from all these figures that as we use a stricter channel borrowing control policy for overflow calls, their call blocking probabilities get higher and normal calls can lower their call blocking probabilities. 6 Conclusion We considered a controlled queuing model and derived the corresponding Markov decision process for a simple G/M/1 call admission controller. As an application of the controlled queuing model, we looked into an optimal resource management problem arising in the context of a multiservice cellular network with reuse partitioning. In particular, we considered a channel borrowing scheme between zones in two-zone reuse partitioning in a two-class cellular network and investigated the performance of various channel borrowing policies via simulation.
We also introduced the simulator developed for controlled queuing systems. Most importantly, we demonstrated that Markov decision processes are well suited to this kind of optimization problem. 7 References
[1] M. Puterman, "Markov Decision Processes: Discrete Stochastic Dynamic Programming," Wiley-Interscience, 1994
[2] S. Papavassiliou, L. Tassiulas and P. Tandon, "Meeting QOS Requirements in a Cellular Network with Reuse Partitioning," IEEE JSAC, 12 (8), 1994
[3] S. W. Halpern, "Reuse Partitioning in Cellular System," Proc. IEEE Veh. Technol. Conf. VTC-83, 1983
[4] D. Lucatti, A. Pattavina and V. Trecordi, "Bounds and Performance of Reuse Partitioning in Cellular Networks," Proc. IEEE Globecom, 1996, pp. 1b.4.1-1b.4.8
[5] J. Zander and M. Frodigh, "Capacity Allocation and Channel Assignment in Cellular Radio Systems Using Reuse Partitioning," Electronic Letters, 28 (5), Feb. 1992
[6] J. Hyman, A. Lazar and G. Pacifici, "A Separation Principle Between Scheduling and Admission Control for Broadband Switching," IEEE JSAC, 11 (4), May 1993
[7] R. Guerin, "Queueing-Blocking System with Two Arrival Streams and Guard Channels," IEEE Trans. on Comm., 36 (2), Feb. 1988
[8] I. Katzela and M. Naghshineh, "Channel Assignment Schemes for Cellular Mobile Telecommunication Systems: A Comprehensive Survey," IEEE Personal Comm. Magazine, Jun. 1996
[9] S. Oh and D. Tcha, "Prioritized Channel Assignment in a Cellular Radio Network," IEEE Trans. on Comm., 40 (7), Jul. 1992
[10] M. Frodigh, "Reuse Partitioning Combined with Traffic Adaptive Channel Assignment for Highway Microcellular Systems," Proc. Globecom, Dec. 1992, pp. 1414-1418
[11] Henk Tijms, "Stochastic Models: An Algorithmic Approach," John Wiley & Sons, 1994
[12] B. Stavenow, K. Sallberg and B. Eklundh, "Hybrid Channel Assignment and Reuse Partitioning in a Cellular Mobile Telephone System," VTC-87, 1987, pp. 405-411
[13] K. Ross and D. Tsang, "Optimal Circuit Access Policies in an ISDN Environment: A Markov Decision Approach," IEEE Trans. on Comm., 37 (9), Sep. 1989
[14] A. Kuczura, "The Interrupted Poisson Process As An Overflow Process," The Bell System Technical Journal, 52 (3), Mar. 1973
parametric and implicitly defined curves
October 14th 2008, 08:42 AM

A stick of length 4 pivots counterclockwise around the origin of R^2 at a rate of one revolution per second. A smaller stick of length 2 is attached to the end of the larger stick, and pivots counterclockwise around that point at a rate of two revolutions per second. Assuming that both sticks point in the positive x direction at time t = 0, express the curve drawn by the end of the second stick in terms of the parameter t of time measured in seconds.
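One way to set up the parametrization (a sketch, assuming the small stick's two revolutions per second are measured relative to the ground rather than relative to the big stick):

```latex
% Tip of the big stick: length 4, one rev/s, so its angle at time t is 2\pi t.
% The small stick adds a length-2 arm at angle 4\pi t (two rev/s).
x(t) = 4\cos(2\pi t) + 2\cos(4\pi t), \qquad
y(t) = 4\sin(2\pi t) + 2\sin(4\pi t)
% Check: x(0) = 6, y(0) = 0, i.e. both sticks along the positive x-axis.
```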
Using astronaut Mike Fossum’s YouTube video to measure ATV acceleration
By Rhett Allain

The Automated Transfer Vehicle (ATV) doesn’t just bring supplies to the International Space Station. It can also be used for ISS reboosts. What is a reboost? In short, during a reboost, the ISS velocity is increased by a small amount to bring the space station up to a slightly higher orbit. Why is this needed? Well, although the ISS is in space, there is still stuff up there (gas from the atmosphere) that exerts a small drag force on the Station, decreasing its velocity. The reboosts are there just to keep it where it needs to be. This video shows the inside of the ISS during an ATV reboost, i.e. when the ATV's main thrusters were firing. Let’s see if we can estimate the ATV thrust based on the acceleration of astronauts inside the space station.

Editor's note: In addition to having a knack for science communication, Rhett Allain is Associate Professor of Physics at Southeastern Louisiana University. He writes regularly for Wired's Dot Physics blog and is a bit of a physics fanatic who spends more time than many pondering how daily life intersects with science. With the recently announced development of ATV in cooperation with NASA for Orion, we're delighted to feature a few posts from the far side of the Atlantic. Enjoy! – DGS
As you can see in Mike's video above, however, Mike, astro Satoshi Furukawa and cosmonaut Sergey Volkov are moving away from the camera, so I will measure the angular size of a person. As things move farther away from a camera, they also appear steadily smaller. Here is a diagram that shows the relationship between angle, size and distance. If you know the angle theta (θ) and the length L of the object, you can find the distance (which I call r) with the formula:

r = L / θ

With this, I can mark a point on each side of one of the receding astronauts as he accelerates away from the camera. With some basic estimations for the angular view of the camera (and size of an astronaut), I get the following plot of distance from the camera for one of the astronauts. Since the graph of motion appears to be quadratic, I can compare the polynomial coefficients with the kinematic equation for constant acceleration, r(t) = r_0 + v_0 t + (1/2) a t^2. This says that the astronaut's acceleration (with respect to the camera) was 0.034 m/s^2. Since the astronaut's body is in free flight orbit, this also means that the ISS had an acceleration of 0.034 m/s^2 in the opposite direction.

So far, so good. But remember, this isn't the official acceleration due to ATV's thrust – this is just an estimate. Although this video analysis method is fun, there are other ways to get the acceleration of the ISS during a reboost. For this particular reboost, the change in speed of the ISS was 5.75 m/s. (Editor's note: The YT video used in Rhett's calculations was recorded during the ATV-2 reboost conducted on 15 June 2011, or possibly on 12 June; click on dates for details.) I'm not sure exactly how long this reboost lasted, but the video was 2:13 long. If we assume that is the time for the change in velocity, then we can calculate the acceleration: a = Δv/Δt = (5.75 m/s)/(133 s) ≈ 0.043 m/s^2. That's not exactly the same value from my video analysis, but it is close enough for me.
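As a sanity check (a sketch, not Rhett's actual analysis: the 0.3 rad angular size, the 0.017 quadratic coefficient and the 133 s window below are assumptions chosen to match the numbers quoted in the post), the two acceleration estimates can be reproduced with basic arithmetic:

```python
# Angular-size distance estimate: an object of known length L spanning
# a small angle theta (radians) is roughly r = L / theta away.
def distance_from_angular_size(length_m, theta_rad):
    return length_m / theta_rad

# E.g. a 1.8 m astronaut spanning ~0.3 rad is about 6 m from the camera.
r = distance_from_angular_size(1.8, 0.3)

# Kinematics: r(t) = r0 + v0*t + 0.5*a*t**2, so a fitted quadratic
# coefficient c implies a = 2*c.  (c = 0.017 is assumed here so that
# a matches the 0.034 m/s^2 quoted in the post.)
a_video = 2 * 0.017

# Cross-check: a delta-v of 5.75 m/s over the 2:13 (133 s) video length.
a_deltav = 5.75 / 133

print(r, a_video, a_deltav)
```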
Thrust from ATV

The ATV thrusters have to exert a force on the ISS in order to obtain this acceleration. What magnitude of thrust would this require? If this is the only force acting on the ISS (it isn't), then we can say:

F[thrust] = ma

The ISS has a mass of about 4.5 x 10^5 kg. So, the amount of acceleration calculated above would require a thrust with a magnitude around 1.5 x 10^4 (15 000) Newtons. That might seem like a large force, but it isn't. Just as a comparison, this would be about the same as the gravitational force on two adults on the surface of the Earth. If you want some serious thrust, you could look at one of the Solid Rocket Boosters that were used during the launch of the Space Shuttle. Each of these rockets had a thrust around 14 million Newtons.

Editor's note: ATV's actual Orbital Correction System thrusters provide 4 x 490 N, for a total of 1960 N. ATV flight dynamics engineer Laurent Arzel, at ATV-CC, says that ATV only uses two thrusters in parallel at one time, so the thrust during reboosts is fixed at around 2 x 490 or about 1000 N. One reason why Rhett's thrust estimate above, 15 000 N, is much higher is that he assumed the reboost ran only during Mike's 2:13 video. It actually ran about 40:12, or 2412 seconds. Plugging this back into the equations gives a thrust estimate for ATV of about 1073 N, much closer to the actual 1000 N. This shows that Rhett's calculations are correctly done, but just off a bit in the burn duration.

Here is the cool part: What does this thrust say about the motion of the ISS? Let's just approximate that the ISS needs a similar reboost about once a month. That would mean that this reboost takes the space station from some speed to some value that is around 5 m/s greater in magnitude.
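The thrust arithmetic, including the editor's correction, and the drag estimate that follows from the once-a-month approximation can all be checked in a few lines (all figures are the ones quoted in the post; the 30-day month is an assumption):

```python
M_ISS = 4.5e5    # ISS mass, kg (approximate)

# Rhett's thrust estimate, from the video-analysis acceleration:
thrust_video = M_ISS * 0.034            # ~1.5e4 N, as quoted

# Editor's correction: spread the 5.75 m/s delta-v over the real
# 40:12 (2412 s) burn instead of the 2:13 video.
thrust_actual = M_ISS * 5.75 / 2412     # ~1073 N, close to ATV's ~1000 N

# Between reboosts: regaining ~5 m/s once per (30-day) month implies
# an average drag deceleration and drag force of roughly:
decel = 5.0 / (30 * 86400)              # ~1.9e-6 m/s^2
drag = M_ISS * decel                    # ~0.9 N, about a finger's push

print(thrust_video, thrust_actual, decel, drag)
```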
If you know the increase in speed during a reboost, you know the decrease in speed over the month in between. This means that on a typical day, the ISS has a (negative) acceleration of about a = -(5 m/s)/(30 x 86 400 s) ≈ -1.9 x 10^-6 m/s^2. Since this is an estimate, let's just call it -2 x 10^-6 m/s^2. Now I can use this acceleration to estimate the daily drag force on the ISS. Using the same mass as mentioned above, this acceleration goes with a drag force of about 0.9 Newtons. This is about the force you would exert by pushing on something with your finger. It's not a large drag force, but it is always there, so it eventually slows down the space station enough to cause a problem, which is 'fixed' by the periodic reboost provided by ATV or the Station's own engines.

Why do astronauts accelerate backwards?

This is the really interesting question! Just about all of the introductory physics textbook examples you see are in an 'inertial reference frame'. What does that even mean? It means that the viewpoint for the motion is from a frame that does not accelerate. In an inertial reference frame, our Newtonian physics models work. In particular, we can say that the net force on an object equals the mass of that object times its acceleration. More importantly, all of the forces on any particular object are due to interactions with other objects.

You are probably sitting somewhere that is very close to an 'inertial frame' right now. In this frame of reference, you could see something like a book sitting on a table. The acceleration of this book is zero, which implies that all the forces on it have to add up to zero. But what forces are there? There is the gravitational force – which is an interaction with the Earth – and then there is the force of the table pushing up on the book. Each force is an interaction with an object. But what if you aren't in an inertial frame of reference? What if your frame is accelerating? A great example is when you are in a moving car that is turning.
If the car turns to the left, you can feel that something is different – you feel a force pushing you to the right. Things seem to behave differently because we want to use our ideas of physics for inertial frames (the book sitting on a table) even though the turning car is a non-inertial frame. In the case of a car turning to the left, you feel like you have a new force pushing you to the right. But what object is this right-pushing force an interaction with? The answer: nothing! This is the 'fake' force we call the centrifugal force. So, a fake force is a force you need to add in an accelerating frame. This fake force is not an interaction with another object – I guess that's why you would call it 'fake'.

There is one more thing. The fake force can be written as F_fake = -m x a_frame, where a_frame is the acceleration of the reference frame. If the reference frame accelerates one way, you would feel a fake force in the opposite direction.

Now back to the astronauts in the ISS. The cool thing is that there is more than one fake force on these astronauts in the YT video. Suppose that we looked at an astronaut at some point before a reboost. In this case, the astronaut would just 'float' at any location. Here is a diagram showing the forces on an astronaut: in this frame, the fake force and the gravitational force have the same magnitude. This makes the net force on the astronaut zero, so that he or she just floats there. Of course, if you were able to observe the astronaut from outside the ISS (and in an inertial frame), you would see that the astronaut actually is accelerating. His or her acceleration would be the acceleration corresponding to only the gravitational force acting on the astronaut's body.

During the ATV reboost, the ISS accelerates two ways at the same time. It accelerates as it orbits the Earth (because it is moving in a circle) and it accelerates because of the thrust from the ATV. This would make another fake force that pushes the astronauts in the opposite direction of the acceleration of the ISS.
So, what is the difference between the acceleration of the ISS due to the gravitational force and the thruster? Why does one make the astronaut float and one does not? The difference is gravity. The gravitational force on the Space Station (from the Earth) also pulls on the astronaut. This gives both the astronaut and the space station the same acceleration, and in a reference frame of the accelerating ISS, it makes a fake force that cancels the gravitational force. The thrust from the ATV, on the other hand, does not also exert a force on the astronauts. This means that there will be a fake force on the astronaut, but not a real force to cancel it, and they will float slowly to the rear.

Note from Astro Mike Fossum: We shared a copy of this post with Mike, who sent in a couple of comments – Ed. I'll never forget this day and am excited you find it interesting! This is hard, but very cool, stuff! Thanks for helping tell the story!! Mike also adds that they did, in fact, record the side-looking video that Rhett mentions above during this reboost as well. If we get a chance, we'll ping our NASA friends to see if a copy can be shared.

Note on Rhett's companion post in Wired: Rhett has also done a great companion post in Wired based on reboost acceleration, namely 'an estimation of the density of air in orbit on the ISS based on the acceleration during reboosts'.
Cauchy sequences
MT2002 Analysis
Previous page (Subsequences) | Contents | Next page (Continuity for Real functions)

One of the problems with deciding if a sequence is convergent is that you need to have a limit before you can test the definition. Bernard Bolzano was the first to spot a way round this problem by using an idea first introduced by the French mathematician Augustin Louis Cauchy (1789 to 1857).

A sequence is called a Cauchy sequence if the terms of the sequence eventually all become arbitrarily close to one another. That is, given ε > 0 there exists N such that if m, n > N then |a_m - a_n| < ε.

Note that this definition does not mention a limit and so can be checked from knowledge about the sequence.

2. It is not enough to have each term "close" to the next one: a sequence can satisfy |a_m - a_{m+1}| < ε for all large m (see this earlier example) and still fail the condition for a Cauchy sequence.

3. We will see (shortly) that Cauchy sequences are the same as convergent sequences for sequences in R. However, we will see later that when we introduce the idea of convergence in a more general context, Cauchy sequences and convergent sequences may be different.

4. Cantor (1845 to 1918) used the idea of a Cauchy sequence of rationals to give a constructive definition of the Real numbers independent of the use of Dedekind Sections.

Some properties of Cauchy sequences

1. Any Cauchy sequence is bounded. (When we introduce Cauchy sequences in a more general context later, this result will still hold.) The proof is essentially the same as the corresponding result for convergent sequences.

2. Any convergent sequence is a Cauchy sequence.
Proof: If (a_n) → l then, given ε > 0, choose N so that if n > N we have |a_n - l| < ε/2. Then if m, n > N we have |a_m - a_n| = |(a_m - l) - (a_n - l)| ≤ |a_m - l| + |a_n - l| < ε.

3. The Main Result about Cauchy sequences: A Real Cauchy sequence is convergent.
Proof: Since the sequence is bounded it has a convergent subsequence with limit l, say. Given ε > 0, choose N so that any two terms beyond the Nth are within ε of each other. Some term a_m of the subsequence with m > N is within ε of l, so every a_n with n > N is within ε of a_m and hence within 2ε of l.
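A quick numerical illustration (not part of the original notes) of the remark above that consecutive closeness is not enough: for a_n = √n the gaps between successive terms shrink to zero, yet the sequence is unbounded, so it cannot be Cauchy; by contrast, the partial sums of 1/2^k keep all of their distant terms close together, as a Cauchy sequence must.

```python
import math

# a_n = sqrt(n): consecutive differences shrink to zero...
diff = math.sqrt(1_000_001) - math.sqrt(1_000_000)    # tiny (~0.0005)

# ...but terms far apart stay far apart, so it is NOT Cauchy:
spread = math.sqrt(4_000_000) - math.sqrt(1_000_000)  # exactly 1000.0

# Partial sums s_n of 1/2^k ARE Cauchy: |s_m - s_n| < 2**-min(m, n).
def s(n):
    return sum(2.0 ** -k for k in range(1, n + 1))

gap = abs(s(50) - s(20))    # smaller than 2**-20

print(diff, spread, gap)
```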
1. The fact that in R Cauchy sequences are the same as convergent sequences is sometimes called the Cauchy criterion for convergence.

2. The use of the Completeness Axiom to prove the last result is crucial. For example, let (a_n) be a sequence of rational numbers converging to an irrational [e.g. (1, 1.4, 1.41, 1.414, ...) converging to √2]. Then since (a_n) is a convergent sequence in R it is a Cauchy sequence in R and hence also a Cauchy sequence in Q. But it has no limit in Q.

3. In fact one can formulate the Completeness Axiom in terms of Cauchy sequences. Here are some equivalent formulations of the axiom:

III. Every non-empty subset of R which is bounded above has a least upper bound.
III*. In R every bounded monotonic sequence is convergent.
III**. In R every Cauchy sequence is convergent.

We will see later that the formulation III** is a useful way of generalising the idea of completeness to structures which are more general than ordered fields.

JOC September 2002
Rosemont, IL Geometry Tutor Find a Rosemont, IL Geometry Tutor ...I continued applying trig skills through high school, where I was a straight A student and completed Calculus as a junior. I tutored math through college to stay fresh. Finally, trigonometry always finds its way into my day-to-day work, from teaching college-level physics concepts to building courses for professional auditors. 13 Subjects: including geometry, calculus, statistics, algebra 1 ...All of my teaching and tutoring experience is in grades K-8. I have taught as a classroom teacher and have tutored elementary students before and after school. I am a musician with experience playing in bands. 40 Subjects: including geometry, reading, English, writing I have helped hundreds of students improve their mathematical thinking skills over the past eight years. I have taught the entire mathematics curriculum at my high school in Chicago, including AP Statistics and AP Calculus, and I have been providing highly personalized mathematics tutoring along th... 11 Subjects: including geometry, calculus, statistics, algebra 2 ...I've tutored at Penn State's Learning Center as well as students at home. My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are able to grasp. I use both ana... 34 Subjects: including geometry, reading, writing, statistics ...I am also trained as a Professional Life Coach and I help both students and parents change this situation, which will free children to focus on growing into successful adults, instead of on "battling" (and losing) with math. The way I coach/tutor is heart based - warm, non-judgmental, sensitive,... 
10 Subjects: including geometry, algebra 1, SAT math, algebra 2
Matrix diagonalisable?
November 7th 2010, 10:35 AM #1
Junior Member, Oct 2010

If A is a square matrix with real values, prove that A^T A is diagonalisable and that all of its eigenvalues are real and non-negative.

OK, so I can easily prove that it is diagonalisable, but I can't get the non-negative eigenvalue part. Would appreciate a hint at this point, not the solution please.
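For readers following along, here is the standard observation that settles the non-negativity (added for context; it is not a reply from the thread itself):

```latex
% A^T A is real symmetric, so by the spectral theorem it is orthogonally
% diagonalisable with real eigenvalues.  For non-negativity: if
% A^T A v = \lambda v with v \neq 0, take the inner product with v:
\lambda \, \|v\|^2 \;=\; v^{T} (A^{T} A) v \;=\; (Av)^{T} (Av) \;=\; \|Av\|^2 \;\ge\; 0,
% and dividing by \|v\|^2 > 0 gives \lambda \ge 0.
```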
From the Geometry Forum Newsgroup Archives

1. Imitating Soap Bubbles (Judith Haemmerle) 10/09/96 Looking for a well-defined algorithm: Create the shortest path that connects n points. [one-dimensional soap films] 2. Lobochevskian (hyperbolic) Geometry (Richard Stigels) 06/17/96 In Hyperbolic (or Lobochevskian) Geometry: Given ANY two triangles; the one with the greater sum of its sides has a lesser sum of its angles than the other one (which leads to that two triangles of totally different shapes but with the same side-sum also have the same angle-sum). True or false? 3. Points on the Earth (Judith Haemmerle) 01/19/96 I need a formula to calculate the distance between two points on the Earth's surface, knowing the latitudes and longitudes of the points. Also needed is a formula to calculate the direction from one point to the other. 4. A banana is more convex than a cup (Tim Poston) 01/10/96 A discussion on the topological convexity of a banana. [Gromov "Hyperbolic manifolds, groups and actions," Annals of Mathematics Studies; Chai, "A geometric inequality for certain types of compact sets in R^n," Amer. Journal of Math.]
Extra credit geometry proof (Steve Heintz) 10/29/95 Given: Rectangle ABCD with P any point on line AB. line PE is perpendicular to line BD, line PM is perpendicular to line AC, line AN is perpendicular to line BD. Prove: PE+PM=AN. 4. Schoolteacher Follies (Richard Mateosian) 10/13/95 Public school education now and in the past; press reports of schooling. 5. Defining a rectangle (Patricia) 10/12/95 I learned in California that a rectangle was defined by having four sides and four right angles, making a square a type of rectangle. Now that I am teaching for a year in Scotland my students are insisting that a rectangle must have two sides which are longer. The dictionaries which we looked at have definitions to justify either claim. Can anyone help us out? 6. Standard Math Symbols in ASCII (Gerald D. Brown) 10/10/95 Does anybody know of a list of recognized ASCII character strings used to denote standard math symbols? 7. Pythagorean Theory and a President (Matt Bradbury) 09/26/95 I have a question that has our H.S. History of Math class stumped. We have to find people that were significant in mathematical history, and one of the questions is: Discovered proof of Pythagorean Theorem, and a U.S. President. [Congressman Garfield] 8. Quaternions (DELBECQUE Yannick) 09/23/95 I am looking for interesting facts about quaternions for a music composer who wishes to use them to compose. Geometrical interpretations of some proprieties of quaternions are the best things I can bring him, since they are simple to understand and visualise and they can be "put into music" more easily. [Kantor, I.L. and A.S. Solodovnikov. _Hypercomplex Numbers: An Elementary Introduction to Algebras_. Iannis Xenakis, Formal Music. Ebbinghaus, H.-D. et al. _Numbers_. Readings in Mathematics, Graduate Texts in Mathematics] 9. 
Request for advice - Independent HS project (Mark Jaffee) 09/08/95 Request for advice on teaching high school with Sketchpad and limited numbers of computers, and for a student's independent math project on "Philosophy of Mathematics." [Carl Boyer, Morris Kline, Phillip Kitcher] 10. Polyhedra Database (Andrew Hume) 08/30/95 Request for a volunteer to take over a polyhedra database and introduce it to the WWW. [offer to work with such a volunteer on polyhedra nomenclature] 11. 3D: Tetrahedra colliding? (Enno Rehling) 07/19/95 Given 2 Tetrahedra, determine whether they intersect or not. It's not sufficient to test for one of the corners being inside the other tetrahedron. I'd like to know the fastest way to calculate 12. Geometric Orthodontics Questions (Paulo Jorge Santos) 07/19/95 How calculate a perpendicular if you have only 3 points (A,B and C)? How calculate the distance between C and the line AB? (the point C must be perpendicular to AB). If you have two lines (and the respective linear equations), how calculate the interception points (x and y)? 13. Geometry on a Graphing Calculator (Suzanne Ewing) 07/19/95 I am interested in finding lesson ideas for geometry dealing with the graphing calculator. I would prefer ideas that deal with circles, triangles, quadrilaterals, etc., but any ideas would be helpful. [suggestions for newsgroup posts and Cabri Geometry II] 14. Geometry Textbook (Gary Wang) 07/14/95 Since I began my studies in math education, I've become more and more interested in the use of computer software designed for geometry. I've seen a lot of activities, as well as activity books. However, I was wondering if there's been a geometry textbook written in conjunction with one of the software packages, such as Cabri Geometre or Geometer's Sketchpad? [_Machine Proofs in Geometry_, _Discovering Geometry_ (Serra)] 15. Toblerone chocolate box shape (Steven Kirshner) 06/26/95 Is anyone familiar with the chocolate Toblerone? 
What do you call the shape of the box it comes in? [prism] 16. Sphere packing (Ed Dickey) 03/23/95 Can someone (Prof. Conway?) provide an update on the sphere packing problem? Has Hsiang's proof held up or is this still an open question? Similarly, has the 4-D kissing number problem (24 or 25) been settled? - The sphere packing problem is to find the greatest proportion of a fixed space filled by identical spheres. To quote Conway and Sloane who quote Rogers "many mathematicians believe, and all physicists know" that the correct proportion is pi/sqrt(18) or 0.7405 ... . Hsiang offered a proof in 1991. - The kissing number problem is to find the greatest number of "spheres," all the same size, that can be arranged around another sphere. In two dimensions, the answer is six (six circles around another circle). In three dimensions, it's 12 (try it with tennis or billiard balls); proving it is another matter. In 4-D, it's 24 or 25 (unless the question has been settled). 17. Plane Symmetry and TesselMania (PatsyJMJ) 02/13/95 Some years ago in Creative computing magazine, there was a listing for a BASIC program that used the 17 types of plane symmetry for simple drawing with the cursor, to form Escher-like pictures. As an artist and quiltmaker by avocation, I would like to know where to find some similar program using VGA or better screens. - TesselMania. 18. Morgan's Theorem (Minchbear) 02/02/95 Ryan Morgan is a high school student in Baltimore. He has made an interesting discovery that is an extension of Marion's Theorem. If you are like me, I didn't know what Marion's Theorem was until a few weeks ago. The whole topic is explained quite well on p.726 of the December, 1994, issue of the Mathematics Teacher. More information also given in this thread. 19. Penrose and Quasiperiodic Tilings (Margaret Sinclair) 01/20/95 Does anyone know of a supplier of Penrose tiles (i.e. cardboard or plastic "kites" and "darts")? 
Alternatively, has anyone come up with a method for making them at home? I got interested in these tiles because of an article in Discover magazine, February 1990. Does anyone have any other articles or books to recommend? I would like to know what Steinhardt's procedures are for building Penrose tilings with strictly local rules and to find out if there has been any success at finding a related procedure for three dimensions. [Martin Gardner, 1971 / Conway in "Tilings", Grunbaum & Shephard / rhombic tiles / matching conditions / Petra Gummelt of Greifswald and the "Cartwheel decagon" / dart and kite rhombs / tesselations / Socolar comment: physics, 2D tilings and symmetry / non-repeating patterns and crinkly surfaces / Wieringa roof] 1. Where is GSP? (John Olive) 08/24/94 Is the Forum going to start a special section for users of Sketchpad and Cabri Geometer? [Current organization of the Forum / possibility of new newsgroup / demo versions of Sketchpad and Cabri from Forum archives] 2. Integrating Geometry and Biology (Deobra Ray) 07/21/94 A teacber asks for suggestions about a 3-hour Geometry/Biology class. [books: By Nature's Design, Connections / Mathematics in Medicine and the Life Sciences/ game of life] 3. Internet access for teachers (Michelle Manes) 06/08/94 June 1994 listing of ways teachers in various states of the U.S. get email and/or full Internet access, by state. 4. The Forum Newsreader Plans (Helen Plotkin) 04/05/94 A newsreading module designed for use with a World Wide Web browser like Mosaic. The general idea, how the URLs will work. [what about speed? / batch fetching desirable / how to cope with a move? / .newsrc formats don't keep message-id's] 5. Visual Basic (Ben Preddy) 03/27/94 What is Visual Basic--how does it differ from 'regular' basic? [Microsoft's Basic for Windows / first draw needed objects, then write event-driven code / Dynamic Link Libraries] 6. 
Question Concerning Buddhism (Lee Rudolph) 02/09/94 Is it possible to think without language? Reason (thought) is only possible by manipulating symbols representing the world. [geometers thinking symbolically/non-symbolically / a wordless movie of a proof (intersection of a plane with a right circular cylinder is an ellipse) / mathematics is a language / visualizing geometric figures / first gain an awareness or intuition of aspects of geometry, then assert the result in language / Einstein on 'visual thinking' / power of being able to name things / not essential to name things before thinking about them / solving puzzles without words / unprofitable discussion?]

7. Karl Menger (Mike Mortenson) 02/07/94 Looking for biographical information on the creator of the 'Menger sponge', "one of those pre-fractal geometric objects that mathematicians once called 'monsters'." [combinatorial and set-theoretic work / 'distance geometry' / book Menger wrote published by Chicago science museum]

8. Geometry Programs for Dos/Windows (Seth Delackner) 01/18/94 Request for info about a good geometry program for Dos/Windows. [Sketchpad for Windows / demo available from Forum / 800 numbers for Sketchpad and Cabri / no Windows version of Cabri]

1. Posting Pictures? (Tom McDougal) 09/14/93 Is anyone working on a system to allow postings of pictures? [Sander articles with figures in Forum directory / Forum newsreader being written / why not build on Nuntius? / interface confusing to novice users]

2. Unit Volume in 4D or higher... (Djamal Bouzida) 08/31/93 Request for formulae: Jacobian in 4 dimensions, general in N dimensions. [Wicklin answers]

3. Simple Questions for All Forum Users (Annie Fetter) 07/01/93 Request for information from readers. What do you use computers for? [testing theories and making conjectures / writing, playing, talking to people, getting information / word processing, databases, spreadsheets, math lesson preparation for grades 6-9]

4. Conics as Envelopes of Their Tangents (Martha Smith) 04/22/93 Does anyone know of sources proving that string figures' resulting curves are the ones claimed? [Veblen & Young, Samuel's _Projective Geometry_ / differential equations / Coxeter & Greitzer's _Geometry Revisited_ / computer environments (Sketchpad) for teaching conics / ways of constructing line conics (paper-folding) / two-circle case as special case of paper folding]

5. Pythagorean Primitives (Dan Bennett) 03/25/93 Two formulas that generate Pythagorean triples--where to look to see if they're original? [books on elementary number theory / Stark's book, _An Introduction to Number Theory_ / McDougal/Littell, _Geometry for Enjoyment & Challenge_ / a geometric interpretation of the formulas]

6. Geometry and Quilts (Claire Groden) 03/01/93 Is anyone using computers to investigate the mathematics of patchwork quilts? [using Aldus Freehand / Geometer's Sketchpad / use quilts as examples to discuss what makes something 'geometric' / software: Architecture - Designing Your Own Home / geometry in real life]

7. Connected Geometry Project (Michelle Manes) 02/26/93 Brief description of the project: develop high school curriculum materials, tools and support for teachers, give students research experience in math--develop collection of activities. [Curriculum Map Maker / dynamic geometry software / examples of activities / a plug for orienteering / list of papers written about the project]

8. Geometry Drawing Programs (L.J. Dickey) 02/23/93 Request for experience reports, for third-year course in geometry. [Cabri-geometre used in independent research on a Mac / fundamental difference between Cabri and Sketchpad / Sketchpad fits best with the Mac / author of Sketchpad comments on Sketchpad and Cabri -- power, interface; 'select, then act' limits complexity; pedagogical ramifications]

9. Geometry Book (Joe Malkevitch) 01/05/93 Recommendation: David Wells, _The Penguin Dictionary of Curious and Interesting Geometry_. [other recommendations: _The Media Magic Catalog_, O'Rourke, _Art Gallery Theorems and Algorithms_]

1. Proofs - My Thoughts (Michael K. Rogers) 12/14/92 How does proof enter into the progressive geometry curriculum? [I liked two-column proofs in high school for their organization and sense of power. / effectiveness of teaching proofs / should we rely totally on insight? / 'beauty' of geometry the development of provable theories based on terms (undefined or defined), assumptions (axioms or postulates), previously proven theories (theorems, propositions) / should we eliminate proofs from the high school curriculum? / proofs without words or goal of rigor / use geometry to teach varieties of logic / geometry a microworld of 'shape' / NCTM Standards / Euclidean geometry]

2. Revitalizing Geometry (Joe Malkevitch) 12/06/92 How to put geometry education on the same exciting footing that events in research in geometry are undergoing? [_Discrete and Computational Geometry_ / current geometry a 'sediment' / Sketchpad a tool for developing spatial intuition / Dirichlet domains / high school teachers teach what they know / college math community responsibility / Wagon's work--results in Mathematica notebook form]

3. Mazes (J. Shipley Newlin) 12/04/92 Newton's Apple request for interesting visual sequences that bring out some of the mathematical basis of mazes. [most common mazes are spanning trees for some graph / Euler characteristic vs. number of loops / self-similar maze that fills out a fractal tile of the plane / labyrinths (mazes with no choice of what way to go)]

4. Geometry Projects (Sue Stetzer) 10/03/92 Requests for suggestions for independent projects for the first quarter of a course (8th grade academically talented). [ruler and compass constructions / paperfolding / why teach constructions? / reasons for constructing and reproducing geometric figures / Senechal - On the Shoulders of Giants / student projects - interest at Dover Sherborn H.S. / robotics, computer vision, medical imaging problems / graph theory and geometry projects / Geometry Project design outline and evaluation criteria, project proposals and questions, regular evaluation model (Dover Sherborn)]

© 1994-2014 Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education. 11 June 1997
Find a Westchester, FL Algebra 2 Tutor

...I'm very patient and can teach any type of Mathematics in a very good manner. Having true compassion for my students, I am always a favorite teacher of my students. I have the ability to be very understanding with my students so that they don't feel bored or desperate because of the complexity of a topic.
23 Subjects: including algebra 2, chemistry, physics, calculus

...Great mathematical skills, specializing in elementary and high school math (basic math, algebra I & II and geometry), SAT math, ASVAB and GED. I am experienced in preparing and editing APA style papers on any subject and of any length. My geometry lessons include formulas for lengths, areas and volumes. The Pythagorean theorem will be explained and applied.
46 Subjects: including algebra 2, Spanish, reading, writing

...I received my Master's in Finance and have three years of contributions and experience in the field. During my graduate years I tutored at an undergraduate and graduate level in all Finance courses. I make sure that the student gets a conceptual understanding of the material.
8 Subjects: including algebra 2, accounting, algebra 1, finance

...I have always gotten A's in all my math courses and am a Dual-Enrolled student currently in Senior year of High School and finishing my AA in college, where I took Prealgebra over the summer. I am required to maintain a 3.0 average and have a 3.7 unweighted and a 6.06 weighted GPA. I aspire to become a mathematics professor.
11 Subjects: including algebra 2, geometry, algebra 1, trigonometry

...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others and always do my best to make sure the information is enjoyable and being presented effectively...
30 Subjects: including algebra 2, reading, biology, algebra 1
How to | Check the Results of NDSolve

For most differential equations, the results given by NDSolve are quite accurate. However, because its results are based on numerical sampling and error estimates, there can occasionally be significant errors. When you need to be sure of the quality of a solution, it is a good idea to do some basic checking.

Comparing a solution computed with WorkingPrecision higher than the default MachinePrecision is often a useful way to check results. Different solutions may save solution data at different points, leading to differences at these points. To keep these differences no larger than the numerical error in the solution, use InterpolationOrder->All.

Use NDSolve with the default MachinePrecision to compute the solution:

Use NDSolve with WorkingPrecision->22 to compute the solution:

Since errors are often quite small, it is useful to view them on a logarithmic scale. RealExponent is effectively equal to Log10[Abs[x]], but without a singularity at zero, so it is a good choice for viewing differences that might be zero at some points:

The residual for a differential equation is the difference of its left and right sides:

The numerical methods used in NDSolve are designed to keep the residual small at any point. You can plot the logs of the residuals:

As you can see, the numerical errors are significantly smaller when using the higher WorkingPrecision. The trade-off is, of course, increased calculation time.

Although in most cases keeping the residual small leads to an accurate numerical solution, this is not always true. A simple example is the Duffing equation:
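The same consistency check, comparing a cheap run against a stricter one, can be sketched with any tolerance-controlled solver. Here is a SciPy analogue; the solver, tolerances, and test problem are my choices, not taken from this page:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic oscillator x'' = -x, whose exact solution x(t) = cos(t)
# gives an independent check on both numerical runs.
def rhs(t, y):
    return [y[1], -y[0]]

def run(rtol, atol):
    return solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0],
                     rtol=rtol, atol=atol, dense_output=True)

loose = run(1e-6, 1e-9)     # analogue of the default-precision run
tight = run(1e-12, 1e-14)   # analogue of the higher-WorkingPrecision run

t = np.linspace(0.0, 20.0, 401)
# The difference between the two runs estimates the error of the loose run...
diff = np.max(np.abs(loose.sol(t)[0] - tight.sol(t)[0]))
# ...and the exact solution confirms the tight run really is more accurate.
err_tight = np.max(np.abs(tight.sol(t)[0] - np.cos(t)))
print(diff, err_tight)
```

As in the Mathematica workflow, agreement between the two runs is evidence, not proof, that the cheaper solution is accurate.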
Inequality problem
Posted June 5th 2009, 11:34 AM

I made a simple model several weeks ago which led to a conjecture, which I have been unable to prove thus far.

Given constants $s$ and $h$, functions $p(x)$ and $C_{i}(x)$ ($i\in\{1,2\}$), and values such that

$\exists x,y:\qquad-p'(x+y)=\frac{C_{1}'(x)}{h/2}=\frac{C_{2}'(y)}{s}$

$\exists x^{*},y^{*}:\qquad-p'(x^{*}+y^{*})=\frac{C_{1}'(x^{*})}{h-s}=\frac{C_{2}'(y^{*})}{s}.$

Prove or disprove that the following always holds:

Best attempt so far: I've tried using other inequalities, but I always managed to find counterexamples in Mathematica. So far, I have not found a counterexample for $(x^{*}-x+y^{*}-y)(C_{1}'(x^{*})+C_{2}'(y^{*}))>C_{1}(x^{*})-C_{1}(x)+C_{2}(y^{*})-C_{2}(y)$, so I think I'm in the right direction.

Perhaps someone knows of an application which can search for counterexamples given such conditions? Any suggestions would be very, very, very much appreciated.
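For the counterexample-search question, a brute-force numeric sweep is easy to sketch. The functional forms below, a linear price p(q) = A - a*q and quadratic costs C_i(x) = c_i*x^2, as well as the tested inequality, are my assumptions: they are one convenient family, not necessarily the poster's model.

```python
import random

def solve_systems(a, c1, c2, h, s):
    """Closed-form solutions of the two first-order systems when
    p(q) = A - a*q (so -p' = a) and C_i(x) = c_i * x**2 (so C_i' = 2*c_i*x)."""
    x  = a * (h / 2) / (2 * c1)   # from a = C1'(x) / (h/2)
    y  = a * s / (2 * c2)         # from a = C2'(y) / s
    xs = a * (h - s) / (2 * c1)   # from a = C1'(x*) / (h - s)
    ys = a * s / (2 * c2)         # from a = C2'(y*) / s
    return x, y, xs, ys

def inequality_holds(a, c1, c2, h, s):
    """Test (x*-x+y*-y)(C1'(x*)+C2'(y*)) > C1(x*)-C1(x)+C2(y*)-C2(y)."""
    x, y, xs, ys = solve_systems(a, c1, c2, h, s)
    lhs = (xs - x + ys - y) * (2 * c1 * xs + 2 * c2 * ys)
    rhs = c1 * (xs**2 - x**2) + c2 * (ys**2 - y**2)
    return lhs > rhs

def search_counterexamples(trials=10_000, seed=0):
    rng = random.Random(seed)
    bad = []
    for _ in range(trials):
        a, c1, c2 = (rng.uniform(0.1, 10) for _ in range(3))
        h = rng.uniform(0.1, 10)
        # keep s well below h/2 so that x* > x with a safe numeric margin
        s = rng.uniform(0.01, 0.45 * h)
        if not inequality_holds(a, c1, c2, h, s):
            bad.append((a, c1, c2, h, s))
    return bad

print(len(search_counterexamples(trials=5000)))
```

For this particular family one can verify the inequality algebraically (here y* = y, and the difference of the two sides factors as (x*-x)(c1(x*-x) + 2*c2*y) > 0 when x* > x), so the sweep finding no counterexamples is expected; richer families of p and C_i would need a numeric root-finder instead of the closed forms.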
A monadic library to build dataflow graphs for OM. This module just exports a set of chosen symbols from Internal.

data BuilderState vector gauge anot

Instances:

  (Show anot, Show (vector gauge), Vector vector) => Show (BuilderState vector gauge anot)
  Eq (Builder v g a ret)
  (TRealm r, Typeable c, C c, Fractional c) => Fractional (Builder v g a (Value r c))
    -- you can convert GHC floating point immediates to Builder.
  (TRealm r, Typeable c, C c) => Num (Builder v g a (Value r c))
    -- you can convert GHC numeric immediates to Builder.
  Show (Builder v g a ret)
  (TRealm r, Typeable c) => C (Builder v g a (Value r c))
    -- choose the larger or the smaller of the two.
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
    -- Builder is Algebraic C. You can use sqrt and so on.
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
    -- Builder is Field C. You can use /, recip.
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
    -- Builder is Ring C. You can use div and mod.
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
    -- Builder is Ring C. You can use one, *.
  (TRealm r, Typeable c) => C (Builder v g a (Value r c))
  (TRealm r, Typeable c, C c) => C (Builder v g a (Value r c))
    -- Builder is Additive C. You can use zero, +, -, negate.
  TRealm r => Boolean (Builder v g a (Value r Bool))
    -- Builder is Boolean. You can use true, false, not, &&, ||.

Build a kernel:

  :: Setup v g a        -- The Orthotope machine setup.
  -> Name               -- The name of the kernel.
  -> Builder v g a ()   -- The builder monad.
  -> Kernel v g a       -- The created kernel.

bind :: (Monad m, Functor m) => m a -> m (m a)
  -- Run the given builder monad, get the result graph node, and wrap it in a
  -- return monad for later use. It is like binding a value to a monad-level
  -- identifier.

Load from a static value:

  :: (TRealm r, Typeable c)
  => Named (StaticValue r c)    -- the named static value to be loaded from.
  -> B (Value r c)              -- The loaded Value as a result.

Store to a static value:

  :: (TRealm r, Typeable c)
  => Named (StaticValue r c)    -- the named static value to be stored on.
  -> Builder v g a (Value r c)  -- The Value to be stored.
  -> Builder v g a ()           -- The result.

Reduce:

  :: Typeable c
  => Operator                          -- The reduction Operator.
  -> Builder v g a (Value TArray c)    -- The TArray Value to be reduced.
  -> Builder v g a (Value TScalar c)   -- The TScalar Value that holds the reduction result.

Broadcast:

  :: Typeable c
  => Builder v g a (Value TScalar c)   -- The TScalar Value to be broadcasted.
  -> Builder v g a (Value TArray c)    -- The TArray Value, all of them containing the global value.

Load the index:

  :: Typeable g
  => Axis v                            -- The axis for which the index is required.
  -> Builder v g a (Value TArray g)    -- The TArray Value that contains the address as a result.

Load the mesh size:

  :: Typeable g
  => Axis v                            -- The axis for which the size is required.
  -> Builder v g a (Value TScalar g)   -- The TScalar Value that contains the size of the mesh in that direction.

Shift:

  :: Typeable c
  => v g                               -- The amount of shift.
  -> Builder v g a (Value TArray c)    -- The TArray Value to be shifted.
  -> Builder v g a (Value TArray c)    -- The shifted TArray Value as a result.

Create an immediate Value from a Haskell concrete value (TRealm is type-inferred):

  :: (TRealm r, Typeable c)
  => c                 -- A Haskell value of type c to be stored.
  -> B (Value r c)     -- TArray Value with the c stored.

cast :: (TRealm r, Typeable c1, Typeable c2) => c2 -> Builder v g a (Value r c1) -> Builder v g a (Value r c2)
  -- Take a phantom object c2, and perform the cast that keeps the realm while
  -- changing the content type from c1 to c2.

annotate :: (TRealm r, Typeable c) => (a -> a) -> Builder v g a (Value r c) -> Builder v g a (Value r c)
  -- Execute the builder, and annotate the very result with the given function.
Surface splines

- Definition, properties and applications (short text on properties of surface splines)
- Basic shapes represented as surface splines (shaded, BB-net, isophotes, Gauss curvature, mean curvature); blends of basic shapes with surface splines (Gauss curvature)
- A single surface spline representing cubes and cylinders; notes on the next three figures (figure)
- More examples of C1-surface splines (figure)
- Blend ratios for modifying surface splines (figure)
- Curvature distribution and interpolation of a part of a cube
- A flattened cube. The left top is not a disk (even though it is almost circular); the right top is an exact disk. Each figure is one surface spline.
- Here is a pretty cat. (wow graphics can be great fun)
- Gaussian and mean curvature on "the cat" (prediction of Gauss curvature and of mean curvature)
- A fancy car model (1) (2) (3)
- Implicit surface blending

Sponsors: NSF, Intel, SDRC ... via grants:
1992: Research Initiation Award: Improving the shape of surfaces by perturbation
1994: National Young Investigator Award: Splines for modeling free-form surfaces.
RIMS Workshop 2003: New methods and subjects in singularity theory
25th--28th November 2003
Organizer: Goo ISHIKAWA (Hokkaido University)
Last modification on 15th Nov. 2003

25th November (Tue.)
10:00 --- 11:00 Goo ISHIKAWA (Hokkaido University) Motivic integration and non-holonomic geometry (A report on the "e-mail seminar").
11:20 --- 12:20 Susumu TANABE (Independent University of Moscow) Combinatorial aspects of the mixed Hodge structure. I
14:00 --- 15:00 Susumu TANABE (Independent University of Moscow) Combinatorial aspects of the mixed Hodge structure. II
15:20 --- 16:20 Shihoko ISHII (Tokyo Institute of Technology) Introduction to arc spaces. I
16:40 --- 17:40 Shihoko ISHII (Tokyo Institute of Technology) Introduction to arc spaces. II

26th November (Wed.)
10:00 --- 11:00 Satoshi KOIKE (Hyogo University of Teacher Education) Introduction to motivic-Zeta functions for real analytic functions and their calculation formula and applications. I
11:20 --- 12:20 Satoshi KOIKE (Hyogo University of Teacher Education) Introduction to motivic-Zeta functions for real analytic functions and their calculation formula and applications. II
14:00 --- 15:00 Toru OHMOTO (Kagoshima University) TBA.
15:20 --- 16:20 Tatsushi MORIOKA (Osaka Kyoiku University) Mirror symmetry and WKB analysis
16:40 --- 17:40 Kojun ABE (Shinshu University) The structure of vector fields on differentiable orbifolds and singularities of polynomial mappings.

27th November (Thu.)
10:00 --- 11:00 Tohru MORIMOTO (Nara Women's University) From nilpotent geometry to subriemannian geometry. I
11:20 --- 12:20 Tohru MORIMOTO (Nara Women's University) From nilpotent geometry to subriemannian geometry. II
14:00 --- 15:00 Jiro ADACHI (Hokkaido University) Various kinds of the Gray-type Theorem.
15:20 --- 16:20 Yoshinori MACHIDA (Numazu College of Technology) Geometry of 3-contact structures; toward its singularity theory.
16:40 --- 17:40 Suuichi IKEGAMI (Hiroshima University) Cobordism Group of Morse functions on Manifolds.

28th November (Fri.)
10:00 --- 11:00 Sachiko SAITO (Hokkaido University of Education) Real Hirzebruch surfaces.
11:20 --- 12:20 Yuichi YAMADA (The University of Electro-Communications) Plane curves obtained by cutting from a lattice and coefficients of Dehn surgery.
14:00 --- 15:00 Miroru YAMAMOTO (Kyushu University) Eversion of a fold map of $S^2$ to ${\bf R}^2$ with one singular set.

Goo Ishikawa
Three-dimensional adaptive coordinate transformations for the Fourier modal method
Optics Express, Vol. 22, Issue 2, pp. 1342-1349 (2014)

The concepts of adaptive coordinates and adaptive spatial resolution have proved to be a valuable tool to improve the convergence characteristics of the Fourier Modal Method (FMM), especially for metallo-dielectric systems. Yet, only two-dimensional adaptive coordinates were used so far. This paper presents the first systematic construction of three-dimensional adaptive coordinate and adaptive spatial resolution transformations in the context of the FMM. For that, the construction of a three-dimensional mesh for a periodic system consisting of two layers of mutually rotated, metallic crosses is discussed. The main impact of this method is that it can be used with any classic FMM code that is able to solve the large FMM eigenproblem. Since the transformation starts and ends in a Cartesian mesh, only the transformed material tensors need to be computed and entered into an existing FMM code.

© 2014 Optical Society of America

1. Introduction

Periodic nanostructures gathered a tremendous amount of interest in the past decade [1. K. Busch, G. von Freymann, S. Linden, S. F. Mingaleev, L. Tkeshelashvili, and M. Wegener, "Periodic nanostructures for photonics," Phys. Rep. 444, 101-202 (2007)]. Evolving experimental techniques allowed for more and more complex structures. Alongside this experimental development, numerical tools to solve Maxwell's equations became much more elaborate. One of these rapidly developing numerical techniques is the Fourier Modal Method (FMM). It is capable of predicting the transmission properties of periodic photonic systems, both dielectric and metallic. The systems normally considered are periodic with respect to the xy-plane and finite in z-direction.
The system is sliced into layers with constant permittivity in z-direction, and in each of these layers an eigenvalue problem is solved which stems from Maxwell's curl equations. This allows expanding the fields into eigenmodes. The layers are then connected using a scattering matrix algorithm which ensures the fulfillment of the continuity conditions [2. L. Li, "Formulation and comparison of two recursive matrix algorithms for modeling layered diffraction gratings," J. Opt. Soc. Am. A 13, 1024-1034 (1996)].

After significant advancements for lamellar gratings and on the topic of the correct Fourier factorization rules [3. P. Lalanne and G. M. Morris, "Highly improved convergence of the coupled-wave method for TM polarization," J. Opt. Soc. Am. A 13, 779-784 (1996); 5. L. Li, "Use of Fourier series in the analysis of discontinuous periodic structures," J. Opt. Soc. Am. A 13, 1870-1876 (1996)], the FMM still faced the problem of properly calculating the response of metallic systems. In-plane stair-casing for not-grid-aligned structures and most of all the Gibbs phenomenon remained a problem.
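The layer-connection step mentioned above can be illustrated with the standard Redheffer star product for scattering matrices. The block convention below (S11/S22 the reflection blocks, S21/S12 the transmission blocks) is one common choice and is not taken from this paper:

```python
import numpy as np

def star(SA, SB):
    """Combine two scattering matrices, S = SA * SB (Redheffer star product).
    Each S is a tuple of blocks (S11, S12, S21, S22)."""
    A11, A12, A21, A22 = SA
    B11, B12, B21, B22 = SB
    n = A11.shape[0]
    FA = np.linalg.inv(np.eye(n) - B11 @ A22)   # (I - B11 A22)^-1
    FB = np.linalg.inv(np.eye(n) - A22 @ B11)   # (I - A22 B11)^-1
    S11 = A11 + A12 @ FA @ B11 @ A21
    S12 = A12 @ FA @ B12
    S21 = B21 @ FB @ A21
    S22 = B22 + B21 @ FB @ A22 @ B12
    return S11, S12, S21, S22

# An "empty layer" (no reflection, full transmission) must act as the
# identity element of the star product:
I = np.eye(2)
Z = np.zeros((2, 2))
empty = (Z, I, I, Z)
S = (0.3 * I, 0.9 * I, 0.9 * I, 0.3 * I)  # toy symmetric layer
combined = star(empty, S)
```

Unlike a transfer-matrix product, the star product stays numerically stable for thick or absorbing layers, which is why FMM implementations prefer it.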
In recent years, different concepts emerged for the construction of the corresponding meshes [ 9. T. Weiss, G. Granet, N. A. Gippius, S. G. Tikhodeev, and H. Giessen, “Matched coordinates and adaptive spatial resolution in the Fourier modal method,” Opt. Express 17, 8051–8061 (2009). [CrossRef] [PubMed] 11. J. Küchenmeister, T. Zebrowski, and K. Busch, “A construction guide to analytically generated meshes for the Fourier Modal Method,” Opt. Express 20, 17319–17347 (2012). [CrossRef] [PubMed] So far, adaptive coordinates have been used in the -plane. However, complex structures occurring in different layers pose a problem since different adaptive meshes would be necessary. How to connect these different meshes optimally remains a challenging task since each mesh represents a different basis. Also, the incident plane waves need to be transformed which induces additional errors. These problems can be tackled by designing a three-dimensional adaptive coordinate transformation. This method trades an increased amount of slices in the method for an accurate representation of the structure’s surface in all three dimensions. In this paper, such a three-dimensional transformation is designed for a system that has gathered an extensive amount of interest in recent years, two periodic layers of mutually rotated, metallic crosses [ 12. M. Decker, M. Ruther, C. E. Kriegler, J. Zhou, C. M. Soukoulis, S. Linden, and M. Wegener, “Strong optical activity from twisted-cross photonic metamaterials,” Opt. Lett. 34, 2501–2503 (2009). [CrossRef] [PubMed] ]. These structures are known for their strong optical activity. The transformation leads to fully anisotropic permittivity and permeability tensors. 
The key point is that the metallic crosses are twisted by the transformation so that they are both grid-aligned in the transformed space (thus allowing an ideal representation by the FMM) and, at the same time, the transformation between the crosses is performed continuously in the x^3-direction. Moreover, a way to implement a three-dimensional adaptive spatial resolution is discussed. The overall result is a method that allows the implementation of arbitrary three-dimensional adaptive coordinates and adaptive spatial resolution in classical, open source FMM codes [13. V. Liu and S. Fan, "S^4: A free electromagnetic solver for layered periodic structures," Comput. Phys. Commun. 183, 2233-2244 (2012)].

In section 2, the basics of Maxwell's equations in generalized coordinates are discussed. Section 3 covers the design of the three-dimensional adaptive coordinate transformation. Finally, the concept for three-dimensional adaptive spatial resolution is presented in section 4.

2. Covariant formulation of the Fourier Modal Method with generalized coordinates

In this section, we discuss how generalized coordinates are incorporated in the FMM. The system we investigate in this paper is displayed in Fig. 1. It consists of two periodic layers of mutually rotated, metallic crosses. Since generalized coordinates in the context of the FMM have been discussed before, we will only briefly discuss them here. The presentation naturally follows previous publications on the topic in content and notation, see [9, 11] and [15. L. Li, "Fourier modal method for crossed anisotropic gratings with arbitrary permittivity and permeability tensors," J. Opt. A 5, 345-355 (2003)]. We distinguish between a curvilinear coordinate system and a Cartesian coordinate system. The three-dimensional adaptive coordinate transformations that are investigated in this paper have the form of Eqs. (1): the two in-plane coordinates of the curvilinear and Cartesian systems are related by functions of all three coordinates, while the x^3 coordinate is left unchanged.

Eventually, we want to solve Maxwell's curl equations, which read in covariant form [Eqs. (4), see [15]]

$\xi^{\alpha\beta\gamma}\,\partial_{\beta}E_{\gamma} = \mathrm{i}\,k_{0}\sqrt{g}\,\mu^{\alpha\beta}H_{\beta},\qquad \xi^{\alpha\beta\gamma}\,\partial_{\beta}H_{\gamma} = -\mathrm{i}\,k_{0}\sqrt{g}\,\varepsilon^{\alpha\beta}E_{\beta}.$

Here, $\xi^{\alpha\beta\gamma}$ denotes the Levi-Civita symbol and $E_{\gamma}$, $H_{\gamma}$ are covariant components of the electric and magnetic field. Throughout the manuscript, Greek indices run from 1 to 3. Furthermore, we use the Einstein sum convention, meaning that repeated indices are implicitly summed over. The vacuum wave number is denoted $k_{0}=\omega/c$ with the frequency $\omega$ and the speed of light $c$. The metric tensor is built from the Jacobians of the coordinate transformation, and $g$ (as used in Eqs. (4)) denotes the reciprocal of its determinant. As illustrated in detail in [11], the coordinate transformation leads to a transformed permittivity of the form

$\varepsilon^{\alpha\beta} = \frac{\partial \bar{x}^{\alpha}}{\partial x^{\mu}}\frac{\partial \bar{x}^{\beta}}{\partial x^{\nu}}\,\varepsilon_{c}^{\mu\nu},$

where $\varepsilon_{c}$ is the permittivity tensor in the Cartesian system. The permeability transforms identically. It is noteworthy that the matrix that is Fourier transformed when using the FMM with AC and/or ASR is not the permittivity itself given in Eq. (7) but rather $\sqrt{g}\,\varepsilon^{\alpha\beta}$. Therefore, we refer to $\sqrt{g}\,\varepsilon^{\alpha\beta}$ from now on as the effective permittivity. By entering Eqs. (1) into Eq. (7) one can observe that both the effective permittivity and the effective permeability become fully anisotropic. Therefore, the full anisotropic FMM eigenvalue problem has to be solved.

3. Three-dimensional adaptive coordinates

The overall aim of this section is to obtain a three-dimensional adaptive coordinate transformation of the form in Eqs. (1) for our system depicted in Fig. 1. Since three-dimensional meshing is a complex procedure, the discussion is structured the following way: First, an example of a two-dimensional mesh is discussed. Second, this planar mesh is utilized to create a three-dimensional mapping. Third, the impact on the transformed permittivity is illustrated and discussed.

3.1. Two-dimensional mesh for a rotated cross

The objective is to find a two-dimensional, planar mesh for a rotated cross. Since the procedure how to find such meshes is discussed in great detail in [11], we only briefly review the construction process depicted in Fig. 2. In Fig. 2(a), we show how the unit cell is divided for the mapping. The four points define specific coordinate lines (blue and red). Figure 2(b) depicts how these coordinate lines are mapped. The mapping in between these specific coordinate lines is given by a linear interpolation. The resulting mesh is depicted in Fig. 2(c). One may notice that we could have chosen different specific coordinate lines which would also lead to a grid-aligned cross in the effective permittivity. The reason we choose the ones shown in Fig. 2 becomes clear in the next paragraph.

3.2. Constructing the three-dimensional transformation

The lower cross in Fig. 1 is already grid-aligned. This simplifies the discussion at this point but does not impose a restriction. The upper cross can be transformed such that its effective permittivity is also grid-aligned. However, we have to change the coordinate system in between continuously such that artificial reflections are avoided.
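The tensor transformation behind the effective material parameters can be sketched numerically. The snippet below uses the transformation-optics form eps' = J eps J^T / det J with a finite-difference Jacobian, which for orientation-preserving maps is equivalent to the sqrt(g)-weighted tensor form used in such formulations; the specific map, scales, and step size are illustrative assumptions of mine, not taken from the paper:

```python
import numpy as np

def jacobian(map_fn, x, step=1e-6):
    """Central-difference Jacobian J[i, j] = d map_fn(x)[i] / d x[j]."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3)
        dx[j] = step
        J[:, j] = (map_fn(x + dx) - map_fn(x - dx)) / (2.0 * step)
    return J

def effective_eps(map_fn, x, eps_cart):
    """Permittivity transformed as a tensor density: J eps J^T / det J."""
    J = jacobian(map_fn, x)
    return J @ eps_cart @ J.T / np.linalg.det(J)

# Example map: an in-plane rotation whose angle grows with x^3 between
# the two crosses (lengths in arbitrary units here, with h = 0.5, b = 1.0).
def twist(x, phi0=np.deg2rad(15.0), h=0.5, b=1.0):
    phi = phi0 * np.clip((x[2] - h) / b, 0.0, 1.0)
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ np.asarray(x, dtype=float)

# Even for air (eps = 1) the layer between the crosses becomes anisotropic,
# because the rotation angle varies with x^3:
eps_eff = effective_eps(twist, np.array([1.0, 2.0, 1.0]), np.eye(3))
print(eps_eff)
```

For a z-dependent rotation det J = 1, so the anisotropy comes purely from the x^3-derivative of the rotation angle, mirroring the fully anisotropic tensors discussed in the text.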
This means that we obtain a different planar mesh for every value of x^3. This value of x^3 directly translates into a rotation angle, which in turn translates to a mesh like in section 3.1. Explicitly, this means that we perform coordinate transformations in the space between the crosses, too. This explains why we constructed the mesh in Fig. 2 the way we did: for a given value of x^3 we only compute the rotation angle and easily obtain the planar mesh. In particular, we hereby make sure that the grid-aligned crosses are directly above each other in the transformed space. The lower cross is grid-aligned and we choose the origin of our coordinate system to be in the center of the lower surface of this cross. The height of each cross is denoted h and the distance between the crosses is denoted b. Since we want the upper cross to be rotated by the angle φ_0, we obtain the rotation angle φ(x^3), interpolating from 0 at the top of the lower cross (x^3 = h) to φ_0 at the bottom of the upper cross (x^3 = h + b), as the rotational dependence of the planar mesh on the x^3 coordinate.

In Fig. 3, we visualize the real parts of the effective permittivity tensors for two different values of x^3. The system is a square lattice with lattice constant 600 nm. The cross is 250 nm in diameter and the width of the arms of the crosses is 50 nm. The height of the crosses is h = 25 nm and the spacing between them is b = 50 nm. The angle by which the upper cross is rotated is φ_0 = 15°. We assume the crosses to consist of gold, described by a Drude model with the parameters ε_∞ = 9.0685, a plasma frequency ω_D = 1.3544 · 10^16 Hz and a damping coefficient γ = 1.1536 · 10^14 Hz, see [16. A. Vial, A.-S. Grimault, D. Macías, D. Barchiesi, and M. Lamy de la Chapelle, "Improved analytical fit of gold dispersion: Application to the modeling of extinction spectra with a finite-difference time-domain method," Phys. Rev. B 71, 085416 (2005)]. The wavelength used for Fig. 3 is 1000 nm. The color scale in Fig. 3(b) has been saturated at 0 in order to see more features. The real part of the dielectric function of the gold crosses is about −42.
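The quoted value of the gold permittivity can be checked directly from the Drude parameters above, assuming the usual form ε(ω) = ε_∞ − ω_D²/(ω² + iγω) with ω_D and γ taken as angular frequencies:

```python
import numpy as np

eps_inf = 9.0685
omega_D = 1.3544e16   # rad/s (plasma frequency)
gamma   = 1.1536e14   # rad/s (damping)
c0      = 2.99792458e8  # m/s

wavelength = 1000e-9
omega = 2 * np.pi * c0 / wavelength

eps = eps_inf - omega_D**2 / (omega**2 + 1j * gamma * omega)
print(eps.real, eps.imag)
```

At 1000 nm this evaluates to roughly −42 + 3i, consistent with the value stated in the text.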
We depict selected components of the effective permittivity. This suffices since the permittivity tensor is symmetric and the mapping in Fig. 2 possesses a 90° rotational symmetry, so the remaining components are obtained from the depicted ones by a counter-clockwise rotation by 90°. In Fig. 3(a) we display the effective permittivity at a height between the crosses. As shown, the effective permittivity of this layer of air becomes fully anisotropic due to Eqs. (1). In Fig. 3(b) we display the effective permittivity in the layer with the rotated cross. Due to the form of the coordinate transformation in Fig. 2, the gold cross is grid-aligned in the transformed space. The origin of the discontinuities in the effective permittivity is the fact that the meshes above are not differentiable. This, however, does not affect the overall performance of the method as long as the discretization parameters are chosen wisely, see [11].

4. Three-dimensional adaptive spatial resolution

Up to this point we have created an effective permittivity in which the gold crosses are grid-aligned in both layers. Since we designed the three-dimensional adaptive coordinates such that the crosses are right above each other, it would suffice to apply the same two-dimensional, planar adaptive spatial resolution (ASR) transformation function in each layer. This means that the coordinate lines are compressed in the vicinity of the metallic surface. Mathematically, this is just another coordinate transformation that transforms the effective permittivity. The design of such a transformation is described in detail in [8, 11]. However, this requires an FMM code that is capable of processing coordinate transformations directly, including a switch of the basis. Also, it is difficult to launch a plane wave in physical space, since this plane wave is also transformed when we change the basis functions due to the coordinate transformation we perform in every layer. Therefore, a three-dimensional ASR is desirable to avoid these problems. As indicated above, the aim of this paper is to demonstrate a way to incorporate three-dimensional coordinate transformations into any classical FMM code. To do so, the basic idea is to switch on the adaptive spatial resolution smoothly with increasing x³-coordinate. Thereby, the basis functions of the problem represent the real, physical space. Therefore, we can easily launch an ordinary plane wave in the incoming, Cartesian, physical half-space. Then, we can introduce several intermediate layers to build up the adaptive spatial resolution. The general procedure is sketched in Fig. 4(a). In this sketch, we start with a Cartesian layer at the bottom, cf. Fig. 4(b). In the next three layers, we gradually introduce the ASR as discussed in great detail in [11]. "1/3 ASR" figuratively means that the ASR has reached a third of its desired strength, see Figs. 4(c)–4(e). The layer in which the ASR is fully introduced at its desired strength (cf. Fig. 4(e)) is the layer where the first cross is located (shaded in yellow). Then, as before, we rotate the mesh, now with the ASR applied beforehand, see Fig. 4(f).
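The layer-by-layer mesh construction described here — a gradual rotation between the crosses and a smooth switch-on of the ASR — can be sketched in a few lines. The concrete functional forms are assumptions (the paper's explicit formulas are not reproduced in this excerpt): piecewise-linear interpolation in x³, and a made-up smooth compression function standing in for the one from Ref. [11]:

```python
import math

def rotation_angle(x3, h=25e-9, b=50e-9, phi0=15.0):
    """Rotation angle (degrees) of the planar mesh at height x3.

    The lower cross (0 <= x3 <= h) stays grid-aligned, the angle grows
    linearly across the spacer of thickness b, and the upper cross is
    rotated by the full angle phi0. The piecewise-linear form is an assumption.
    """
    if x3 <= h:
        return 0.0
    if x3 >= h + b:
        return phi0
    return phi0 * (x3 - h) / b

def f_asr(x1, period=600e-9, strength=0.3):
    """Stand-in ASR compression: densifies coordinate lines near x1 = 0 mod period.

    NOT the compression function of Ref. [11]; it only shares the required
    properties (smooth, periodic, monotonic since strength < 1).
    """
    return x1 - strength * period / (2 * math.pi) * math.sin(2 * math.pi * x1 / period)

def x1_tilde(x1, x3, d=75e-9):
    """Switch the ASR on linearly over the interval x3 in [0, d]."""
    s = min(max(x3 / d, 0.0), 1.0)      # 0 = identity map, 1 = full ASR
    return (1.0 - s) * x1 + s * f_asr(x1)
```

For each layer height x³, one would first apply x1_tilde (and its x² analogue) and then build the planar mesh of Section 3.1 for the angle rotation_angle(x³), mirroring the order stated in the text.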
Once the mesh is rotated up to the cross rotation angle φ₀, we can compute the layer of the rotated cross, again shaded in yellow in Fig. 4(a). The mesh that is used to compute the effective permittivity in this layer is depicted in Fig. 4(g). As above, we then gradually reverse the mesh changes: first, the mesh is rotated back, then the ASR is decreased until we reach the outgoing layer with a Cartesian mesh. Mathematically, this looks as follows: when we want the increase of the density to happen on an interval x³ ∈ [0, d], then the mapping has to reduce to the identity at x³ = 0 and to the full ASR compression at x³ = d. Here, ASR denotes the compression function (see Section 8 in [11]). A linear introduction of the ASR seems most reasonable; a suitable function fulfilling the requirements is therefore a linear blend in x³ of the identity and the ASR compression. The mapping for the second transverse coordinate is constructed similarly. Conceptually, the whole coordinate transformation still has the form of Eqs. (1). For any given value of x³ we first compress the coordinate lines and then apply the adaptive coordinate transformation. The result is a sequence of meshes like those in Fig. 4. The great advantage of such a procedure is that it can be easily incorporated into any classical FMM code which can solve the large eigenproblem: since the incoming and outgoing layers are Cartesian, such a code can simply be given the transformed permittivity and permeability and thereby incorporates three-dimensional coordinate transformations. Moreover, the issues of two-dimensional transformations that were discussed above are avoided.

5. Conclusion

This work dealt with the enhancement of the Fourier Modal Method towards three-dimensional adaptive coordinate transformations.
We demonstrated how a three-dimensional mesh can be constructed and how this transformation translates into fully anisotropic effective permittivity and permeability tensors. The presented approach can be used to extend any classical FMM code that is able to solve the large eigenproblem such that it uses coordinate transformations, namely by simply transforming the tensors in the presented fashion. While it increases the number of layers to be solved, the overall Fourier representation of the entire structure is highly improved, since the structures are transformed to be grid-aligned and the transition between the structures is performed continuously. This concept greatly expands the range of possibilities for the FMM, especially for complex systems which vary in the propagation direction.

I cordially thank Thomas Zebrowski and Kurt Busch for their input on coordinate transformations in the context of the FMM. I acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) and the State of Baden-Württemberg through the DFG-Center for Functional Nanostructures (CFN) within sub-project A1.1. Furthermore, I acknowledge support by Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of the Karlsruhe Institute of Technology (KIT).

References and links

1. K. Busch, G. von Freymann, S. Linden, S. F. Mingaleev, L. Tkeshelashvili, and M. Wegener, “Periodic nanostructures for photonics,” Phys. Rep. 444, 101–202 (2007). [CrossRef]
2. L. Li, “Formulation and comparison of two recursive matrix algorithms for modeling layered diffraction gratings,” J. Opt. Soc. Am. A 13, 1024–1034 (1996). [CrossRef]
3. P. Lalanne and G. M. Morris, “Highly improved convergence of the coupled-wave method for TM polarization,” J. Opt. Soc. Am. A 13, 779–784 (1996). [CrossRef]
4. G. Granet and B. Guizal, “Efficient implementation of the coupled-wave method for metallic lamellar gratings in TM polarization,” J. Opt. Soc. Am. A 13, 1019–1023 (1996). [CrossRef]
5. L.
Li, “Use of Fourier series in the analysis of discontinuous periodic structures,” J. Opt. Soc. Am. A 13, 1870–1876 (1996). [CrossRef]
6. G. Granet, “Reformulation of the lamellar grating problem through the concept of adaptive spatial resolution,” J. Opt. Soc. Am. A 16, 2510–2516 (1999). [CrossRef]
7. G. Granet and J.-P. Plumey, “Parametric formulation of the Fourier modal method for crossed surface-relief gratings,” J. Opt. A 4, S145–S149 (2002). [CrossRef]
8. T. Vallius and M. Honkanen, “Reformulation of the Fourier modal method with adaptive spatial resolution: application to multilevel profiles,” Opt. Express 10, 24–34 (2002). [CrossRef] [PubMed]
9. T. Weiss, G. Granet, N. A. Gippius, S. G. Tikhodeev, and H. Giessen, “Matched coordinates and adaptive spatial resolution in the Fourier modal method,” Opt. Express 17, 8051–8061 (2009). [CrossRef] [PubMed]
10. S. Essig and K. Busch, “Generation of adaptive coordinates and their use in the Fourier Modal Method,” Opt. Express 18, 23258–23274 (2010). [CrossRef] [PubMed]
11. J. Küchenmeister, T. Zebrowski, and K. Busch, “A construction guide to analytically generated meshes for the Fourier Modal Method,” Opt. Express 20, 17319–17347 (2012). [CrossRef] [PubMed]
12. M. Decker, M. Ruther, C. E. Kriegler, J. Zhou, C. M. Soukoulis, S. Linden, and M. Wegener, “Strong optical activity from twisted-cross photonic metamaterials,” Opt. Lett. 34, 2501–2503 (2009). [CrossRef] [PubMed]
13. V. Liu and S. Fan, “S^4: A free electromagnetic solver for layered periodic structures,” Comput. Phys. Commun. 183, 2233–2244 (2012). [CrossRef]
14. H. Kim, J. Park, and B. Lee, Fourier Modal Method and its Applications in Computational Nanophotonics (CRC Press, 2012).
15. L. Li, “Fourier modal method for crossed anisotropic gratings with arbitrary permittivity and permeability tensors,” J. Opt. A 5, 345–355 (2003). [CrossRef]
16. A. Vial, A.-S. Grimault, D. Macías, D. Barchiesi, and M.
Lamy de la Chapelle, “Improved analytical fit of gold dispersion: Application to the modeling of extinction spectra with a finite-difference time-domain method,” Phys. Rev. B 71, 085416 (2005). [CrossRef]

OCIS Codes
(050.1970) Diffraction and gratings : Diffractive optics
(050.1755) Diffraction and gratings : Computational electromagnetic methods
(160.3918) Materials : Metamaterials
(160.5298) Materials : Photonic crystals

ToC Category: Diffraction and Gratings
Original Manuscript: October 11, 2013
Revised Manuscript: December 9, 2013
Manuscript Accepted: December 13, 2013
Published: January 14, 2014

Jens Küchenmeister, "Three-dimensional adaptive coordinate transformations for the Fourier modal method," Opt. Express 22, 1342-1349 (2014)
(with Jonathan Chaika, Yitwah Cheung)
Available as a pdf file
Abstract. We prove that for the surface defined by a holomorphic quadratic differential, the set of directions such that the corresponding Teichmüller geodesic lies in a compact set in the corresponding stratum is a winning set for Schmidt's game. This generalizes a classical result in the case of the torus due to Schmidt and strengthens a result of Kleinbock and Weiss.

Statistical hyperbolicity in Teichmüller space (with Spencer Dowdall, Moon Duchin) arXiv 1108.5416
Available as a pdf file
In this paper we explore the idea that Teichmüller space with the Teichmüller metric is hyperbolic "on average." We consider several different measures on Teichmüller space and show that with respect to each one, the average distance between points in a ball of radius r is asymptotic to 2r, which is as large as possible.

The Geometry of the Disc Complex (with Saul Schleimer) arXiv 1010.3174
Available as a pdf file
We give a distance estimate for the metric on the disk complex and show that it is Gromov hyperbolic. As another application of our techniques, we find an algorithm which computes the Hempel distance of a Heegaard splitting, up to an error depending only on the genus.

The Weil-Petersson geodesic flow (with Keith Burns, Amie Wilkinson), to appear Annals of Math. arXiv 1004.5343
Available as a pdf file
In this paper we prove that the Weil-Petersson geodesic flow is ergodic on moduli space.

On train track splitting sequences (with Lee Mosher, Saul Schleimer), to appear Duke Math. Journal
Available as a pdf file
We show that the subsurface projection of a train track splitting sequence is an unparameterized quasi-geodesic in the curve complex of the subsurface. For the proof we introduce induced tracks, efficient position, and wide curves. This result is an important step in the proof that the disk complex is Gromov hyperbolic. As another application we show that train track sliding and splitting sequences give quasi-geodesics in the train track graph, generalizing a result of Hamenstädt.

Asymptotics of Weil-Petersson geodesics II: bounded geometry and unbounded entropy (with Jeffrey Brock, Yair Minsky), to appear Geom. Funct. Anal. arXiv 1004.4401
Available as a pdf file
We use ending laminations for Weil-Petersson geodesics to establish that bounded geometry is equivalent to bounded combinatorics for Weil-Petersson geodesic segments, rays, and lines. Further, a more general notion of non-annular bounded combinatorics, which allows arbitrarily large Dehn-twisting, corresponds to an equivalent condition for Weil-Petersson geodesics. As an application, we show the Weil-Petersson geodesic flow has compact invariant subsets with arbitrarily large topological entropy.

Geometry of Teichmüller space with the Teichmüller metric
Available as a pdf file
This chapter is a survey of recent results in Teichmüller geometry.

Dichotomy for the Hausdorff dimension of the set of nonergodic directions (with Yitwah Cheung, Pascal Hubert) Inventiones Math. 183 (2011) 337-383
Available as a pdf file
We consider billiards in a certain rectangle with a horizontal barrier. This gives a one-parameter family of flows in different directions. We study the Hausdorff dimension of the set of directions such that the flow in that direction is not ergodic. The dimension is computed explicitly in terms of the continued fraction expansion of the length of the barrier.

Teichmüller geometry of moduli space, II: M(S) seen from far away (with Benson Farb) In the tradition of Ahlfors-Bers V, 71-79, Contemp. Math. 510, Amer. Math. Soc. (2010)
Available as a pdf file
We construct a metric simplicial complex which is an almost isometric model of the moduli space M(S) of Riemann surfaces. We then use this model to compute the "tangent cone at infinity" of M(S): it is the topological cone on the quotient of the complex of curves C(S) by the mapping class group of S, endowed with an explicitly described metric. The main ingredient is Minsky's product regions theorem.

Divergence of Teichmüller geodesics (with Anna Lenzhen), Geom. Dedicata 114 (2010) 191-210
Available as a pdf file
We study the asymptotic geometry of Teichmüller geodesic rays. The question of whether two rays through a given point stay bounded distance apart or not was settled except for one outstanding case. In this paper we settle that last case. We show that when the transverse measures to the vertical foliations of the quadratic differentials determining two different rays are topologically equivalent, but are not absolutely continuous with respect to each other, then the rays diverge in Teichmüller space.

Teichmüller geometry of moduli space, I: Distance minimizing rays and the Deligne-Mumford compactification (with Benson Farb), J. Diff. Geom. 85 (2010) 187-227
Available as a pdf file
Let S be a closed, oriented surface with a finite (possibly empty) set of points removed. In this paper we relate two important but disparate topics in the study of the moduli space M(S) of Riemann surfaces: Teichmüller geometry and the Deligne-Mumford compactification. We reconstruct the Deligne-Mumford compactification (as a metric stratified space) purely from the intrinsic metric geometry of M(S) endowed with the Teichmüller metric. We do this by first classifying (globally) geodesic rays in M(S) and determining precisely how pairs of rays asymptote. We construct an "iterated EDM ray space" functor, which is defined on a quite general class of metric spaces. We then prove that this functor applied to M(S) produces the Deligne-Mumford compactification.

Asymptotics of Weil-Petersson geodesics I: ending laminations, recurrence, and flows (with Jeffrey Brock, Yair Minsky), GAFA 19 (2010) 1229-1257
Available as a pdf file
We define an ending lamination for a Weil-Petersson geodesic ray. Despite the lack of a natural visual boundary for the Weil-Petersson metric, these ending laminations provide an effective boundary theory that encodes much of its asymptotic CAT(0) geometry. In particular, we prove an ending lamination theorem for the full-measure set of rays that recur to the thick part, and we show that the association of an ending lamination embeds asymptote classes of recurrent rays into the Gromov-boundary of the curve complex. As an application, we establish fundamentals of the topological dynamics of the Weil-Petersson geodesic flow, showing density of closed orbits and topological transitivity.

Coarse and synthetic Weil-Petersson geometry: quasi-flats, geodesics, and relative hyperbolicity (with Jeffrey Brock), Geometry and Topology 12 (2008)
This is available as a pdf file.
We analyze the coarse geometry of the Weil-Petersson metric on Teichmüller space, focusing on applications to its synthetic geometry (in particular the behavior of geodesics). We settle the question of the strong relative hyperbolicity of the Weil-Petersson metric via consideration of its coarse quasi-isometric model, the "pants graph." We show that in dimension 3 the pants graph is strongly relatively hyperbolic with respect to naturally defined product regions and show any quasi-flat lies a bounded distance from a single product. For all higher dimensions there is no non-trivial collection of subsets with respect to which it is strongly relatively hyperbolic.

Topological dichotomy and strict ergodicity for translation surfaces (with Y. Cheung, P. Hubert) Ergodic Theory Dynamical Systems 28 (2008) 1729-1748
This is available as a pdf file.
Hubert-Schmidt and McMullen have found examples of translation surfaces whose Veech group is infinitely generated. In this paper we show first that the Hubert-Schmidt examples satisfy the topological dichotomy property that for every direction either the flow in that direction is completely periodic or minimal. More significantly we show that they have minimal but non-uniquely-ergodic directions.

Problems on flat surfaces and translation surfaces (with P. Hubert, T. Schmidt, A. Zorich)
This is a list of open problems in the subject and is available as a pdf file.

Ergodic Theory of Translation Surfaces, to appear Handbook of Dynamical Systems, Elsevier
This survey is available as a pdf file.

Minimal nonergodic directions on genus 2 translation surfaces (with Yitwah Cheung), to appear Ergodic Theory Dynamical Systems
The paper is available as a pdf file.
In this paper we show that every genus 2 translation surface which is not a Veech surface has a minimal direction which is not uniquely ergodic.

A divergent Teichmüller geodesic with uniquely ergodic vertical foliation (with Y. Cheung), to appear, Israel Journal of Mathematics
Available as a pdf file
In this paper we construct an example of a quadratic differential whose vertical foliation is uniquely ergodic and yet the Teichmüller geodesic determined by the quadratic differential eventually leaves every compact set of moduli space.

Multiple Saddle Connections on flat Surfaces and Principal Boundary of the Moduli Spaces of Quadratic Differentials (with A. Zorich), to appear GAFA
In this paper we consider the phenomenon of multiple homologous saddle connections on surfaces defined by quadratic differentials.
This paper is available as a pdf file.

The Pants Complex Has Only One End (with S. Schleimer), to appear, Proceedings of the Conference on Spaces of Kleinian Groups, London Math. Soc. Lec. Notes, Cambridge University Press
In this paper we show that the pants complex of a closed surface of genus greater than 2 has only one end.
The paper is available as a pdf file.

Quasiconvexity in the curve complex (with Y. Minsky) Contemporary Mathematics 355 309-320
In this paper we show that the disc complex associated to a handlebody is a quasiconvex subset of the complex of curves.
Available as a postscript file.

Moduli Spaces of Abelian Differentials: The Principal Boundary, Counting Problems and the Siegel-Veech Constants (with Alex Eskin, Anton Zorich) Publications IHES 97 61-179
In this paper we consider general counting problems for the number of saddle connections and cylinders of closed trajectories for Abelian differentials. Saddle connections and cylinders may occur with multiplicity. We discuss these issues and relate the constants to the Siegel-Veech formula. This in turn is related to finding the principal boundary of the moduli space.
The paper is available as a Postscript file.

Billiards in Rectangles with Barriers (with Alex Eskin, Martin Schmoll) Duke Mathematical Journal 118 427-463
In this paper we consider a counting problem for closed orbits on a billiard table which is a rectangle with a barrier.
The paper is available as a Postscript file.

Weil-Petersson isometry group (with Mike Wolf) Geometriae Dedicata 93 177-190
In this paper we show that the isometry group of Teichmüller space with respect to the Weil-Petersson metric coincides with the mapping class group.
Available as a dvi file.

Rational billiards and flat structures (with S. Tabachnikov), to appear Handbook of Dynamical Systems, Elsevier
This survey paper is available as a dvi file.

Asymptotic formulas on flat surfaces (with Alex Eskin) Erg. Th. Dyn. Sys. 21 443-478
The paper is available as a dvi file (137K).

Unstable quasi-geodesics in Teichmüller space (with Yair Minsky) In the tradition of Ahlfors and Bers: Proceedings of the first Ahlfors-Bers Colloquium, I. Kra, B. Maskit eds., AMS Contemp. Math. 256 (2000) 239-241
This paper is available as a dvi file.

Superrigidity and mapping class groups (with Benson Farb) Topology 37 1169-1176
The paper is available as a Postscript file (150K), or (without the figures) as a dvi file (35K).

Geometry of the Complex of Curves I: Hyperbolicity (with Yair Minsky) Invent. Math. 138 (1999) 103-149
The Complex of Curves on a Surface is a simplicial complex whose vertices are homotopy classes of simple closed curves, and whose simplices are sets of homotopy classes which can be realized disjointly. It is not hard to see that the complex is finite-dimensional, but locally infinite. It was introduced by Harvey as an analogy, in the context of Teichmüller space, for Tits buildings for symmetric spaces, and has been studied by Harer and Ivanov as a tool for understanding mapping class groups of surfaces. In this paper we prove that, endowed with a natural metric, the complex is hyperbolic in the sense of Gromov. In a certain sense this hyperbolicity is an explanation of why the Teichmüller space has some negative-curvature properties in spite of not being itself hyperbolic: hyperbolicity in the Teichmüller space fails most obviously in the regions corresponding to surfaces where some curve is extremely short. The complex of curves exactly encodes the intersection patterns of this family of regions (it is the "nerve" of the family), and we show that its hyperbolicity means that the Teichmüller space is "relatively hyperbolic" with respect to this family. A similar relative hyperbolicity result is proved for the mapping class group of a surface.
The paper is available as a Postscript file (563K), or (without the figures) as a dvi file (180K).

Geometry of the Complex of Curves II: Hierarchical Structure (with Yair Minsky) to appear, GAFA
The paper (November 2000) is available as a Postscript file (1180K).
Linearly constrained optimization and projected preconditioned conjugate gradients - SIAM J. Matrix Anal. Appl, 2000

Cited by 73 (10 self)
The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2×2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions.

Key words: preconditioning, indefinite matrices, Krylov subspace methods
AMS subject classifications: 65F10, 65F15, 65F50

1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system

    [ A  B^T ] [ x_1 ]
    [ B   0  ] [ x_2 ] = ...

where the block matrix is denoted 𝒜.

- Large Scale Nonlinear Optimization, 35–59, 2006
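The 2×2 block saddle-point structure from the first abstract above, and the "projected preconditioned conjugate gradients" of the cited paper's title, can be illustrated with a toy sketch. The matrices are made-up illustrative data (not from any cited paper), the nullspace basis is formed explicitly, and preconditioning is omitted; large-scale codes project implicitly instead:

```python
import numpy as np

# Illustrative data: A symmetric positive definite (n x n), B full rank (m x n).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0])
n, m = A.shape[0], B.shape[0]

# The saddle-point ("KKT") matrix [[A, B^T], [B, 0]] is symmetric indefinite:
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
eigs = np.linalg.eigvalsh(K)            # n positive and m negative eigenvalues

# Direct reference solution of K [x; y] = [b; c] (fine at this size).
x_ref = np.linalg.solve(K, np.concatenate([b, c]))[:n]

# Projected CG, nullspace form: write x = x0 + Z u with B x0 = c and the
# columns of Z spanning null(B); run plain CG on the reduced SPD system
# (Z^T A Z) u = Z^T (b - A x0).
x0 = np.linalg.lstsq(B, c, rcond=None)[0]
Z = np.linalg.svd(B)[2][m:].T           # orthonormal basis of null(B)
H, g = Z.T @ A @ Z, Z.T @ (b - A @ x0)
u = np.zeros(Z.shape[1]); r = g.copy(); p = r.copy()
for _ in range(len(u)):                 # exact CG terminates within dim(null(B)) steps
    Hp = H @ p
    alpha = (r @ r) / (p @ Hp)
    u += alpha * p
    r_new = r - alpha * Hp
    p = r_new + (r_new @ r_new) / (r @ r) * p
    r = r_new
x = x0 + Z @ u
```

The iterate x agrees with the direct KKT solve while only ever requiring CG on a positive definite reduced operator; the multiplier y could be recovered afterwards from the first block row.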
Cited by 38 (3 self)
This paper describes Knitro 5.0, a C-package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.

- 2000

Cited by 11 (0 self)
We propose a new framework for the application of preconditioned conjugate gradients in the solution of large-scale linear equality constrained minimization problems. This framework allows for the exploitation of structure and sparsity in the context of solving the reduced Newton system (despite the fact that the reduced system may be dense). Numerical experiments performed on a variety of test problems from the Netlib LP collection indicate computational promise.

- 1998

Cited by 9 (0 self)
...scheme which combines the forward and reverse modes of AD.
Problem structure can be viewed in many different ways; one way is to look at the granularity of the operations involved. For example, differentiation carried out at the matrix-vector operations can lead to great savings in the time as well as space requirements. Figuring out the kind of computation is another way to view structure, e.g., partially separable or composite functions whose structure can be exploited to get performance gains. In this thesis we develop a general structure framework which can be viewed hierarchically and allows for structure exploitation at various levels. For example, for time integration schemes employing stencils it is possible to exploit structure at both the stencil level and the timestep level. We also present some advanced structure exploitation ideas, e.g., parallelism in structured computations and using structure in implicit computations. The use of AD as a derivative computing e...

- 2005

Cited by 5 (2 self)
Each step of an interior point method for nonlinear optimization requires the solution of a symmetric indefinite linear system known as a KKT system, or more generally, a saddle point problem. As the problem size increases, direct methods become prohibitively expensive to use for solving these problems; this leads to iterative solvers being the only viable alternative. In this thesis we consider iterative methods for solving saddle point systems and show that a projected preconditioned conjugate gradient method can be applied to these indefinite systems.
Such a method requires the use of a specific class of preconditioners, (extended) constraint preconditioners, which exactly replicate some parts of the saddle point system that we wish to solve. The standard method for using constraint preconditioners, at least in the optimization community, has been to choose the constraint...

- 2001

Cited by 2 (0 self)
We consider numerical methods for finding (weak) second-order critical points for large-scale non-convex quadratic programming problems. We describe two new methods. The first is of the active-set variety. Although convergent from any starting point, it is intended primarily for the case where a good estimate of the optimal active set can be predicted. The second is of interior-point trust-region type, and has proved capable of solving problems involving up to half a million unknowns and constraints. The solution of a key equality constrained subproblem, common to both methods, is described. The results of comparative tests on a large set of convex and non-convex quadratic programming examples are given.

- Comput. Optim. Appl, 1998

We propose a new framework for the application of preconditioned conjugate gradients in the solution of large-scale linear equality constrained minimization problems.
This framework allows for the exploitation of structure and sparsity in the context of solving the reduced Newton system (despite the fact that the reduced system may be dense). Numerical experiments performed on a variety of test problems from the Netlib LP collection indicate computational promise. 1 "... Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints which limit their usability. In this paper, we propose a new formulation of constrained clustering that is able to incorporate not only existing ty ..." Add to MetaCart Clustering with constraints is an important and developing area. However, most work is confined to conjunctions of simple together and apart constraints which limit their usability. In this paper, we propose a new formulation of constrained clustering that is able to incorporate not only existing types of constraints but also more complex logical combinations beyond conjunctions. We first show how any statement in conjunctive normal form (CNF) can be represented as a linear inequality. Since existing clustering formulations such as spectral clustering cannot easily incorporate these linear inequalities, we propose a quadratic programming (QP) clustering formulation to accommodate them. This new formulation allows us to have much more complex guidance in clustering. We demonstrate the effectiveness of our approach in two applications on text and personal information management. We also compare our algorithm against existing constrained spectral clustering algorithm to show its efficiency in computational time. , 2004 "... Combining direct and iterative methods for the solution of large systems in different application areas 1 ..." Add to MetaCart Combining direct and iterative methods for the solution of large systems in different application areas 1
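The CNF-to-linear-inequality step mentioned in the clustering abstract is the standard encoding over 0/1 variables: a clause holds iff the sum of its positive literals plus the complemented negative literals is at least 1. A hedged sketch (function names are mine, not the paper's):

```python
def clause_to_inequality(clause):
    """Encode a CNF clause over 0/1 variables as a linear inequality.

    A clause is a list of signed variable indices, e.g. [1, -2, 3] for
    (x1 or not x2 or x3).  The clause is satisfied iff
        sum_{positive lits} x_i + sum_{negative lits} (1 - x_i) >= 1,
    i.e.  sum(coeffs[i] * x_i) >= 1 - (number of negative literals).
    """
    coeffs = {abs(lit): (1 if lit > 0 else -1) for lit in clause}
    rhs = 1 - sum(1 for lit in clause if lit < 0)
    return coeffs, rhs

def satisfies(assignment, clause):
    """Check the inequality against a 0/1 assignment {var: value}."""
    coeffs, rhs = clause_to_inequality(clause)
    return sum(c * assignment[v] for v, c in coeffs.items()) >= rhs

# (x1 or not x2): violated only by x1 = 0, x2 = 1
print(satisfies({1: 0, 2: 1}, [1, -2]))  # -> False
print(satisfies({1: 1, 2: 1}, [1, -2]))  # -> True
```

A conjunction of clauses then becomes a system of such inequalities, which is what lets a QP formulation absorb arbitrary CNF guidance.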
Before using Algebrator, I could barely do long division. Now I'm like the best student in my algebra class and I never would have been able to get that result without it! Thank you so much!
Alden Lewis, WI

It is more intuitive. And it even 'took' my negative scientific annotations and showed me how to simplify! Thanks!!!
Jeff Kasten, MI

I was really struggling with the older version... so much I pretty much just gave up on it. This newer version looks better and seems easier to navigate through. I think it will be great! Thank you!
David Aguilar, CA
- What is the distance of the point (6, 8) from the origin?
- What is the distance of the point (8a, a^2 - 16) from the origin?
- If the distance from the origin to the point (k, k + 5) is 25 units, then find the value of k.
- What is the distance between the points (5, 5) and (20, 25)?
- The distance between the points (5, 5) and (11, k) is 10 units. What are the values of k?
- If θ is a real number, then find the distance between the points (9sin 3θ, 9cos 3θ) and (- 9cos 3θ, 9sin 3θ).
- Find the distance between the points (6, 0) and (7, tan 8θ) for all real values of θ.
- The distance of the point (cot 6θ, 2) from (0, 3) is 2 units where 6θ is an acute angle. Find the value of θ.
- If a is any real number, then what is the distance from (2, 0) to (0, a)?
- What is the distance of the point (3, 4) from the x-axis?
- What is the distance of the point (5, 7) from the y-axis?
- The three points A (7, - 7), B (8, 8), C (9, 11)
- What is the distance between the points (4, 4) and (16, 20)?
- The quadrilateral formed by the points A (1, 4), B (5, 1), C (8, 5) and D (4, 8) is a
- If the distance from the origin to the point (k, k + 2) is 10 units, then find the value of k.
- What is the distance between the points (3, 3) and (12, 15)?
- The distance between the points (3, 4) and (9, k) is 10 units. What are the values of k?
- If θ is a real number, then find the distance between the points (2sin 4θ, 2cos 4θ) and (- 2cos 4θ, 2sin 4θ).
- Find the distance between the points (6, 0) and (7, tan 2θ) for all real values of θ.
- The distance of the point (cot 2θ, 6) from (0, 7) is 2 units where 2θ is an acute angle. Find the value of θ.
- In the triangle formed by the points A (4, 3), B (8, 7) and C (1, 6), the right angle is located at
- What is the distance between the points (5, 5) and (20, 25)?
- If a is any real number, then what is the distance from (8, 0) to (0, a)?
- If A = (k, k), B = (3 + k, 4 + k), C = (4 + k, 3 + k) are any three points of a plane, then for all the real values of k which of the following is correct?
- What is the distance of the point (12, 16) from the origin?
- a is any non-zero real number. If the distance between the points (a, - 1/a) and (1/a, a) is d, then which of the following is true?
- What is the distance of the point (6, 7) from the x-axis?
- What is the distance of the point (4y, y^2 - 4) from the origin?
- What is the distance of the point (7, 9) from the y-axis?
- The three points A (2, - 2), B (3, 3), C (4, 6)
- The three points A (2, 1), B (5, 1) and C (2, 5)
- If k > 0, then the points A (4k, 6k), B (4k, 8k) and C (4k + √3 k, 7k) are necessarily the vertices of
- What is the quadrilateral formed by the vertices A = (- 3, 0), B = (- 1, - 3), C = (5, 1) and D = (3, 4) in order?
- If sin θ ≠ cos θ, then the triangle formed by the points A (sin θ, cos θ), B (cos θ, sin θ) and C (sin θ + cos θ, cos θ + sin θ) is ______.
- ABC is an isosceles triangle. If A = (x, y), B = (3, 1), C = (7, - 2) then which of the following is the relation between x and y when BC is the base of the triangle?
- If the point P (x, x + 1) is equidistant from the points A (a + b, b - a) and B (a - b, a + b), then find the value of x for a ≠ b.
- If α > β > γ are any three real numbers, then the points A (α, β + γ), B (β, γ + α) and C (γ, α + β) are
- If t ≠ 0, then find the distance between the points (at^2, - 2at^2), (at^2, 2at).
- A is a point on the curve y = x^2 and B is a point on the curve y = x^3. If the x co-ordinate of A is 2 and the y co-ordinate of B is 8, then what is the distance between A and B?
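Every numeric item above reduces to the distance formula d = √((x₂ − x₁)² + (y₂ − y₁)²). A small Python check (not part of the original worksheet) for a few of them:

```python
import math

def dist(p, q):
    """Distance formula: sqrt((x2 - x1)**2 + (y2 - y1)**2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

print(dist((0, 0), (6, 8)))    # (6, 8) from the origin -> 10.0
print(dist((5, 5), (20, 25)))  # -> 25.0
print(dist((0, 0), (12, 16)))  # -> 20.0
print(dist((0, 0), (15, 20)))  # one solution of the (k, k + 5) item, k = 15 -> 25.0
print(dist((2, 4), (2, 8)))    # A on y = x^2 at x = 2, B on y = x^3 with y = 8 -> 4.0
```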
[FOM] Some informative questions about intuitionistic logic and mathematics

Lew Gordeew legor at gmx.de
Sat Nov 5 18:57:40 EST 2005

Arnon Avron wrote on Wed, 2 Nov 2005 09:11:43 +0200:

> 1) Is there a definable (in the same way implication is definable
> in classical logic in terms of disjunction and negation) unary
> connective @ of intuitionistic logic such that for every A, B we have
> that A and @@A intuitionistically follow from each other,
> and B intuitionistically follows from the set {A, @A}?

No. For otherwise we could prove in the intuitionistic logic all classical axioms with respect to that "negation" @. Consequently, we could prove in the intuitionistic logic all negation-free classical tautologies. However, there are known purely implicational, and hence negation-free, classical tautologies which are not provable intuitionistically - a contradiction.

> 2) Can one define in intuitionistic logic counterparts of the
> classical connectives so that the resulting translation
> preserves the consequence relation of classical logic? (Obviously, a
> negative answer to question 1 entails a negative one to question 2.)

See above.

> 3) In several postings it was emphasized that LEM applies
> in intuitionistic logic to "decidable" relations. Is there
> an intuitionist definition of "decidable" according to which
> this claim conveys more than a trivial claim of the form "A implies A"?

Yes. Because basic recursion theory is easily formalized using intuitionistic logic. Constructive Analysis is also intuitionistically formalizable; this requires some care though.

> 4) Some postings mentioned also "undecidable" relations (or predicates).
> What is the definition of "undecidable" here?
> Is a relation P
> intuitionistically undecidable if there is a procedure that produces
> a proof of absurdity from a procedure that given any x either provides
> a proof of P(x) or a procedure that carries any proof of P(x) to
> a proof of absurdity?

Decidable (undecidable) relations are meant as in the ordinary recursion theory (modulo natural translations/interpretations - see above).

> 5) Does an intuitionistically-undecidable predicate
> intuitionistically-exist?

Yes, of course (see above).
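A standard concrete witness for the "purely implicational classical tautologies not provable intuitionistically" cited in the answer to question 1 is Peirce's law:

```latex
\bigl((A \to B) \to A\bigr) \to A
```

It is a classical tautology built from implication alone, yet it has no intuitionistic proof (it fails in Kripke semantics), which is why no definable connective @ could make intuitionistic logic validate all the classical negation laws.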
Topics in trivalent graphs
Gans, Marijke van (2007) Ph.D. thesis, University of Birmingham.

Chapter 0 details the notation and terminology used.

Chapter 1 introduces the usual linear algebra over GF2 of edge space E and its orthogonal subspaces Z (cycle space) and Z* (cut space). "Reduced vectors" are defined as elements of the quotient space E/Z*. Reduced vectors of edges give a simple way of characterising edges that are bridges (their reduced vector is null) or 2-edge cuts (their vectors are equal), and also of spanning trees (the edges outside the tree are a basis), and form to the best of my knowledge a new approach. They are also useful in later chapters to describe Tait colorings, as well as cycle double covers. Perhaps the most important property of E/Z* is the Unique graph theorem: unlike in E, a list of which reduced vectors are edges uniquely determines graph structure (if edge connectivity is high enough; that covers certain "solid" components every trivalent graph can be decomposed into).

Chapter 2 gives a brief introduction to graph embeddings and planar graphs.

Chapter 3 deals specifically with trivalent graphs, listing some of the ways in which they are different from graphs in general. Results here include two versions of the Bipolar growth theorem which can be used for constructive proofs, and (after defining "halftrees" and a "flipping" operation between them) a theorem enumerating the set C\(_n\) of halftrees of a given size, the "Caterpillar theorem" showing C\(_n\) is connected by flipping, and the "Butterfly theorem" derived from it. Graphs referred to here as "solid" are shown to play an important structural role.

Chapter 4 deals with the 4-coloring theorem. The first half shows the older results in a unified light using edge spaces over GF4. The second half applies methods from coding theory to this.
The 4-color theorem is shown to be equivalent to a variety of statements about cycle-shaped words in codes over GF4 or GF3, many of them tantalisingly simple to state (but not, as yet, to prove).

Chapter 5 deals with what has been variously called polyhedral decompositions and (specifically for those using cycles) cycle double covers, as in the cycle double cover conjecture. The more general concept is referred to as a "map" in this paper, and identified with what is termed here "cisness structures", which is a new approach. There is also a simpler proof of a theorem by Szekeres. Links with the subject of the previous chapter are identified, and some approaches towards proving the conjecture suggested.

Several planned appendices were left out of the version submitted for examination because they would make the thesis too big, and/or were not finished. Of the ones that remain, appendix H (on embedding infinite 4- and 3-valent trees X and Y in the hyperbolic plane) now seems disjointed from the body of the text (a planned appendix dealt with colorings of finite graphs as the images of homomorphisms from embeddings of Y). Appendix B enumerates cycle maps (cycle double covers) on a number of small graphs while appendix D investigates the dimension of the intersection of Z and Z*.
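The GF2 linear algebra of Chapter 1 can be illustrated concretely. The sketch below is my own (not from the thesis): it computes the dimensions of the cut space Z* (the GF(2) row space of the vertex-edge incidence matrix) and the cycle space Z (its orthogonal complement), so dim Z* = n − c and dim Z = m − n + c for a graph with n vertices, m edges and c components.

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot          # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def edge_space_dims(n_vertices, edges):
    """Return (dim Z, dim Z*) for a graph's edge space over GF(2)."""
    rows = [0] * n_vertices           # vertex-edge incidence matrix rows
    for j, (u, v) in enumerate(edges):
        rows[u] |= 1 << j
        rows[v] |= 1 << j
    cut_dim = gf2_rank(rows)          # Z* = row space of the incidence matrix
    cycle_dim = len(edges) - cut_dim  # Z  = its orthogonal complement
    return cycle_dim, cut_dim

# K4, the smallest simple trivalent graph: n = 4, m = 6, one component
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(edge_space_dims(4, k4))  # -> (3, 3): dim Z = m - n + c, dim Z* = n - c
```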
6. THE MATHEMATICIAN, THE PHYSICIST AND THE ENGINEER (AND OTHERS)
Index | Comments and Contributions | previous: 5.3 earth sciences
quotes mathematics physics chemistry engineering [Top of page] [Bottom of page] [Index] [Send comment]

A mathematician, a physicist, and an engineer were all given a red rubber ball and told to find the volume. The mathematician carefully measured the diameter and evaluated a triple integral. The physicist filled a beaker with water, put the ball in the water, and measured the total displacement. The engineer looked up the model and serial numbers in his red-rubber-ball table.

If it was my company: The engineer tried to look up the model and serial numbers, couldn't find them, so told his manager that it's just not going to work.

From: "Ron Gerard" <ron#NoSpam.gerard.as>
We chemists, who test by destroying a small sample, would weigh the ball, snip out a 1mm cube and weigh this - thus getting an accurate volume.

mathematics physics engineering

So a mathematician, an engineer, and a physicist are out hunting together. They spy a deer(*) in the woods. The physicist calculates the velocity of the deer and the effect of gravity on the bullet, aims his rifle and fires. Alas, he misses; the bullet passes three feet behind the deer. The deer bolts some yards, but comes to a halt, still within sight of the trio. "Shame you missed," comments the engineer, "but of course with an ordinary gun, one would expect that." He then levels his special deer-hunting gun, which he rigged together from an ordinary rifle, a sextant, a compass, a barometer, and a bunch of flashing lights which don't do anything but impress onlookers, and fires. Alas, his bullet passes three feet in front of the deer, who by this time wises up and vanishes for good. "Well," says the physicist, "your contraption didn't get it either." "What do you mean?" pipes up the mathematician.
"Between the two of you, that was a perfect shot!"

(*) How they knew it was a deer: The physicist observed that it behaved in a deer-like manner, so it must be a deer. The mathematician asked the physicist what it was, thereby reducing it to a previously solved problem. The engineer was in the woods to hunt deer, therefore it was a deer.

mathematics physics

A mathematician and a physicist agree to a psychological experiment. The mathematician is put in a chair in a large empty room and a beautiful naked woman is placed on a bed at the other end of the room. The psychologist explains, "You are to remain in your chair. Every five minutes, I will move your chair to a position halfway between its current location and the woman on the bed." The mathematician looks at the psychologist in disgust. "What? I'm not going to go through this. You know I'll never reach the bed!" And he gets up and storms out. The psychologist makes a note on his clipboard and ushers the physicist in. He explains the situation, and the physicist's eyes light up and he starts drooling. The psychologist is a bit confused. "Don't you realize that you'll never reach her?" The physicist smiles and replies, "Of course! But I'll get close enough for all practical purposes!"

mathematics engineering

From: LJGOLD01#NoSpam.ulkyvm.louisville.edu
A businessman needed to employ a quantitative type person. He wasn't sure if he should get a mathematician, an engineer, or an applied mathematician. As it happened, all the applicants were male. The businessman devised a test. The mathematician came first. Miss How, the administrative assistant, took him into the hall. At the end of the hall, lounging on a couch, was a beautiful woman. Miss How said, "You may only go half the distance at a time. When you reach the end, you may kiss our model."
The mathematician explained how he would never get there in a finite number of iterations and politely excused himself. Then came the engineer. He quickly bounded halfway down the hall, then halfway again, and so on. Soon he declared he was well within accepted error tolerance and grabbed the beautiful woman and kissed her. Finally it was the applied mathematician's turn. Miss How explained the rules. The applied mathematician listened politely, then grabbed Miss How and gave her a big smooch. "What was that about?" she cried. "Well, you see I'm an applied mathematician. If I can't solve the problem, I change it!"

physics engineering computer science

From: pascual#NoSpam.tid.es (Pascual de Juan Nuqez)
Three men, a physicist, an engineer and a computer scientist, are travelling in a car. Suddenly, the car starts to smoke and stops. The three astonished men try to solve the problem:
- The physicist says: This is obviously a classic problem of torque. The elasticity limit of the main axis has been overloaded.
- The engineer says: Let's be serious! The matter is that the spark of the connecting rod to the dynamo of the radiator has been burned. I can easily repair it by hammering.
- The computer scientist says: What if we get off the car, wait a minute, and then get in and try again?

engineering computer science

From: Dave Murray <u01dagm#NoSpam.abdn.ac.uk>
There are a comp sci student, an engineering student and a meteorology student going through the desert in a jeep. Suddenly the jeep stops and they're left sitting there wondering what happened.
The Eng student pipes up, "Must be the fan belt that's broken... the engine has overheated... so we'll just have to wait till it cools down, bodge the fan belt and we'll be fine."
The meteorology student replies, "Naw, it's not that... it's just the ambient heat in this place.
It's not allowing the engine to breathe correctly... we just have to wait till night."
The comp sci student thinks about this for a minute then says, "Yeah, you might be right, but I've got an idea... What say we all get out... then get back in again?"

mathematics engineering computer science

An engineer, a mathematician, and a computer programmer are driving down the road when the car they are in gets a flat tire. The engineer says that they should buy a new car. The mathematician says they should sell the old tire and buy a new one. The computer programmer says they should drive the car around the block and see if the tire fixes itself.

mathematics biology computer science

A biologist, a statistician, a mathematician and a computer scientist are on a photo-safari in Africa. They drive out into the savannah in their jeep, stop and scour the horizon with their binoculars.
The biologist: "Look! There's a herd of zebras! And there, in the middle: a white zebra! It's fantastic! There are white zebras! We'll be famous!"
The statistician: "It's not significant. We only know there's one white zebra."
The mathematician: "Actually, we know there exists a zebra which is white on one side."
The computer scientist: "Oh no! A special case!"

mathematics physics computer science

A philosopher, a physicist, a mathematician and a computer scientist were travelling through Scotland when they saw a black sheep through the window of the train. "Aha," says the philosopher, "I see that Scottish sheep are black." "Hmm," says the physicist, "You mean that some Scottish sheep are black." "No," says the mathematician, "All we know is that there is at least one sheep in Scotland, and that at least one side of that one sheep is black." "Oh, no!" shouts the computer scientist, "A special case!"

Sherlock Holmes and Dr.
Watson were travelling on the same train when they passed the same field full of sheep. "Look at that solitary black sheep among all those white ones," said Watson to Holmes. "Yes Watson, the ratio of black sheep to white in that field is one black to three hundred and seventeen white," replied Holmes. "But how can you be so precise?" said Watson, flabbergasted. "Elementary, my dear Watson," replied Holmes, "I counted all of the legs and divided by four!"

mathematics physics engineering

A mathematician, an engineer, and a physicist are being interviewed for a job. In each case, the interview goes along famously until the last question is asked: "How much is one plus one?" Each of them suspects a trap, and is hesitant to answer. The mathematician thinks for a moment, and says "I'm not sure, but I think it converges". The physicist says "I'm not sure, but I think it's on the order of one". The engineer gets up, closes the door to the office, and says "How much do you want it to be?"

A doctor, a lawyer and a mathematician were discussing the relative merits of having a wife or a mistress. The lawyer says: "For sure a mistress is better. If you have a wife and want a divorce, it causes all sorts of legal problems." The doctor says: "It's better to have a wife because the sense of security lowers your stress and is good for your health." The mathematician says: "You're both wrong. It's best to have both so that when the wife thinks you're with the mistress and the mistress thinks you're with your wife --- you can do some mathematics."

mathematics physics biology

A Mathematician, a Biologist and a Physicist are sitting in a street cafe watching people going in and coming out of the house on the other side of the street. First they see two people going into the house. Time passes.
After a while they notice three persons coming out of the house.
The Physicist: "The measurement wasn't accurate."
The Biologist's conclusion: "They have reproduced."
The Mathematician: "If now exactly 1 person enters the house then it will be empty again."

mathematics engineering

There were two men trying to decide what to do for a living. They went to see a counselor, and he decided that they had good problem solving skills. He tried a test to narrow the area of specialty. He put each man in a room with a stove, a table, and a pot of water on the table. He said "Boil the water". Both men moved the pot from the table to the stove and turned on the burner to boil the water. Next, he put them into a room with a stove, a table, and a pot of water on the floor. Again, he said "Boil the water". The first man put the pot on the stove and turned on the burner. The counselor told him to be an Engineer, because he could solve each problem individually. The second man moved the pot from the floor to the table, and then moved the pot from the table to the stove and turned on the burner. The counselor told him to be a mathematician because he reduced the problem to a previously solved problem.

Three engineering students were gathered together discussing the possible designers of the human body. One said, ``It was a mechanical engineer. Just look at all the joints.'' Another said, ``No, it was an electrical engineer. The nervous system has many thousands of electrical connections.'' The last said, ``Actually it was a civil engineer. Who else would run a toxic waste pipeline through a recreational area?''

mathematics physics engineering

An engineer, a physicist, and a mathematician are shown a pasture with a herd of sheep, and told to put them inside the smallest possible amount of fence. The engineer is first.
He herds the sheep into a circle and then puts the fence around them, declaring, "A circle will use the least fence for a given area, so this is the best solution." The physicist is next. She creates a circular fence of infinite radius around the sheep, and then draws the fence tight around the herd, declaring, "This will give the smallest circular fence around the herd." The mathematician is last. After giving the problem a little thought, he puts a small fence around himself and then declares, "I define myself to be on the outside!"

mathematics physics engineering

One day a farmer called up an engineer, a physicist, and a mathematician and asked them to fence off the largest possible area with the least amount of fence. The engineer made the fence in a circle and proclaimed that he had the most efficient design. The physicist made a long, straight line and proclaimed 'We can assume the length is infinite...' and pointed out that fencing off half of the Earth was certainly a more efficient way to do it. The Mathematician just laughed at them. He built a tiny fence around himself and said 'I declare myself to be on the outside.'

chemistry engineering

Four men were sitting one day discussing how smart their dogs were. The first man was an Engineer, who said his dog could do math. His dog was named T-Square, and he told him to get some paper and draw a square, a circle, and a triangle, which the dog did with no sweat. The Accountant said that his dog was better.
Then they turned to the Union Member and asked, what can your dog do? The Union Member called his dog, who was named Coffee Break, and said, "Show the fellows what you can do". Coffee Break went over and ate the cookies, drank the milk, shit on the paper, fucked the other dogs, and claimed he injured his back while doing so, filed a grievence report for unsafe working conditions, put in for Workmens Compensation, and left for home on sick leave. mathematics physics [Top of page] [Bottom of page] [Index] [Send comment] A mathematician and a physicist are given the task of describing a room. They both go in, and spend hours meticulously writing down every detail, each turning in nearly a ream of paper. The next day, the room is changed, and they are again given the task. The physicist spends the better part of the day, but the mathematician, amazingly enough, leaves within a minute. he hands in a single sheet of paper with the following Put picture back on wall to return to previously solved state. mathematics engineering [Top of page] [Bottom of page] [Index] [Send comment] To tell a difference between a mathematician and an engineer, perform this experiment. Put an empty kettle in the middle of the kitchen floor and tell your subjects to boil some water. The engineer will fill the kettle with water, put it on the stove, and turn the flame on. The mathematician will do the same thing. Next, put the kettle already filled with water on the stove, and ask the subjects to boil the water. The engineer will turn the flame on. The mathematician will empty the kettle and put it in the middle of the kitchen floor... thereby reducing the problem to one that has already been solved! mathematics physics engineering [Top of page] [Bottom of page] [Index] [Send comment] A Mathematician (M) and an Engineer (E) attend a lecture by a Physicist. The topic concerns Kulza-Klein theories involving physical processes that occur in spaces with dimensions of 9, 12 and even higher. 
The M is sitting, clearly enjoying the lecture, while the E is frowning and looking generally confused and puzzled. By the end the E has a terrible headache. At the end, the M comments about the wonderful lecture. The E says "How do you understand this stuff?"
M: "I just visualize the process."
E: "How can you POSSIBLY visualize something that occurs in 9-dimensional space?"
M: "Easy, first visualize it in N-dimensional space, then let N go to 9."

mathematics physics engineering

When considering the behaviour of a howitzer:
A mathematician will be able to calculate where the shell will land.
A physicist will be able to explain how the shell gets there.
An engineer will stand there and try to catch it.

mathematics physics engineering

From: "Frank Kosanke" <digger#NoSpam.htb.de> (Blame translation from German on Joachim)
A physicist, an engineer and a mathematician make their first parachute jump. Before the jump the instructor explains exactly what they must do: jump out of the plane, count to three and pull the line.
The physicist jumps. For him counting to three is too inexact and too primitive. Instead, he calculates from his height, angle and velocity the exact moment he should pull the line for a soft landing, and arrives safely.
The engineer is a practical man and thinks counting to three is too unreliable and therefore dangerous... He jumps and pulls the line immediately. He takes a bit longer than the physicist but he lands safely.
Both see the mathematician jump out of the plane. He falls ... and falls ... and falls ... No parachute opens and finally he falls on the ground. Fortunately, he lands in a haystack.
The physicist and engineer walk alarmed to the haystack, and while they dig him out they hear him say: "From this it follows, by complete induction: 3!"

mathematics physics chemistry biology

The USDA once wanted to make cows produce milk faster, to improve the dairy industry. So, they decided to consult the foremost biologists and recombinant DNA technicians to build them a better cow. They assembled this team of great scientists, and gave them unlimited funding. They requested rare chemicals, weird bacteria, and tons of quarantine equipment; there was a horrible typhus epidemic they started by accident; and, 2 years later, they came back with the "new, improved cow." It had a milk production improvement of 2% over the original.
They then tried with the greatest Nobel Prize winning chemists around. They worked for six months, and, after requisitioning tons of chemical equipment, and poisoning half the small town in Colorado where they were working with a toxic cloud from one of their experiments, they got a 5% improvement in milk output.
The physicists tried for a year, and, after ten thousand cows were subjected to radiation therapy, they got a 1% improvement in output.
Finally, in desperation, they turned to the mathematicians. The foremost mathematician of his time offered to help them with the problem. Upon hearing the problem, he told the delegation that they could come back in the morning and he would have solved the problem. In the morning, they came back, and he handed them a piece of paper with the computations for the new, 300% improved milk cow.
The plans began: "A Proof of the Attainability of Increased Milk Output from Bovines: Consider a spherical cow......"

mathematics physics chemistry engineering biology

An assemblage of the most gifted minds in the world was posed the following question: "What is 2 * 2 ?"
The chemist immediately says "circa 10 to the power 1."
The engineer whips out his slide rule (so it's old) and shuffles it back and forth, and finally announces "3.99".
The physicist consults his technical references, sets up the problem on his computer, and announces "it lies between 3.98 and 4.02".
The mathematician cogitates for a while, oblivious to the rest of the world, then announces: "I don't know what the answer is, but I can tell you, an answer exists!"
Philosopher: "But what do you _mean_ by 2 * 2 ?"
Logician: "Please define 2 * 2 more precisely."
Accountant: Closes all the doors and windows, looks around carefully, then asks "What do you _want_ the answer to be?"
Computer Hacker: Breaks into the NSA super-computer and gives the answer.
From: Tony Quinn <tonyquin#NoSpam.sixpints.demon.co.uk>
Stress engineer: Well I know it's 4, but let's call it 50 anyway.......
From: Detlef_Wendt#NoSpam.SU2.maus.de (Detlef Wendt) (blame JV for translation)
The psychologist: Why do you wish to know that?
The sociologist: I don't know, but it was nice talking about it.
From: bhunt <bhunt#NoSpam.DEPAUW.EDU>
Behavioral Ecologist: A polygamous mating system.
From: Carsten Knop <Carsten.Knop#NoSpam.inis.de>
Medical Student: 4
All others, looking astonished: How did you know??
Medical Student: I memorized it.

mathematics engineering

From: pclarke#NoSpam.waite.adelaide.edu.au (Philip Clarke)
An Engineer, Statistician and Economist were asked "what does 2 + 2 equal?" They answered as follows:
Engineer: With a safety factor of 2x, 2 + 2 = 8
Statistician: With a degree of freedom of 1, 2 + 2 = anywhere from 1 to 7, but I can't be sure.
Economist: What would you like it to equal?
mathematics physics

From: MARTIN.VIETOR#NoSpam.HEIDEBOX.HEIDE.DE (Translation to blame on Joachim)
A mathematician, a physicist and a doctor were posed the question "What is 2*2?"
The physicist takes a notebook and starts scribbling. After 3 days of the most complex calculations, using the Earth's radius and the gravitational constant, he finds: "Somewhere between pi and 2 times the square root of 3."
The mathematician comes back after a week with dark rings under his eyes and proclaims: "Colleagues, there is a solution."
The doctor says simply: "4".
The others answer: "Oh well, you memorized it."

mathematics physics computer science

From: "Frank Kosanke" <digger#NoSpam.htb.de> (Blame Joachim for translation from German)
And yet another variation: A physicist, a computer scientist and a mathematician must calculate what 2 + 2 is.
The physicist constructs, out of slopes and balls and so on, a complicated measuring system and finds 3.99998 as the solution. "Measuring errors are possible, of course."
The computer scientist writes a 24-page Pascal program that spits out 4.000001 as the solution. "Going from a binary to a decimal system and back can cause inaccuracies."
The mathematician buries himself in his books and writes complicated expressions on thousands of pieces of paper. Then he proves that there is only one solution, and that it is calculable.

mathematics physics

From: carrt#NoSpam.ix.netcom.com (Tim Carr)
Three people answered an ad for an open job - an engineer, a physicist and a statistician.
When the engineer went in, he was asked:
Q: "What is two plus two?"
A: "Four."
When the physicist went in, he was asked the same question:
Q: "What is two plus two?"
A: "Four."
The statistician went in next. When the question was posed to him, he looked around furtively, shut the door and drew the blinds closed.
His answer: "What do you want it to be?"

mathematics physics engineering

The Board of Trustees, not convinced by the performance in a previous joke, decides to test the Profs. again. First they take a Math Prof. and put him in a room. Now, the room contains a table and three metal spheres about the size of softballs. They tell him to do whatever he wants with the balls and the table in one hour. After an hour, he comes out, and the Trustees look in and the balls are arranged in a triangle at the center of the table.
Next, they give the same test to a Physics Prof. After an hour, they look in, and the balls are stacked one on top of the other in the center of the table.
Finally, they give the test to an Engineering Prof. After an hour, they look in and one of the balls is broken, one is missing, and he's carrying the third out in his lunchbox.

physics engineering

From: "F. Ted Tschang" <ft0d+#NoSpam.andrew.cmu.edu>
An economist, an engineer, and a physicist are marooned on a deserted island. One day they find a can of food washed up on the beach and contrive to open it.
The engineer said: "Let's hammer the can open between these rocks".
The physicist said: "That's pretty crude. We can just use the force of gravity by dropping a rock on the can from that tall tree over there".
The economist is somewhat disgusted at these deliberations, and says: "I've got a much more elegant solution. All we have to do is assume a can-opener."

In some foreign country a priest, a lawyer and an engineer are about to be guillotined. The priest puts his head on the block, they pull the rope and nothing happens -- he declares that he's been saved by divine intervention -- so he's let go. The lawyer is put on the block, and again the rope doesn't release the blade; he claims he can't be executed twice for the same crime and he is set free too.
They grab the engineer and shove his head into the guillotine. He looks up at the release mechanism and says, "Wait a minute, I see your problem......"

mathematics physics engineering

An engineer, a mathematician, and a physicist went to the races one Saturday and laid their money down. Commiserating in the bar after the race, the engineer says, "I don't understand why I lost all my money. I measured all the horses and calculated their strength and mechanical advantage and figured out how fast they could run..."
The physicist interrupted him: "...but you didn't take individual variations into account. I did a statistical analysis of their previous performances and bet on the horses with the highest probability of winning..."
"...so if you're so hot why are you broke?" asked the engineer. But before the argument can grow, the mathematician takes out his pipe and they get a glimpse of his well-fattened wallet. Obviously here was a man who knew something about horses. They both demanded to know his secret.
"Well," he says, between puffs on the pipe, "first I assumed all the horses were identical and spherical..."

mathematics physics biology

A group of wealthy investors wanted to be able to predict the outcome of a horse race. So they hired a group of biologists, a group of statisticians, and a group of physicists. Each group was given a year to research the issue. After one year, the groups all reported to the investors.
The biologists said that they could genetically engineer an unbeatable racehorse, but it would take 200 years and $100 billion.
The statisticians reported next. They said that they could predict the outcome of any race, at a cost of $100 million per race, and they would only be right 10% of the time.
Finally, the physicists reported that they could also predict the outcome of any race, and that their process was cheap and simple.
The investors listened eagerly to this proposal. The head physicist reported, "We have made several simplifying assumptions... first, let each horse be a perfect rolling sphere..."

mathematics physics engineering

A group of scientists were doing an investigation into problem-solving techniques, and constructed an experiment involving a physicist, an engineer, and a mathematician. The experimental apparatus consisted of a water spigot and two identical pails, one of which was fastened to the ground ten feet from the spigot. Each of the subjects was given the second pail, empty, and told to fill the pail on the ground.
The physicist was the first subject: he carried his pail to the spigot, filled it there, carried it full of water to the pail on the ground, and poured the water into it. Standing back, he declared, "There: I have solved the problem." The engineer and the mathematician each approached the problem similarly. Upon finishing, the engineer noted that the solution was exact, since the volumes of the pails were equal. The mathematician merely noted that he had proven that a solution exists.
Now, the experimenters altered the parameters of the task a bit: the pail on the ground was still empty, but the subjects were presented with a pail that was already half-filled with water.
The physicist immediately carried his pail over to the one on the ground, emptied the water into it, went back to the spigot, *filled* the pail, and finally emptied the entire contents into the pail on the ground, overflowing it and spilling some of the water. Upon finishing, he commented that the problem should have been better stated.
The engineer, in turn, thought for some time before going into action. He then took his half-filled pail to the spigot, filled it to the brim, and filled the pail on the ground from it. Again he noted that the problem had an exact solution, which of course he had found.
The mathematician thought for a long time before stirring. At last he stood up, emptied his pail onto the ground, and declared, "The problem has been reduced to one already solved."

computer science

A doctor, an architect, and a computer scientist were arguing about whose profession was the oldest. In the course of their arguments, they got all the way back to the Garden of Eden, whereupon the doctor said, "The medical profession is clearly the oldest, because Eve was made from Adam's rib, as the story goes, and that was a simply incredible surgical feat."
The architect did not agree. He said, "But if you look at the Garden itself, in the beginning there was chaos and void, and out of that, the Garden and the world were created. So God must have been an architect."
The computer scientist, who had listened to all of this, said, "Yes, but where do you think the chaos came from?"

mathematics physics engineering biology

From: mstueben#NoSpam.pen.k12.va.us (Michael A. Stueben)
The biologist says "I study the principles of life."
The psychologist says "You are controlled by the principles of life."
The businessman says "My business can use its force to control the economy."
The economist says "The forces of the economy will control your business."
The engineer says: "My equations are a model of the universe."
The physicist says: "The universe is a model of my equations."
The mathematician says: "I don't care."

physics chemistry engineering

From: chemistrwb#NoSpam.aol.com (ChemistRWB)
A chemist, a physicist and an engineer went on a camping trip, accompanied by a guide. They were brought to a cabin in the deep Canadian wilderness. Inside the cabin was a wood-burning stove, but it was set up on bricks about 60 cm above the floor of the cabin. The three scientists speculated about the function of the high placement of the stove.
The chemist said, "Obviously, the guide has anticipated the convection currents of the heat and placed the stove in a raised position to maximize the heat flow in the semi-adiabatic system."
The physicist believed, "No, it's far simpler than that; the guide placed the stove higher so movement from the countertops to the stove would be minimized and energy conserved."
The engineer believed he had the true answer: "Obviously, you fellows don't do much camping. The stove is placed higher so we can bring in wood and put it under the stove to dry."
The guide soon returned, and all three scientists were eager to find out who was right. The guide replied, "Well, we was bringin' the dang thing up the river and part of the chimney pipe fell off the boat, so we had to put it up for the pipe to reach the roof."
PS: If you know all the words in this essay, your English is better than that of 99% of native Americans.

mathematics physics engineering

From: grayd#NoSpam.is.dal.ca (James D. Gray)
An Engineering student, a Physics student, and a Mathematics student were each given $150 and were told to use that money to find out exactly how tall a particular hotel was. All three ran off, extremely keen on how to do this.
The Physics student went out, purchased some stopwatches, a number of ball bearings, a calculator, and some friends. He had them all time the drop of ball bearings from the roof, and he then figured out the height from the time it took for the bearings to accelerate from rest until they impacted with the sidewalk.
The Math student waited until the sun was going down, then she took out her protractor, plumb line, measuring tape, and scratch pad, measured the length of the shadow, found the angle the building's roof made from the ground, and used trigonometry to figure out the height of the building.
These two students bumped into the Engineering student the next day, who was nursing a really bad hangover.
When asked what he did to find the height of the building he replied: "Well, I walked up to the bell hop, gave him 10 bucks, asked him how tall the hotel was, and hit the bar inside for happy hour!"

mathematics physics

From: arkoff#NoSpam.sun.lclark.edu (Gary Arkoff)
A math student and a physics student are camping. The physics student takes his turn to do the cooking first. He makes a tasty stew, but in so doing, uses up all the water. The next day, it is the math student's turn to do the cooking. The physics student watches him go to the creek to fetch the water. He puts the water into the pot and then stops and goes off to do something else. Puzzled, the physics student asks the math student when he is going to finish making dinner. The math student tells him that there is nothing left to do, as it has now been reduced to a problem which has already been solved.

mathematics physics engineering

From: spencer#NoSpam.cwis.unomaha.edu (Tom Spencer)
A mathematician, a physicist and an engineer were all umpiring a softball game. The batter hit a fly ball to the outfield that was not caught. All the runners who were on base scored easily and the batter tried to turn it into an inside-the-park home run. It became clear that there would be a close play at the plate and all three umpires rushed into position to make the call. They all called the batter out. The captain of the batting team went out to argue and demanded "Why is he out?"
The engineer said "He looked out to me, so he's out."
The physicist said "I watched very carefully, and I saw that, at the moment that the batter was tagged, he had not touched home plate; so he's out."
The mathematician said "He's out because I called him out."
mathematics engineering

From: agdoll#NoSpam.wimsey.com (Alex Doll)
Ask a surveyor, a statistician, and an engineer to measure a 4 cm piece of string:
The surveyor gets out his tripod, gets an assistant to hold the rod, then compensates for temperature and declares that the string is 4.000 cm long.
The statistician takes a ruler marked in metres and makes (n^-1)/(1-1/n)! measurements before declaring that the string is between 1 cm and 10 cm 90 percent of the time.
The engineer takes out a pair of scissors and asks "How long do you want it to be?"

mathematics physics engineering

From: j.p.openshaw#NoSpam.swansea.ac.uk (John Openshaw)
A mathematician, a physicist and an engineer all have to nip to the loo. The M has a leak, and then sprinkles a few drops of water on his hands, turns to the attendant and says 'Mathematicians learn to be concise'. The P has a turn, spends 5 minutes scrubbing his hands, then turns to the attendant and says 'Physicists learn to be thorough'. The engineer has a wee, doesn't bother washing his hands, turns to the attendant and says 'Engineers learn not to pee all over their hands'.

mathematics physics engineering

From: Alexis Monnerot-Dumaine <alexis.monnerot-dumaine#NoSpam.bnpgroup.com>
A mathematician, a physicist and an engineer are on vacation in Paris at their friend Jean-Pierre's.
- How high exactly is that Eiffel Tower? asks the mathematician.
- I've got an idea, replied Jean-Pierre. How about guessing it, and the winner wins a good dinner in a good restaurant? What do you think?
- All right, says the physicist, ...but let's take some time and meet tomorrow at 10 a.m., OK?
- OK.
As the mathematician and the physicist stay to think on the problem, the engineer leaves: "Sorry, I've got a date, see you tomorrow."
The next morning, the friends meet at the bottom of the Eiffel Tower.
- So, what's your estimate? asked Jean-Pierre.
- Well, says the mathematician, I measured the length of the shadow of the tower and, according to the position of the sun, date and time GMT, a simple trigonometric calculation gave me 320.68 metres.
- Not a bad idea, replied Jean-Pierre, but not quite the right answer. What about you?
- Well, says the physicist, I climbed the stairs up to the top of the tower, then I started a chronograph and dropped it immediately. As it hit the ground, it broke, indicating the duration of the fall. Considering Newton's equations and the viscosity of the air, my calculations gave me 321.9 metres.
- That's a bit better, but not the right answer, says Jean-Pierre. But where is our engineer?
The engineer arrives:
- Sorry, I'm late, but, whoa, what a night I had!
- So, what about our little bet? asked the physicist.
- Our bet? What bet? Oh yes, the Eiffel Tower! I forgot... err... just wait here a moment.
He walks off and comes back 2 minutes later:
- The Eiffel Tower is 321.50 metres high.
- That's absolutely right, says Jean-Pierre, you won the bet!
The mathematician and the physicist are puzzled:
- How did you do it?
And the engineer replies:
- Oh... well... quite simple, in fact... I just went to that café over there... and asked the waiter.

engineering computer science

From: Russell Turner <turnerr#NoSpam.actrix.gen.nz>
Once upon a time, in a kingdom not far from here, a king summoned two of his advisors for a test. He showed them both a shiny metal box with two slots in the top, a control knob, and a lever. "What do you think this is?"
One advisor, an engineer, answered first. "It is a toaster," he said. The king asked, "How would you design an embedded computer for it?"
The engineer replied, "Using a four-bit microcontroller, I would write a simple program that reads the darkness knob and quantizes its position to one of 16 shades of darkness, from snow white to coal black. The program would use that darkness level as the index to a 16-element table of initial timer values. Then it would turn on the heating elements and start the timer with the initial value selected from the table. At the end of the time delay, it would turn off the heat and pop up the toast. Come back next week, and I'll show you a working prototype."
The second advisor, a computer scientist, immediately recognized the danger of such short-sighted thinking. He said, "Toasters don't just turn bread into toast, they are also used to warm frozen waffles. What you see before you is really a breakfast food cooker. As the subjects of your kingdom become more sophisticated, they will demand more capabilities. They will need a breakfast food cooker that can also cook sausage, fry bacon, and make scrambled eggs. A toaster that only makes toast will soon be obsolete. If we don't look to the future, we will have to completely redesign the toaster in just a few years."
"With this in mind, we can formulate a more intelligent solution to the problem. First, create a class of breakfast foods. Specialize this class into subclasses: grains, pork, and poultry. The specialization process should be repeated with grains divided into toast, muffins, pancakes, and waffles; pork divided into sausage, links, and bacon; and poultry divided into scrambled eggs, hard-boiled eggs, poached eggs, fried eggs, and various omelet classes."
"The ham and cheese omelet class is worth special attention because it must inherit characteristics from the pork, dairy, and poultry classes. Thus, we see that the problem cannot be properly solved without multiple inheritance. At run time, the program must create the proper object and send a message to the object that says, 'Cook yourself.'
The semantics of this message depend, of course, on the kind of object, so they have a different meaning to a piece of toast than to scrambled eggs."
"Reviewing the process so far, we see that the analysis phase has revealed that the primary requirement is to cook any kind of breakfast food. In the design phase, we have discovered some derived requirements. Specifically, we need an object-oriented language with multiple inheritance. Of course, users don't want the eggs to get cold while the bacon is frying, so concurrent processing is required, too."
"We must not forget the user interface. The lever that lowers the food lacks versatility, and the darkness knob is confusing. Users won't buy the product unless it has a user-friendly, graphical interface. When the breakfast cooker is plugged in, users should see a cowboy boot on the screen. Users click on it, and the message 'Booting UNIX v.8.3' appears on the screen. (UNIX 8.3 should be out by the time the product gets to the market.) Users can pull down a menu and click on the foods they want to cook."
"Having made the wise decision of specifying the software first in the design phase, all that remains is to pick an adequate hardware platform for the implementation phase. An Intel 80386 with 8MB of memory, a 30MB hard disk, and a VGA monitor should be sufficient. If you select a multitasking, object-oriented language that supports multiple inheritance and has a built-in GUI, writing the program will be a snap. (Imagine the difficulty we would have had if we had foolishly allowed a hardware-first design strategy to lock us into a four-bit microcontroller!)"
The king wisely had the computer scientist beheaded, and they all lived happily ever after.

mathematics physics engineering

From: Rich Griffiths <richg#NoSpam.cybercomm.net>
A mathematician and a physicist are trying to measure the height of a flag pole using a long tape measure.
The mathematician takes the tape measure, walks up to the flag pole, and begins to shinny up the pole. A short way up, he slips and falls down. The physicist notices a ladder lying nearby in the bushes. He leans the ladder against the pole, but it reaches only half way up. He climbs the ladder and tries to shinny up from there, but he also slips and falls.
While they sit near the pole scratching their heads, an engineer walks by, so the mathematician and the physicist tell him their problem. The engineer notices a crank at the base of the flag pole. He turns the crank, and the flag pole tilts over until it lies on the ground. The engineer stretches out the tape measure, cranks the pole back up, and tells the mathematician and the physicist: 'It is 15 meters.'
As the engineer walks off into the distance, the mathematician looks at the physicist and says: 'Isn't that just like an engineer? You ask him for the height, and he gives you the length.'

A team of engineers were required to measure the height of a flag pole. They only had a measuring tape, and were getting quite frustrated trying to keep the tape along the pole. It kept falling down, etc. A mathematician comes along, finds out their problem, and proceeds to remove the pole from the ground and measure it easily. When he leaves, one engineer says to the other: "Just like a mathematician! We need to know the height, and he gives us the length!"

mathematics physics engineering

From: Henry Cate's Life collection
A mathematician, scientist, and engineer are each asked: "Suppose we define a horse's tail to be a leg. How many legs does a horse have?" The mathematician answers "5"; the scientist "1"; and the engineer says "But you can't do that!"

mathematics physics engineering

From: Henry Cate's Life collection
There are three umpires at a baseball game.
One is an engineer, one is a physicist, and one is a mathematician. There is a close play at home plate and all three umpires call the man out. The manager runs out of the dugout and asks each umpire why the man was called out.
The physicist says "He's out because I calls 'em as I sees 'em".
The engineer says "He's out because I calls 'em as they are".
And the mathematician says "He's out because I called him out".

physics engineering

From: oldbear#NoSpam.arctos.com (The Old Bear)
In an effort to determine the department which produces the most intelligent graduates, a university president threw down a challenge to the deans of the schools of science, engineering, and business. He asked each to send him their brightest student from the current graduating class to compete in solving a simple problem.
The next day, three students showed up at the university president's office. He explained the problem as follows: "I want you to determine the height of the university's newest residence tower. I am giving each of you only three tools to work with: a stop watch, a ruler and a ball of string. You are each to devise your own solution to the problem and report back here by the end of the day. Whoever has the most accurate answer wins."
The three students set off to the new residence tower. The science major went immediately to the roof of the building and dropped the ruler over the side, carefully timing its descent with the stop watch. Factoring in the aerodynamic properties of the ruler, the science major calculated the height of the building within six inches.
Next the engineering major, still panting from running up all the stairs to the roof, took his turn. He tied the stop watch onto the end of the ball of string and gently lowered it until it just touched the ground.
Reeling the string back up, he measured it carefully with the ruler, making adjustments for its elasticity under the weight of the stop watch, and calculated the height of the building within two inches.
At that point, the science major turns to the engineering major and asks, "What happened to the kid from the business school? I thought he was right behind us." They head back down to the building lobby and there, sitting comfortably in an upholstered chair, is the business major.
"So, what are you going to do?" asks the science major.
"Oh, I'm done," says the business major, unfolding a piece of paper on which is written the height of the building expressed to the last one-eighth inch.
"How did you do that?" asks the engineering major.
"Simple," replies the student from the business school. "While you guys were screwing around up on the roof, I went down to the basement and found the building superintendent. I told him I'd give him a nice stop watch if he'd let me look through the architectural plans for the building."
There are a number of these kinds of stories (which are somewhat similar in structure to the many "There was a priest, a minister and a rabbi..." jokes).

mathematics engineering

From: The Ghost In The Machine <ewill#NoSpam.sirius.athghost7038suus.net>
A mathematician, an engineer, and an average Joe walk into a bar. The mathematician immediately orders a pie. The engineer immediately orders an 'e', since it's Euler's number, after all, and many engineers have to oil things. The average Joe doesn't exist, being a statistical anomaly.

mathematics engineering computer science

March 14: What is "pi"?
Mathematician: Pi is the ratio of the circumference of a circle to its diameter.
Engineer: Pi is about 22/7.
Computer Programmer: Pi is 3.141592653589 in double precision.
Nutritionist: You one-track math-minded fellows, Pie is a healthy and delicious dessert!
mathematics physics engineering

An engineer, a physicist, a mathematician, and a mystic were asked to name the greatest invention of all time.
The engineer chose fire, which gave humanity power over matter.
The physicist chose the wheel, which gave humanity power over space.
The mathematician chose the alphabet, which gave humanity power over symbols.
The mystic chose the thermos bottle.
"Why a thermos bottle?" the others asked.
"Because the thermos keeps hot liquids hot in winter and cold liquids cold in summer."
"Yes -- so what?"
"Think about it," said the mystic reverently. "That little bottle -- how does it *know*?"

mathematics physics engineering

An engineer, a physicist and a mathematician find themselves in an anecdote, indeed an anecdote quite similar to many that you have no doubt already heard. After some observations and rough calculations the engineer realizes the situation and starts laughing. A few minutes later the physicist understands too and chuckles to himself happily as he now has enough experimental evidence to publish a paper. This leaves the mathematician somewhat perplexed, as he had observed right away that he was the subject of an anecdote, and deduced quite rapidly the presence of humour from similar anecdotes, but considers this anecdote to be too trivial a corollary to be significant, let alone funny.
Generalized native spaces. 2008. Book title: Approximation Theory XII: San Antonio 2007, Nashboro Press.

Given a strictly positive definite function, we generalize the usual reproducing kernel Hilbert space type of native space construction in order to create $L^p$-based types of native spaces for $1 < p \le 2$. These spaces are Banach spaces, but when $p = 2$ we recover the usual native space. While giving up on the Hilbert part of the RKHS framework, we are still able to recover function values with the help of Fourier transforms, since we are using strictly positive definite functions defined on all of $\mathbb{R}^d$. We obtain generalized generic power function error estimates.

Erickson, J. F., Fasshauer, G. E. 2008. Generalized native spaces. Approximation Theory XII: San Antonio 2007: 133–142.
Get homework help at HomeworkMarket.com

Submitted Wed, 2012-06-13 08:27; due Fri, 2012-06-15 08:25; answered 2 time(s).

Balance the Assembly Line (Operations Management)

Balance the assembly line for the tasks contained in the table. The desired output is 240 units per day. Available production time per day is 480 minutes.

Work Element  Time (Sec.)  Immediate Predecessor(s)
A             40           ---
B             45           ---
C             55           A
D             55           B
E             65           B
F             40           C, D
G             25           D, E

a) What is the desired cycle time in seconds?
b) What is the theoretical minimum number of stations?
c) Use trial and error to work out a solution in the table below. Your efficiency should be at least 90%. (Columns: Station, Candidates, Choice, Work-Element Time (Sec), Cumulative Time (Sec), Idle Time.)
d) Calculate the efficiency of your solution.

First answer (submitted Thu, 2012-06-14 16:46; file1.docx, partially obscured preview). The recoverable steps:
a) Production time per day = 480 minutes = 8 hours. The desired output rate r is 240 units per day = 240/8 = 30 units per hour. Hence the desired cycle time is c = 120 seconds.
b) Theoretical minimum, TM = (sum of all task times)/c = (40 + 45 + 55 + 55 + 65 + 40 + 25)/c ≈ 2.71, i.e. 3 stations.

Second answer (by StatSolver): Please see the attachment for the solution.
If you need any clarification, please ask me. Thanks.

From the file1.doc preview (partially obscured):
a) What is the desired cycle time in seconds?
Answer: Cycle time c = (480 minutes / 240 units)(60 seconds/minute) = 120 seconds/unit.
b) What is the theoretical minimum number of stations?
Answer: The theoretical minimum number of stations is TM = (sum of task times)/c = (40 + 45 + 55 + 55 + 65 + 40 + 25)/120 = 325/120 = 2.7083, or 3 stations.
c) Use trial and error to work out a solution in the table below...
- - - more text follows - - -
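The arithmetic both answers rely on can be checked with a short script. This is my own sketch of the standard line-balancing formulas, using the task times and demand from the problem statement; it is not either tutor's attachment:

```python
import math

# Task times (seconds) and demand from the problem statement.
task_times = {"A": 40, "B": 45, "C": 55, "D": 55, "E": 65, "F": 40, "G": 25}
available_seconds = 480 * 60   # 480 minutes of production time per day
desired_output = 240           # units per day

# a) Desired cycle time: available time divided by required output.
cycle_time = available_seconds // desired_output     # 120 seconds per unit

# b) Theoretical minimum number of stations: total work content
#    divided by the cycle time, rounded up.
total_work = sum(task_times.values())                # 325 seconds
min_stations = math.ceil(total_work / cycle_time)    # ceil(2.708...) = 3

print(cycle_time, total_work, min_stations)          # 120 325 3
```

By the usual efficiency formula, a 3-station solution with a 120-second cycle can reach at most 325 / (3 × 120) ≈ 90.3%, which is consistent with part c)'s "at least 90%" requirement.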
Physics Forums - View Single Post - Converting base 10 to Roman Numerals

OK, I will do that. In line 7, when he sets the int p value equal to 100, why does he do it? Does it help convert from base 10 to Roman numerals? Line 10 is the one I don't really understand, and I take it it's pretty important.
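For context, programs like the one discussed here usually use a greedy conversion: repeatedly subtract the largest place value (100 is one of them) that still fits. The sketch below is a generic illustration of that idea, not the specific program (or its lines 7 and 10) from the thread:

```python
# Greedy base-10 -> Roman numeral conversion: at each step, emit the
# largest symbol whose value still fits, and subtract it.
VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    out = []
    for value, symbol in VALUES:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(1987))  # MCMLXXXVII
```

The pairs like (900, "CM") handle the subtractive forms, so the main loop never needs special cases.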
Posts about QFT on Secret Blogging Seminar

Posted by Chris Schommer-Pries in mathematical physics, QFT, quantum topology, talks, tqft, Uncategorized. Tags: tqft comments closed

In February there is going to be a workshop and school dedicated to exploring the interactions of Quantum Gravity, Higher Gauge Theory, and Topological Field Theory. I'm excited about the chance to share ideas and hopefully create some new mathematics. The conference will take place in Lisbon, Portugal, and yours truly will be giving one of the mini-courses for the school (the topic is going to be the classification of extended 2D tqfts, something near and dear to my heart). Of course that makes me really excited, but I am also excited about the other topics too and I think the mix of ideas will be invigorating. For more info look below the fold.

Posted by Ben Webster in conferences, QFT. comments closed

I wanted to take a moment to plug a conference in my soon-to-be hometown Eugene, OR organized by my once and future colleague Nick Proudfoot. Aside from Eugene being lovely in August, I felt this conference was worth a post because it's something of a unique format. Rather than being a bunch of experts on the subject (as it says in the title, the subject is the conjunction of operator algebras and CFT) getting together and giving talks that only they understand, it will be aimed at being educational for graduate students and interested non-experts (such as myself). The format is a bit similar to that of Talbot. In particular, in addition to an organizer (Nick) it has a "leader" who is in charge of mathematical content (but will delegate quite a few of the lectures); that will be the incomparable Andre Henriques. (more…)

Posted by Chris Schommer-Pries in differential geometry, low-dimensional topology, mathematical physics, Paper Advertisement, QFT, Shameless Self Promotion, tqft, websites.
comments closed

So I've finally managed to bang my dissertation into something more or less ready for public consumption. It is basically finished (except for some typos and spell checking). It is available on my new website. The title is "The Classification of Two-Dimensional Extended Topological Field Theories".

Posted by Chris Schommer-Pries in Algebraic Topology, groupoids, QFT, tqft. comments closed

This morning Jacob Lurie posted a draft of an expository paper on his work (with Mike Hopkins) classifying extended (infinity, n)-categorical topological field theories and their relation to the Baez-Dolan cobordism hypothesis. Should make for some interesting bedtime reading…

Posted by Chris Schommer-Pries in low-dimensional topology, Pictorial Algebra, planar algebras, QFT, quantum algebra, subfactors, talks, tqft. comments closed

This is the third and final post in my series about using planar algebras to construct TQFTs. In the first post we looked at the 2D case and came up with a master strategy for constructing TQFTs. In the last post we began carrying out that strategy in the 3-dimensional setting, but ran into some difficulties. In this post we will overcome those difficulties and build a TQFT.

Posted by Chris Schommer-Pries in low-dimensional topology, Pictorial Algebra, planar algebras, QFT, subfactors, talks, tqft. comments closed

In my last post I explained a strategy for using n-dimensional algebraic objects to construct (n+1)-dimensional TQFTs, and I went through the n=1 case: showing how a semi-simple symmetric Frobenius algebra gives rise to a 2-dimensional TQFT. But then I had to disappear and go give my talk. I didn't make it to the punchline, which is how planar algebras can give rise to 3D TQFTs! In this post I will start explaining the 3D part of the talk. I won't be able to finish before I run out of steam; that will have to wait for another post.
But I will promise to use lots of pretty pictures!

Posted by Chris Schommer-Pries in low-dimensional topology, Pictorial Algebra, planar algebras, QFT, subfactors, talks, tqft. comments closed

So today I am giving a talk in the Subfactor seminar here at Berkeley, and I thought it might be nice to write my pre-talk notes here on the blog, rather than on pieces of paper destined for the recycling bin. This talk is about how you can use planar algebra techniques to construct 3D topological quantum field theories (TQFTs) and is supposed to be introductory. We've discussed planar algebras on this blog here and here. So the first order of business: What is a TQFT?

Posted by Ben Webster in Algebraic Geometry, category O, crazy ideas, link homology, mathematical physics, QFT, talks. comments closed

I've been too lazy to write in detail about the progress in my research (well, I am writing six papers and applying to jobs, so it isn't entirely due to laziness), but I did recently speak in the symplectic seminar at MIT, and have posted the slides on my webpage. Obviously, they're less useful without someone to explain them, but given the current lack of an overarching paper on the subject (that's no. 5 on the list, I promise), I thought it might be edifying. Executive summary below the cut. (more…)

Posted by Ben Webster in Algebraic Topology, combinatorics, low-dimensional topology, QFT, topology, tqft. comments closed

So, a subject rather near and dear to the hearts of many of my fellow co-bloggers is that of 1+1-dimensional TQFT: that is, of monoidal functors from the category of 1-manifolds with morphisms given by smooth cobordisms to the category of vector spaces over your favorite field $k$. There's a rather remarkable theorem about such functors, which really deserves a post of its own for proper explanation, but I'll spoil the surprise here.
Any such functor associates a vector space $A$ to a single circle, and to the "pair of pants" cobordism it assigns a map $m:A\otimes A\to A$, which one can check is a commutative multiplication. Furthermore, the cap, thought of as a cobordism from the empty set to a circle, gives a map $i:k\to A$, which gives a unit of this algebra. Thought of as a cobordism from the circle to the empty set, it gives us a map $\mathrm{tr}:A\to k$ which we call the counit or Frobenius trace.

Theorem. A commutative algebra with counit $(A,\mathrm{tr})$ arises from a TQFT if and only if $\mathrm{tr}$ kills no left ideal of $A$.

Posted by Scott Carnahan in differential geometry, low-dimensional topology, mathematical physics, QFT. comments closed

Edward Witten gave two talks at MIT last week. The first was on gauge theory and wild ramification – very similar to earlier work he did with Kapustin and Gukov on geometric Langlands, but with some clever use of nineteenth century technology (namely, Stokes matrices) to deal with irregular singularities. I won't say much about it, except to mention that his use of the term "wild ramification" employs a tacit conjectural dictionary between irregularity for differential equations and the Swan conductor for Galois representations. The second talk was on some calculations in pure 3D gravity he did with Alex Maloney, and even though I didn't understand much of it, I'm going to write about it. Perhaps people with more background in physics or three-manifold topology can make illuminating comments and corrections.
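The "kills no ideal" condition in the TQFT theorem above is exactly nondegeneracy of the pairing tr(ab), and that is easy to probe on a toy example. The sketch below is my own illustration (not from the post), using A = k[x]/(x^2) with basis {1, x}: the trace tr(a + bx) = b kills no ideal and its Gram matrix is invertible, while tr(a + bx) = a kills the ideal (x) and the pairing degenerates.

```python
# Toy check of the Frobenius condition for A = k[x]/(x^2), basis (1, x).
# "Arises from a TQFT" requires the Gram matrix of <a, b> = tr(a*b)
# to be nondegenerate.  Illustration only, not the blog's own code.
from fractions import Fraction

# Multiplication table in the basis (1, x): entry (i, j) is the
# coordinate vector of e_i * e_j.  Note x * x = 0 in k[x]/(x^2).
mult = {
    (0, 0): (1, 0),  # 1*1 = 1
    (0, 1): (0, 1),  # 1*x = x
    (1, 0): (0, 1),  # x*1 = x
    (1, 1): (0, 0),  # x*x = 0
}

def gram(tr):
    """2x2 matrix of tr(e_i * e_j) for a linear functional tr."""
    return [[tr(mult[(i, j)]) for j in range(2)] for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

good_tr = lambda v: Fraction(v[1])  # tr(a + b x) = b: kills no ideal
bad_tr = lambda v: Fraction(v[0])   # tr(a + b x) = a: kills the ideal (x)

print(det2(gram(good_tr)))  # -1: nondegenerate, so Frobenius
print(det2(gram(bad_tr)))   # 0:  degenerate, not Frobenius
```

The first trace is the standard Frobenius structure on the dual numbers; the second fails precisely because the ideal generated by x sits in the radical of the pairing.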
Count till current row

• Hi,

Let's say that in the PowerPivot window I have a single column of names. I would like to write a calculated column formula which will determine the occurrence of each name from the first entry till the current entry. I wrote two calculated column formulas, but they return an incorrect result. Please help.

Regards, Ashish Mathur
Microsoft Excel MVP
www.ashishmathur.com
Saturday, September 07, 2013 3:01 AM

All replies

• You can add another condition to your FILTER so that the current row is only compared to rows equal to or less than it in the sequence. You seem to be expecting your list of names to be in some order. A rownumber or date (with tiebreakers) would work well. If that doesn't help, post some sample rows from your table to make it clearer what you're working with.

Brent Greenwood, MS, MCITP, CBIP // Please mark correct answers and helpful posts // http://brentgreenwood.blogspot.com
Saturday, September 07, 2013 8:42 PM

• Hi,

Your solution would require me to add another column in my base data with row numbers. I do not want to add another column in my base data, although I am OK with using additional columns (calculated columns) in the Power Query window.
Given the list of names as shown above, I simply want to get the result as 1, 1, 1, 2, 3, 2, 4. I am OK with getting this result after using additional Pivot Table calculated columns, but I do not want to add another column in my base data. Thank you.

Regards, Ashish Mathur
Microsoft Excel MVP
www.ashishmathur.com
Monday, September 09, 2013 12:38 AM

• Hi,

Let me share a better description of my problem. I have a two-column table as follows:

Ashish 101
Mahesh 10
Rajesh 11
Ashish 234
Ashish 234
Rajesh 34
Ashish 101

Now my objective is to sum up the top 3 values for each name. So this is what I did:

Step 1: I first wrote a calculated item formula to compute the rank.
Step 2: I then wrote a calculated field formula to sum the top 3 values per name.

The result for Ashish is 670 (the sum of all values for Ashish), whereas it should actually be 569 (234 + 234 + 101). I am getting an incorrect result for Ashish because the rank assigned to the various instances of Ashish is 2, 1, 1, 2. Since all these numbers are less than 3, it sums up all the amounts of Ashish. I am somehow trying to figure out a way to assign different rank values to Ashish even if the amounts are the same. Hope this is a much better description. Thank you for your help.

Regards, Ashish Mathur
Microsoft Excel MVP
www.ashishmathur.com
Monday, September 09, 2013 1:13 AM

• You could create a calculated column as =RAND() and call it [MyRandColumn]; then you can use a calculated measure over it to get the TOP 3 only. Not very elegant, but it should solve your issue.

- www.pmOne.com -
Monday, September 09, 2013 8:45 PM

• Hi,

Thank you for replying. Taking a cue from your solution, this is what I did:

1. I created a calculated column called random by using =RAND().
2. I then created another calculated column called rank. This second calculated column produced different ranks for the same amounts of Ashish.
3. To sum the top 3 amounts for each name, I used a calculated field formula in PowerPivot.

The problem (I think) is that the random values generated in step 1 may or may not be unique, i.e. for both values of Ashish 234, the random values generated by =RAND() could be the same. If that happens, then the RANK calculated column will generate the same rank for the same amount. So is there a way to generate unique random numbers in step 1? Thank you.

Regards, Ashish Mathur
Microsoft Excel MVP
www.ashishmathur.com
Tuesday, September 10, 2013 1:52 AM

• Yes, it could (theoretically) happen that RAND() generates the same number twice for two different rows, though I do not think this would ever happen in practice. The problem is that your requirement and/or data do not really follow any pattern. Usually you would have some other column to distinguish the different rows for [Names] (e.g. an OrderNumber, Date, SurrogateKey, etc.), which could replace the artificial RAND() column; as you do not have any of these, it is also hard to bring your rows into any specific order, and that's why we need the workaround using RAND().

- www.pmOne.com -
Tuesday, September 10, 2013 7:22 AM

• Thank you for reconfirming.

Regards, Ashish Mathur
Microsoft Excel MVP
www.ashishmathur.com
Tuesday, September 10, 2013 7:33 AM
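For reference, the two computations in this thread -- the running occurrence count (1, 1, 1, 2, 3, 2, 4) and the per-name top-3 sum (569, not 670, for Ashish) -- are single-pass operations. A plain-Python sketch of the intended logic (not DAX), using the sample table from the second post:

```python
from collections import defaultdict

def running_counts(names):
    """Occurrence number of each name from the first row up to the current row."""
    seen = defaultdict(int)
    out = []
    for name in names:
        seen[name] += 1
        out.append(seen[name])
    return out

names = ["Ashish", "Mahesh", "Rajesh", "Ashish", "Ashish", "Rajesh", "Ashish"]
print(running_counts(names))  # [1, 1, 1, 2, 3, 2, 4]

# The later "sum of the top 3 amounts per name" question, same table:
rows = [("Ashish", 101), ("Mahesh", 10), ("Rajesh", 11),
        ("Ashish", 234), ("Ashish", 234), ("Rajesh", 34), ("Ashish", 101)]
per_name = defaultdict(list)
for name, amount in rows:
    per_name[name].append(amount)
top3 = {n: sum(sorted(v, reverse=True)[:3]) for n, v in per_name.items()}
print(top3["Ashish"])  # 569 -- ties are kept apart, unlike the failing rank
```

Sorting the amounts positionally is what sidesteps the tie problem that the RANK column runs into: two equal amounts still occupy two distinct slots.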
Long division

Date: Tue, 15 Nov 94 10:09:04 PST
From: Keith Averell
Subject: math

Dear Dr. Math,

Could you help us with one of our third grade really hard math problems? Here it is: 7626614/255 = ? Thanks for your help!

Mr. Averell's third grade class

Date: Tue, 15 Nov 1994 19:50:56 -0500 (EST)
From: Dr. Sydney
Subject: Re: math

Dear Mr. Averell's 3rd Grade Class:

Thanks for writing to Dr. Math!! We are glad to hear from you. Do you know how to do long division? Your problem is simpler if you treat it as a long division problem: finding 7626614/255 is the same as dividing 7626614 by 255 under the long division bracket.

Doing long division with numbers that are more than one digit can be tricky, but not impossible. How many times does 255 go into 7? None, so next we ask how many times does 255 go into 76? None, so how many times does 255 go into 762? AHA! Not none this time! 255 goes into 762 2 times with a remainder of 252. So, we write 2 as the first digit of the quotient, with 252 left over. Now we bring down the next digit, 6, and repeat the same procedure. How many times does 255 go into 2526? Do you see how you would continue? See if you can figure out the rest of the problem yourselves, and feel free to write back if you have any more questions on this or anything else. Have fun!
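Carrying Dr. Sydney's procedure through to the end gives a quotient of 29908 with remainder 74. For anyone curious, here is a short Python sketch of exactly that digit-by-digit loop:

```python
def long_division(dividend: int, divisor: int):
    """Digit-by-digit long division, as described above: bring down one
    digit at a time and ask how many times the divisor fits."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring down the next digit
        quotient_digits.append(remainder // divisor)  # "how many times?"
        remainder %= divisor                     # what is left over
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(7626614, 255))  # (29908, 74)
```

The first few iterations match the letter exactly: 7 and 76 give quotient digit 0, then 762 gives digit 2 with 252 left over, then bringing down the 6 gives 2526, and so on.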
AUTOMORPHISM for 12 year olds

The author does a superb job, particularly when mapping a permutation domain to a binary domain in the first example. The three tenets are clearly motivated: that the closure notion is sustained between the group G and the subgroup H even when the group operators are distinct, as in f3; that the set types of G and H may or may not be the same; and that epi-/mono-/iso-morphism build up much like sur-/inj-/bi-jection build up.

As I remember, we got a quick lesson on it when I was 13/14. I can remember the room now. We were our (new) math teacher's first ever class (and he actually took us all the way through to 18). It's just so clearly laid out in this article, without the faffy stuff from textbooks or research papers. I think I'll summarize what he captured about ideals, kernels, and co-kernels too.

Once one thinks of H in terms of parity matrices for dual spaces, and then sum-product algorithms and conditional probabilities for inference, it all becomes rather more interesting!
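The epi-/mono- versus sur-/inj- analogy, and the kernel idea, can be made concrete with two tiny homomorphisms between cyclic groups. This is a sketch of my own, not the article's f3 example:

```python
# Two homomorphisms between cyclic groups, illustrating the
# epimorphism ~ surjection and monomorphism ~ injection parallel.

def is_hom(f, n, m):
    """Check that f : Z_n -> Z_m respects addition (mod n / mod m)."""
    return all(f((a + b) % n) == (f(a) + f(b)) % m
               for a in range(n) for b in range(n))

epi = lambda x: x % 3         # Z_6 -> Z_3: surjective, not injective
mono = lambda x: (2 * x) % 6  # Z_3 -> Z_6: injective, not surjective

assert is_hom(epi, 6, 3)
assert {epi(x) for x in range(6)} == {0, 1, 2}        # onto: an epimorphism
assert is_hom(mono, 3, 6)
assert len({mono(x) for x in range(3)}) == 3          # one-to-one: a monomorphism

# The kernel the post alludes to: everything epi sends to the identity.
print({x for x in range(6) if epi(x) == 0})  # {0, 3}, a subgroup of Z_6
```

The kernel {0, 3} is exactly the data needed to rebuild Z_3 as a quotient of Z_6, which is where the ideals/kernels/co-kernels story the post mentions begins.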
Group actions

July 4th 2009, 06:00 AM  #1
May 2009

Hey,

We can define a group action in two ways:

1) A group G is said to act on a set X if there exists a group homomorphism ψ : G --> S(X) [the symmetric group of X].

2) Equivalently, a group G acts on a set X if there is a map G x X --> X which assigns to each ordered pair <g,x> an element g.x, such that:

For all x Є X, e.x = x;
(h.g).x = h.(g.x), where h, g Є G and x Є X.

How are these two definitions equivalent?

July 4th 2009, 04:12 PM  #2

$(1)\ \implies\ (2)$: Consider the map $\phi:G\times X\to X,\ \phi((g,x))=\psi(g)(x).$

$(2)\ \implies\ (1)$: For each $g\in G,$ define $\psi_g:X\to X$ by $\psi_g(x)=g\cdot x$ for all $x\in X.$ Then $\psi_g\in S(X)$ for all $g\in G$ ($\psi_g$ is a bijection from $X$ to itself), and the required homomorphism $\psi:G\to S(X)$ is defined by $\psi(g)=\psi_g$ for all $g\in G$ ($\psi_{hg}=\psi_h\psi_g$ for all $h,g\in G$).
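The two axioms in definition (2) are concrete enough to machine-check on a small example. Here is a sketch (my own illustration) with the cyclic group Z_4 acting on X = {0, 1, 2, 3} by rotation:

```python
# Check the group-action axioms for Z_4 acting on X = {0, 1, 2, 3}
# by rotation: g . x = (x + g) mod 4.  Illustration only.
n = 4
G = range(n)   # Z_4, with group operation addition mod n (identity e = 0)
X = range(n)

def act(g, x):
    return (x + g) % n

# Axiom 1: e . x = x for all x.
assert all(act(0, x) == x for x in X)

# Axiom 2: (h * g) . x = h . (g . x), where h * g = (h + g) mod n.
assert all(act((h + g) % n, x) == act(h, act(g, x))
           for h in G for g in G for x in X)

print("Z_4 acts on X: both axioms hold")
```

In the language of definition (1), each `act(g, _)` is the permutation ψ(g) of X, and axiom 2 is exactly the homomorphism property ψ(hg) = ψ(h)ψ(g).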
Find a Harvey, IL Algebra 1 Tutor

...My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services.
12 Subjects: including algebra 1, calculus, algebra 2, geometry

...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside...
13 Subjects: including algebra 1, calculus, geometry, statistics

...Among other honors, I won the 1L Award for Excellence in Oral Advocacy, which covers not just my speaking ability but also the prep work that goes behind it. You'll be in good hands. I have a unique approach to reading on the SAT, which I honed and perfected for years on higher-level tests like the LSAT as well.
28 Subjects: including algebra 1, English, reading, physics

...Thank you for considering me as a suitable tutor. My experience in tutoring began during my undergraduate years, when I tutored in the university writing center. At the writing center I guided and coached students in their various writing projects, which include essays and research papers.
61 Subjects: including algebra 1, reading, English, geometry

I am an experienced, professional educator who has worked with students mainly in the area of mathematics. I have guided students to success from grades pre-K through graduate school. In addition, I also work with test preparation, including district-wide tests, and will be working with students pre...
76 Subjects: including algebra 1, English, reading, Spanish
probability question

November 20th 2010, 07:10 AM  #1

Just a quick question!

For $n \geq 1$, let $X_1,X_2,\ldots,X_n$ denote $n$ independent and identically distributed random variables according to $X$, with cdf $F_X(y)$. Let $Y = \max\{X_1,X_2,\ldots,X_n\}$. Show that the cdf $F_Y(y)$ of $Y$ satisfies:

$F_Y(y)=(F_X(y))^n, \quad y\in \Re$

(That is meant to be "y in the real numbers", but I can't find what the LaTeX is for it.)

November 20th 2010, 09:30 AM  #2
Grand Panjandrum
Nov 2005

$P(Y\le y)=P((X_1\le y)\wedge (X_2\le y) \wedge \cdots \wedge (X_n\le y))=P(X_1\le y)\, P(X_2\le y) \cdots P(X_n\le y)$

by independence; and since the $X_i$ are identically distributed, each factor equals $F_X(y)$, so $F_Y(y)=(F_X(y))^n.$
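The identity is easy to sanity-check on a discrete example. The sketch below (my own illustration) takes X to be a fair six-sided die and verifies, by exact enumeration, that the cdf of the maximum of n dice equals F_X(y)^n:

```python
from fractions import Fraction
from itertools import product

def cdf_of_max(n, y, sides=6):
    """Exact P(max of n fair dice <= y), by brute-force enumeration."""
    total = sides ** n
    favorable = sum(1 for roll in product(range(1, sides + 1), repeat=n)
                    if max(roll) <= y)
    return Fraction(favorable, total)

n = 3
for y in range(1, 7):
    Fx = Fraction(y, 6)                 # F_X(y) for a single die
    assert cdf_of_max(n, y) == Fx ** n  # F_Y(y) = F_X(y)^n, exactly
print("F_Y(y) == F_X(y)**n for every y")
```

The enumeration counts exactly the tuples in which every coordinate is at most y, which is the event {X_1 <= y} ∧ ... ∧ {X_n <= y} from the proof above.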
weight lifting miracle today Re: weight lifting miracle today Yes, I do. Enjoy the salmon. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today This candlepin/tenpin bowling system works great for silver #47. silver is 107.8682 Took me a whole day once to comprehend the tenpin structure in my head well. Par example: the 7-8-5-4 diamond shape, likewise the 0-9-5-6 diamond shape. And the 1-2-5-3 diamond shape. And the hexagon: 2-4-8-9-6-3 igloo myrtilles fourmis Re: weight lifting miracle today That is amazing! 41:47 for me. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today It sounds terrific and I'm on 54:59 igloo myrtilles fourmis Re: weight lifting miracle today It is good to do math while listening! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Keeps the happiness flowing even better than just numbers alone. It was cloudy today here, so this is nice. Music instead of sunlight. igloo myrtilles fourmis Re: weight lifting miracle today It is cloudy and grey here too. Looks like rain. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Here's what I'm listening to now. 
Yang Bartolotti, violin soloist in "Winter" Movement #2 at Saddleback's Feast of Lights Concert It downloaded very slowly, so I've heard the beginning about 10 times while it downloaded and I changed the setting gear to 280? instead of 360. This is only 1/12th of the whole Four Seasons. It is the middle of the winter, the slow part. Each season has 3 parts: fast, slow, fast. Last edited by John E. Franklin (2012-01-26 09:51:24) igloo myrtilles fourmis Re: weight lifting miracle today I am still in the other one but rapidly coming to the conclusion. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today That long one actually might be missing the very last act, according to a question by a viewer. My cat Princess is sitting up beside the computer watching my every move and enjoying the light beside the computer. It is an old 60 watt bulb, but it heats her up when she's right under it. I have two of the new Phillips 12 watt LED bulbs in the house and we like them. They are yellow plastic when shut off, and the light shines thru them. They were $30 a piece, no $39 actually. They are as bright as a 90 watt bulb old style. Just darker than a 100 I mean. I have a very short mohawk now, did it myself today. igloo myrtilles fourmis Re: weight lifting miracle today I have heard good things about those bulbs. How do you like them? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today They give me no bad feelings, like the CFL do, or used to two years ago when we had to remove them from the house. 
The LED ones I mentioned seem to be voltage rectified to DC in the socket part that warms up a bit, but this creates no blinking of the LEDs at a rate of 60 times a second in the USA. Actually, some of the old incandescents, depending on the company, some, only a small percentage have bothered me, but these Phillips LED ones seem terrific. I've only tried the yellow ones with the three sections. There are three grooves in them at 120 degrees and 240 degrees and 360/0 degrees. Really happy with them and only 12 watts they say. Sometimes I leave one on for several hours for that wattage, even when I'm asleep sometimes, but because I'm not careful to shut the one beside my floor-bed right away before I close my eyes. I sleep on matting I designed on the floor. It is made of many towels and a sleeping bag opened up and five pillows for the shoulders and head. Everything is washable if the cat peas on it. I got a great deal on four pillow for $3 bucks each!! The pillow covers were like $1.50 more than that each. igloo myrtilles fourmis Re: weight lifting miracle today I haven't used CFL for a while. I hear these get a little on the hot side and need to be kept outside of an enclosed lighting fixture. I use a cot as my bed. Easy to move around ad very stiff, helps my back. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Yes I have green canvas cot too, but I built seven clothes lines over it, with four vertical posts tied to the four corners, and some other connecting wickits I got from after an election, by collecting the signs on the side of the road. The huge wickets are used to make connections to the two posts on each end of the cot. Then clotheslines cross on two levels. 
Sometimes I still use the 230 volt clothes dryer downstairs, but usually I just wash in the washer and then dry on above the cot. Sometimes, I even wash my clothes in the bathtub. That's a good workout. It's a camping cot made for up to near 300 lbs. Not a cheap one, about $70 I remember. Right now a cheap violin and guitar are on it in cases under clothes when they are there. Right now it is empty on the lines. About time to do another wash, maybe tomorrow. igloo myrtilles fourmis Re: weight lifting miracle today Mine is only rated for 250 lbs. That is more than enough for me. I was going to get one of those hanging sleeping bags and attach it to the corners of the room. Maybe next time. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today youtube is choppy after 5pm I noticed ESTime today anyway. Maybe higher volumes of people. Can't even download some songs at all more than 22 seconds. igloo myrtilles fourmis Re: weight lifting miracle today Yea, it gets an increased load here too. Also your provider might be cutting some of your bandwidth down. I have a chore to do see you in a bit. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Okay, I'll go study the elements. igloo myrtilles fourmis Re: weight lifting miracle today Hi John; See you later. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. 
Re: weight lifting miracle today Well, I thought I'd take a quick look around the forum before I head to the sack. I tried to memorize #57 Lanthanum and #59 Praseodymium. Let's see if I can recite them without papers... Hmmm... 140.90765 and 138.90547. I think that's right. #59 and #57 in that order. Now I'll go get the papers and check... Yup, it looks good!! So I remembered them for over an hour, so they might be getting into longterm memory soon. igloo myrtilles fourmis Re: weight lifting miracle today Hi John; After a good nights rest they will sink in. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Lutetium works nicely in diagonal lines on the ten-pin theme: 174.9668 I think that's what it is, I'll go check... Yup, that's just right. At least from 2009 to 2011, twenty may change this summer a little. igloo myrtilles fourmis Re: weight lifting miracle today Hola John; How about thorium? Now that is a name for an element. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: weight lifting miracle today Thorium is one of the only 3 radioactive elements above or equal to #84 polonium, that IUPAC mentions with a weight, probably because the longest isotope half-life is greater than 10,000 years. Thorium is about, let me guess now first: about 231.079, probably wrong. I'll go check my chart: Okay the actual value is 232.03806. I haven't tried to memorize thorium, or #91 Protactinium much. #90 Th, #91 Pa, and #92 U are mentioned by IUPAC. igloo myrtilles fourmis Re: weight lifting miracle today Okay, John thanks for the info. In mathematics, you don't understand things. 
You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=199727","timestamp":"2014-04-16T16:23:04Z","content_type":null,"content_length":"40910","record_id":"<urn:uuid:6eff0019-9d5a-4207-b301-606d797c9412>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
- In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, 1991

"... Concurrent programming enjoys a proliferation of languages but suffers from the lack of a general method of language comparison. In particular, concurrent (as well as sequential) programming languages cannot be usefully distinguished based on complexity-theoretic considerations, since most of them ..."

Cited by 12 (1 self) Add to MetaCart

Concurrent programming enjoys a proliferation of languages but suffers from the lack of a general method of language comparison. In particular, concurrent (as well as sequential) programming languages cannot be usefully distinguished based on complexity-theoretic considerations, since most of them are Turing-complete. Nevertheless, differences between programming languages matter, else we would not have invented so many of them. We develop a general method for comparing concurrent programming languages based on their algebraic (structural) complexity, and, using this method, achieve separation results among many well-known concurrent languages. The method is not restricted to concurrent languages. It can be used to compare the algebraic complexity of abstract machine models, other families of programming languages, logics, and, more generally, any family of languages with some syntactic operations and a notion of semantic equivalence. The method can also be used to compare the algebraic complexity of families of operations within a language or across languages. We note that using the method we were able to compare languages and computational models that do not have a common semantic basis.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4447085","timestamp":"2014-04-17T15:52:04Z","content_type":null,"content_length":"12999","record_id":"<urn:uuid:e92c9188-4391-4b44-b20e-cc09ce7c9091>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Chapter 1. Adaptive Markov Chain Monte Carlo: Theory and Methods. Yves Atchadé (1), Gersende Fort and Eric Moulines (2), Pierre Priouret (3)

1.1 Introduction. Markov chain Monte Carlo (MCMC) methods allow one to generate samples from an arbitrary distribution π known up to a scaling factor; see Robert and Casella (1999). The method consists in sampling a Markov chain {X_k, k ≥ 0} on a state space X with transition probability P admitting π as its unique invariant distribution, i.e. πP = π. Such samples can be used e.g. to approximate π(f) = ∫_X f(x) π(dx) for some π-integrable function f : X → R, by (1/n) Σ_{k=1}^{n} f(X_k). (1.1)
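As a toy illustration of the estimator (1.1), here is a random-walk Metropolis sampler. This is my own minimal sketch, not code from the chapter; the Gaussian target, proposal step size, chain length and seed are all assumptions made for the example.

```python
import math
import random

def metropolis(logpi, x0, n, step=1.0):
    """Random-walk Metropolis: a simple MCMC sampler whose chain has the
    (unnormalised) log-density `logpi` as its invariant distribution."""
    x, chain = x0, []
    for _ in range(n):
        y = x + random.gauss(0.0, step)                    # propose
        accept_prob = math.exp(min(0.0, logpi(y) - logpi(x)))
        if random.random() < accept_prob:                  # accept/reject
            x = y
        chain.append(x)
    return chain

random.seed(1)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)    # target: N(0, 1)
# the estimator (1.1) with f(x) = x^2; should come out close to E[X^2] = 1
est = sum(x * x for x in chain) / len(chain)
print(round(est, 2))
```

The average over the chain plays the role of (1/n) Σ f(X_k) in (1.1); with a longer chain (larger n) the estimate concentrates around π(f).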
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/505/1083487.html","timestamp":"2014-04-21T13:36:40Z","content_type":null,"content_length":"7699","record_id":"<urn:uuid:a7a8e0bf-4a70-47e8-9967-39e6bb0ee972>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating Wind Speed And Direction From a Doppler Wind Image

The Max/Min Method For Estimating Wind Speed and Direction

It should be emphasized before starting that the Max/Min Method for estimating Wind Speed and Direction relies on a major simplifying assumption - that the wind is uniform in both speed and direction at all points around the radar. This is never exactly true, but is often approximately true, at least in close proximity to the radar. The method is as follows:

1. Locate the point where you want to estimate the wind speed - we've chosen point P in our example below
2. Draw a circle through the point, centred on the radar
3. Find the point on the circle with the strongest inbound Doppler velocity - we've labelled ours Q
4. Find the point on the circle with the strongest outbound Doppler velocity - we've labelled ours R
5. R should lie exactly opposite to Q on the circle. If Q and R are not opposite (or at least approximately so), it indicates that our assumptions are not valid (ie the wind is not uniform at all points around the radar), and consequently the method will not give a reliable result. It may not be possible to estimate the wind speed and direction in this case.
6. If R is opposite Q, then draw an arrow from strongest inbound (Q) to strongest outbound (R). This arrow is the wind direction at point P (and in fact all points on the circle - remember the method assumes that the wind is the same at all these points). Note that all Bureau radar images use the convention that true North is at the top of the image and East is to the right of the image.
7. Estimate the strength of the Doppler velocity at point Q by comparing the colour shown there to the velocity palette. Do the same for point R. These should be the same (or at least nearly the same). If not, then our assumptions are not valid, and the method will not give a reliable result.
8. The strength of the Doppler velocity at Q (or at R) is the wind strength at point P (and in fact all points on the circle - remember the method assumes that the wind is the same at all these points).

The wind direction in this simple example is a Westerly (blowing from west to east).

Figure 1. Estimating wind direction using the Max/Min Method.

The wind speed in this simple example is 50 km/h.

Figure 2. Estimating wind speed using the Max/Min Method.

The Zero Isodop Method for Estimating Wind Direction

One feature that is often easy to identify on a Doppler velocity image is the zero isodop. An isodop is a line connecting points with equal Doppler velocity. The zero isodop is a line joining places with zero Doppler velocity. It shows up as a line of white separating a region of inbound (blue) velocities from a region of outbound (orange) ones. The zero isodop for our simple uniform Westerly wind example is shown below.

Figure 3. The zero isodop for a simple uniform Westerly wind example.

In the real world, the zero isodop is usually a curved line instead of a straight line, but the technique described below is still valid. The Zero Isodop Method for Estimating Wind Direction at any point on the Zero Isodop is as follows:

1. Locate a point on the zero isodop - we've chosen point X in our example
2. Draw a radial from the radar to the point of interest - this is the arrow from R (the radar) to X in our example
3. Draw another arrow perpendicular (at right angles) to the first arrow and orient it so that it points from the inbound side of the zero isodop to the outbound side. In our example the arrow must run from Y to Z, not Z to Y.
4. This second arrow (Y to Z) represents the wind direction at point X. Again, the convention used for all Bureau radar images is that true North is at the top of the image and East is to the right of the image.

Figure 4. The zero isodop method for estimating wind direction.

Note that the wind direction arrow is to be drawn perpendicular to the radial from the radar to the point of interest, not perpendicular to the zero isodop. In our simple example the wind direction is also perpendicular to the zero isodop, but this is not always the case!

Real-Life Example

To learn more about using these two techniques to estimate the wind speed and direction from a Doppler wind image, see this example from the Buckland Park radar in Adelaide.
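The Max/Min Method can be sketched numerically. The code below is my own illustration, not from the Bureau: the simulated uniform wind field, the 10-degree sampling grid, and the 10-degree opposition tolerance are all assumptions made for the example.

```python
import math

def max_min_method(doppler):
    """doppler: dict mapping bearing (degrees clockwise from true North)
    to radial Doppler velocity (outbound positive, inbound negative).
    Returns (wind speed, bearing the wind blows TOWARD)."""
    toward = max(doppler, key=doppler.get)   # strongest outbound (point R)
    away = min(doppler, key=doppler.get)     # strongest inbound (point Q)
    # sanity check: Q and R should be (roughly) diametrically opposite,
    # otherwise the uniform-wind assumption fails
    assert abs(((toward - away) % 360) - 180) < 10, "wind field not uniform"
    return doppler[toward], toward

# simulate a uniform 50 km/h Westerly: blows TOWARD the east (bearing 90);
# radial component at bearing b is V * cos(b - 90)
V, toward_true = 50.0, 90.0
field = {b: V * math.cos(math.radians(b - toward_true)) for b in range(0, 360, 10)}
speed, toward = max_min_method(field)
print(speed, toward)  # -> 50.0 90
```

A bearing of 90 (toward the east) corresponds to the "Westerly" of the worked example, since winds are conventionally named for the direction they blow from.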
{"url":"http://www.bom.gov.au/australia/radar/about/estimating_wind.shtml","timestamp":"2014-04-21T04:41:25Z","content_type":null,"content_length":"31177","record_id":"<urn:uuid:32d573c8-5f00-482e-b782-36f58589e180>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
How do you find the horizontal asymptote..

It says in my textbook that "to find horizontal asymptote divide numerator and denominator by x & investigate x --> +/- infinity" .. I don't understand the investigation part :S

1) Put equation or function in standard form. 2) Remove everything except the biggest exponents of x found in the numerator and denominator.

One way to find the horizontal asymptote: Take the highest degree of the numerator and the denominator and divide. For example, -3x^2/x^2 = -3. Horizontal asymptote is y=-3. Now if you do that and it turns out to be x/x^2 (simplifying to 1/x), your horizontal asymptote is y=0. If it ends up being x^2/x (simplifying to x/1), then you have no horizontal asymptote.

your book is stupid, if you don't mind me saying so

LOOOLL agreed

if the degree of the numerator is larger than the degree of the denominator, there is no horizontal asymptote; if the degree of the numerator is smaller than the degree of the denominator, the horizontal asymptote is \(y=0\); if the degrees are the same, the horizontal asymptote is the ratio of the leading coefficients

yaah I know that way, but on my test I have to show how I got the horizontal asymptotes by doing what it says in the textbook

then i guess you have to do it

set your limit to +/- infinity and solve; if you come out with a number, that is your horizontal asymptote
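The degree-comparison rule from the thread can be written as a tiny function. This is a sketch of mine, not something posted in the thread; polynomials are represented as coefficient lists, highest degree first.

```python
def horizontal_asymptote(num, den):
    """num, den: polynomial coefficient lists, highest degree first.
    Returns the horizontal asymptote's y-value, or None if there isn't one."""
    dn, dd = len(num) - 1, len(den) - 1
    if dn > dd:
        return None                # numerator degree larger: no horizontal asymptote
    if dn < dd:
        return 0.0                 # denominator dominates: y = 0
    return num[0] / den[0]         # equal degrees: ratio of leading coefficients

print(horizontal_asymptote([-3, 0, 0], [1, 0, 0]))  # -3x^2 / x^2  -> -3.0
print(horizontal_asymptote([1, 0], [1, 0, 0]))      # x / x^2      -> 0.0
print(horizontal_asymptote([1, 0, 0], [1, 0]))      # x^2 / x      -> None
```

The three branches mirror the three cases in the answer above, and the same results fall out of the textbook's limit approach as x goes to +/- infinity.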
{"url":"http://openstudy.com/updates/508891a7e4b0524190914564","timestamp":"2014-04-18T13:50:52Z","content_type":null,"content_length":"47587","record_id":"<urn:uuid:e8a1feea-4cd8-453b-9f7b-4ad42175cae5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
7.3: Solving Linear Systems by Elimination through Addition or Subtraction

Created by: CK-12

Learning Objectives

At the end of this lesson, students will be able to:
• Solve a linear system of equations using elimination by addition.
• Solve a linear system of equations using elimination by subtraction.
• Solve real-world problems using linear systems by elimination.

Terms introduced in this lesson: method of elimination

Teaching Strategies and Tips

Use Example 1 to motivate the elimination method.
• To find the cost of one banana, it makes sense to subtract the equations.
• In general, equations can be added or subtracted so that a variable cancels.

Show that in Example 2, equations can be added column-wise, each column representing a different variable. When the $x$ (or $y$) terms are opposites, that column adds to $0x$ (or $0y$) and the variable is eliminated.

In Example 3,
• Drawing a picture helps.
• Encourage students to label variables first. The two unknowns are the speed of the river and the speed of the canoe in still water.
• Compare going downstream to walking on a moving walkway in an airport terminal. Travelers walk faster than usual as their speeds are boosted by the walkway. The opposite is true going upstream and against the moving walkway.

Error Troubleshooting

Remind students in Review Problem 3 to align the variables column-wise. In Review Problems 4, 6, 7, and 9, encourage students to put the equation being subtracted in parentheses. This way, it will be easier to remember to distribute the negative.

General Tip: When subtracting two equations, remind students not to forget to subtract the constants on the right side.
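For a quick numeric check of elimination answers, the method can be sketched in a few lines. This is my own sketch, not part of the CK-12 lesson; the example system is made up, and the code assumes the system has a unique solution (no zero pivots).

```python
def eliminate(eq1, eq2):
    """Solve {a1*x + b1*y = c1, a2*x + b2*y = c2} by elimination.
    Each eq is a tuple (a, b, c). Scales eq2 so the x-coefficients
    match, then subtracts to eliminate x."""
    a1, b1, c1 = eq1
    a2, b2, c2 = eq2
    k = a1 / a2                      # scale factor so x-terms cancel
    b = b1 - k * b2                  # subtract the scaled second equation
    c = c1 - k * c2
    y = c / b                        # remaining one-variable equation
    x = (c1 - b1 * y) / a1           # back-substitute
    return x, y

# 3x + 2y = 7 and 3x - y = 1: x-coefficients already match, so
# subtracting eliminates x directly (k = 1)
print(eliminate((3, 2, 7), (3, -1, 1)))  # -> (1.0, 2.0)
```

This mirrors the classroom procedure: scale (if needed), subtract so one column becomes zero, solve the remaining equation, then back-substitute.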
{"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/7.3/anchor-content","timestamp":"2014-04-20T14:30:21Z","content_type":null,"content_length":"112410","record_id":"<urn:uuid:e824cdbd-636a-46e4-add8-f8990a39874e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Baytown ACT Tutor

Find a Baytown ACT Tutor

I have taught math and science as a tutor since 1989. I am a retired state certified teacher in Texas, in both composite high school science and mathematics. I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. 35 Subjects: including ACT Math, chemistry, physics, calculus

...I can almost guarantee that you will have “Aha! So that’s how it works!” moments as algebra becomes more familiar and understandable. Algebra 2 builds on the foundation of algebra 1, especially in the ongoing application of the basic concepts of variables, solving equations, and manipulations such as factoring. 20 Subjects: including ACT Math, writing, algebra 1, algebra 2

...I am an excellent communicator with a desire to help students achieve their educational goals. I prefer using some internet instructional tools if available, but I spend most of the time guiding my students through the process of identifying specifically what is being asked, what are the import... 9 Subjects: including ACT Math, geometry, algebra 1, precalculus

...Let me guess: you're getting ready to run the gauntlet of standardized tests that guard admission to the school of your dreams, and it feels like that dream where you walk into a class naked, without having studied, and with no number-2 pencil. We can beat that monster together. I'm a graduate ... 32 Subjects: including ACT Math, English, Spanish, writing

Howdy, I'm J.C., and I'd love to be your tutor! I'm a recent graduate of Texas A&M and received my degree in industrial engineering. I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. 17 Subjects: including ACT Math, reading, calculus, geometry
{"url":"http://www.purplemath.com/baytown_act_tutors.php","timestamp":"2014-04-17T13:42:22Z","content_type":null,"content_length":"23518","record_id":"<urn:uuid:5a8e2c60-fa6f-4708-8f2a-a820e072636c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Harvey, IL Algebra 1 Tutor

Find a Harvey, IL Algebra 1 Tutor

...My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services. 12 Subjects: including algebra 1, calculus, algebra 2, geometry

...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside... 13 Subjects: including algebra 1, calculus, geometry, statistics

...Among other honors, I won the 1L Award for Excellence in Oral Advocacy, which covers not just my speaking ability but also the prep work that goes behind it. You'll be in good hands. I have a unique approach to reading on the SAT, which I honed and perfected for years on higher-level tests like the LSAT as well. 28 Subjects: including algebra 1, English, reading, physics

...Thank you for considering me as a suitable tutor. My experience in tutoring began during my undergraduate years when I tutored in the university writing center. At the writing center I guided and coached students in their various writing projects, which include essays and research papers. 61 Subjects: including algebra 1, reading, English, geometry

I am an experienced, professional educator who has worked with students mainly in the area of mathematics. I have guided students to success from grades pre-k through graduate school. In addition, I also work with test preparation including district wide tests and will be working with students pre...
76 Subjects: including algebra 1, English, reading, Spanish
{"url":"http://www.purplemath.com/Harvey_IL_algebra_1_tutors.php","timestamp":"2014-04-21T15:12:24Z","content_type":null,"content_length":"24035","record_id":"<urn:uuid:276aa459-13e9-46a9-a235-fdb08d58a706>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 39

, 1996 "... In the past 20 years there has been tremendous progress in developing and analyzing parallel algorithms. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some of these algorithms are efficient only in a th ..."

Cited by 193 (9 self) Add to MetaCart

In the past 20 years there has been tremendous progress in developing and analyzing parallel algorithms. Researchers have developed efficient parallel algorithms to solve most problems for which efficient sequential solutions are known. Although some of these algorithms are efficient only in a theoretical framework, many are quite efficient in practice or have key ideas that have been used in efficient implementations. This research on parallel algorithms has not only improved our general understanding of parallelism but in several cases has led to improvements in sequential algorithms. Unfortunately there has been less success in developing good languages for programming parallel algorithms, particularly languages that are well suited for teaching and prototyping algorithms. There has been a large gap between languages

, 2006 "... The edit distance between two strings S and R is defined to be the minimum number of character inserts, deletes and changes needed to convert R to S. Given a text string t of length n, and a pattern string p of length m, informally, the string edit distance matching problem is to compute the smalles ..."

Cited by 58 (3 self) Add to MetaCart

The edit distance between two strings S and R is defined to be the minimum number of character inserts, deletes and changes needed to convert R to S. Given a text string t of length n, and a pattern string p of length m, informally, the string edit distance matching problem is to compute the smallest edit distance between p and substrings of t.
We relax the problem so that (a) we allow an additional operation, namely, substring moves, and (b) we allow approximation of this string edit distance. Our result is a near linear time deterministic algorithm to produce a factor of O(log n log* n) approximation to the string edit distance with moves. This is the first known significantly subquadratic algorithm for a string edit distance problem in which the distance involves nontrivial alignments. Our results are obtained by embedding strings into L1 vector space using a simplified parsing technique we call Edit

, 1992 "... We show how to construct an O(√n)-separator decomposition of a planar graph G in O(n) time. Such a decomposition defines a binary tree where each node corresponds to a subgraph of G and stores an O(√n)-separator of that subgraph. We also show how to construct an O(n^ε)-way decomposition tree ..."

Cited by 51 (7 self) Add to MetaCart

We show how to construct an O(√n)-separator decomposition of a planar graph G in O(n) time. Such a decomposition defines a binary tree where each node corresponds to a subgraph of G and stores an O(√n)-separator of that subgraph. We also show how to construct an O(n^ε)-way decomposition tree in parallel in O(log n) time so that each node corresponds to a subgraph of G and stores an O(n^{1/2+ε})-separator of that subgraph. We demonstrate the utility of such a separator decomposition by showing how it can be used in the design of a parallel algorithm for triangulating a simple polygon deterministically in O(log n) time using O(n/log n) processors on a CRCW PRAM. Keywords: Computational geometry, algorithmic graph theory, planar graphs, planar separators, polygon triangulation, parallel algorithms, PRAM model.

1 Introduction. Let G = (V, E) be an n-node graph. An f(n)-separator is an f(n)-sized subset of V whose removal disconnects G into two subgraphs G1 and G2 each...
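The basic edit distance defined in the edit-distance abstract above (inserts, deletes and changes only, without the substring moves that the paper adds) has a classic quadratic dynamic program. The sketch below is my own illustration of that baseline, not the paper's near-linear algorithm.

```python
def edit_distance(r, s):
    """Classic O(|r|*|s|) DP for the basic edit distance:
    minimum number of character inserts, deletes and changes
    needed to convert r into s."""
    m, n = len(r), len(s)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete everything
    for j in range(n + 1):
        d[0][j] = j                               # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if r[i - 1] == s[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete r[i-1]
                          d[i][j - 1] + 1,        # insert s[j-1]
                          d[i - 1][j - 1] + cost) # change or match
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # -> 3
```

The paper's contribution is precisely that this quadratic baseline can be beaten (approximately, and with moves allowed) in near-linear time.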
- Proceedings 22nd International Colloquium on Automata, Languages and Programming, 1995 "... We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On n-vertex input graphs, the algorithm works in O((log n)^2) time using O(n) operations on the EREW PRAM. We also give faster parallel algorithms with opti ..."

Cited by 33 (10 self) Add to MetaCart

We describe the first parallel algorithm with optimal speedup for constructing minimum-width tree decompositions of graphs of bounded treewidth. On n-vertex input graphs, the algorithm works in O((log n)^2) time using O(n) operations on the EREW PRAM. We also give faster parallel algorithms with optimal speedup for the problem of deciding whether the treewidth of an input graph is bounded by a given constant and for a variety of problems on graphs of bounded treewidth, including all decision problems expressible in monadic second-order logic. On n-vertex input graphs, the algorithms use O(n) operations together with O(log n log* n) time on the EREW PRAM, or O(log n) time on the CRCW PRAM.

, 1992 "... We present a simple and implementable algorithm that computes a minimum spanning tree of an undirected weighted graph G = (V, E) of n = |V| vertices and m = |E| edges on an EREW PRAM in O(log^{3/2} n) time using n+m processors. This represents a substantial improvement in the running time over the ..."

Cited by 31 (3 self) Add to MetaCart

We present a simple and implementable algorithm that computes a minimum spanning tree of an undirected weighted graph G = (V, E) of n = |V| vertices and m = |E| edges on an EREW PRAM in O(log^{3/2} n) time using n+m processors. This represents a substantial improvement in the running time over the previous results for this problem using at the same time the weakest of the PRAM models. It also implies the existence of algorithms having the same complexity bounds for the EREW PRAM, for connectivity, ear decomposition, biconnectivity, strong orientation, st-numbering and Euler tours.

- Ann. Rev. Comput. Sci, 1988 "... this paper and supplied many helpful comments. This research was supported in part by NSF grants DCR-85-11713, CCR-86-05353, and CCR-88-14977, and by DARPA contract N00039-84-C-0165. ..."

Cited by 29 (3 self) Add to MetaCart

this paper and supplied many helpful comments. This research was supported in part by NSF grants DCR-85-11713, CCR-86-05353, and CCR-88-14977, and by DARPA contract N00039-84-C-0165.

, 1992 "... We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A + n log n) expected work and optimal O(log n) time, where A is the number of inters ..."

Cited by 23 (0 self) Add to MetaCart

We describe randomized parallel algorithms for building trapezoidal diagrams of line segments in the plane. The algorithms are designed for a CRCW PRAM. For general segments, we give an algorithm requiring optimal O(A + n log n) expected work and optimal O(log n) time, where A is the number of intersecting pairs of segments. If the segments form a simple chain, we give an algorithm requiring optimal O(n) expected work and O(log n log log n log* n) expected time, and a simpler algorithm requiring O(n log n) expected work. The serial algorithm corresponding to the latter is among the simplest known algorithms requiring O(n log n) expected operations. For a set of segments forming K chains, we give an algorithm requiring O(A + n log n + K log n) expected work and O(log n log log n log* n) expected time. The parallel time bounds require the assumption that enough processors are available, with processor allocations every log n steps. Keywords: randomized, parallel, trapez...

- SIAM J. COMPUT, 1991 "... Parallel algorithms for several graph and geometric problems are presented, including transitive closure and topological sorting in planar st-graphs, preprocessing planar subdivisions for point location queries, and construction of visibility representations and drawings of planar graphs. Most of th ..."

Cited by 23 (11 self) Add to MetaCart

Parallel algorithms for several graph and geometric problems are presented, including transitive closure and topological sorting in planar st-graphs, preprocessing planar subdivisions for point location queries, and construction of visibility representations and drawings of planar graphs. Most of these algorithms achieve optimal O(log n) running time using n/log n processors in the EREW PRAM model, n being the number of vertices.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=814431","timestamp":"2014-04-18T11:39:50Z","content_type":null,"content_length":"34707","record_id":"<urn:uuid:13ac6f06-726d-4146-a849-5a93ae6c0986>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Uncertainty - how to calculate?

I am looking for the method to calculate the total uncertainty of measurements. When we do labs, we always have an independent and a dependent variable. So, for example, when we investigate "how temperature affects reaction xy" we use different temperatures (e.g. 5) and measure the reaction of xy by using a gas syringe (for example). We also do 3 trials for every temperature. I know the uncertainty of the thermometer (1) and the gas syringe (0.1).

What I am looking for is: if you take the average gas volume for every temperature, how do you calculate the uncertainty? I've read that you use the standard deviation, but I am not sure if that's the right method. Also, I want some kind of total uncertainty for all averages. In my raw data table, my uncertainty for all data would simply be "0.1" for the gas syringe. That would apply to all my measurements of gas. But, when I calculate the average gas volume and the corresponding uncertainties, the uncertainties could differ for each average. I tried out the standard deviation, and for the first average the uncertainty was 0.2 and for the second 0.1 (I just made that up for this case).

Now I don't want 5 individual tables for my 5 averages (because I have 5 temperatures). I want one table with all my 5 temperatures and the corresponding gas volume averages. Also I want the total uncertainty for all my averages together. How do I calculate this total uncertainty? I hope I am not confusing...
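One common convention (a sketch of mine, and only one of several conventions taught, not a definitive answer to the question): take the standard deviation of the trials, convert it to a standard error of the mean, and combine that in quadrature with the instrument's reading uncertainty. The example trial values are made up.

```python
import math
import statistics as stats

def average_with_uncertainty(trials, instrument_u):
    """Mean of repeated trials with a combined uncertainty:
    standard error of the mean (sample stdev / sqrt(N)),
    added in quadrature to the instrument uncertainty."""
    mean = stats.mean(trials)
    if len(trials) > 1:
        sem = stats.stdev(trials) / math.sqrt(len(trials))
    else:
        sem = 0.0                      # single reading: scatter unknown
    return mean, math.hypot(sem, instrument_u)

# three gas-volume trials (made-up values) at one temperature,
# with a 0.1 syringe reading uncertainty
print(average_with_uncertainty([24.3, 24.5, 24.4], 0.1))
```

Under this convention each temperature's average gets its own combined uncertainty, which can all sit in one table; quoting the largest of them (or propagating them through any further calculation) is one common way to report a single "total" uncertainty.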
Haciendas El Zorzal, PR Math Tutor

...I love working with students and seeing them make progress. I am mostly available on the weekends. If you are interested, please let me know.
8 Subjects: including calculus, trigonometry, algebra 1, algebra 2

...I live near ASU campus, so naturally that is my target audience and preferred location to teach and tutor. I am, however, willing to visit off-campus sites up to 5 miles away. Started with AP Calc in high school (received a 5 on the test). I actively use calculus day to day with my physics students at a variety of levels. Geometry is essential to any physicist, student or professor.
14 Subjects: including algebra 1, algebra 2, trigonometry, SAT math

...I have taught & tutored college-level physics, both calculus-based and algebra/trigonometry-based. Topics have included force statics & dynamics, kinematics, conservation of energy & momentum, fluid statics & dynamics, light & optics, electrostatics, thermodynamics, quantum chemistry, & nuclear decay. Work with me so I can put you on the FAST track to physics success!
20 Subjects: including calculus, golf, MCAT, career development

...However, for those seeking additional job-related skills I am familiar with both the use and teaching of various helpful skills and concepts. Among these are the basics of corporate finance, especially how to read and understand an income statement, balance sheet, and cash flow statement. I can a...
19 Subjects: including algebra 2, precalculus, SAT math, finance

...I have great energy and untiring patience while working with young people. I am completely devoted to seeing a child reach his or her highest level of success. My world travels and life pursuits will help create a pleasant, insightful environment supplementing the learning experience.
12 Subjects: including algebra 1, American history, vocabulary, grammar
Re: "The Terminator" Fortran routine

> Subroutine energye, which contains this statement, is dead code,
> probably a relic.

Yes, this, again, is used in the main program. It calculates what we call the Jacobi integral, which in the restricted three-body problem is a constant value for the whole trajectory from Earth to Mars - the heliocentric part, when not near Earth or Mars. This is how we can confirm that the trajectory is accurate, which mine was. It's a very convenient way to authenticate trajectory calculations like this one.
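The check described here can be sketched as follows (in Haskell rather than the original Fortran, and with illustrative names; this assumes the circular restricted three-body problem in normalized rotating-frame units, which may differ from the units of the actual routine):

```haskell
-- Jacobi integral C = x^2 + y^2 + 2(1-mu)/r1 + 2*mu/r2 - (vx^2 + vy^2),
-- conserved along a trajectory in the circular restricted three-body
-- problem (rotating frame, normalized units). Recomputing C at several
-- points of an integrated trajectory authenticates the integration.
jacobi :: Double -> (Double, Double) -> (Double, Double) -> Double
jacobi mu (x, y) (vx, vy) =
  let r1 = sqrt ((x + mu)     ^ 2 + y ^ 2)  -- distance to the larger primary
      r2 = sqrt ((x - 1 + mu) ^ 2 + y ^ 2)  -- distance to the smaller primary
  in  x ^ 2 + y ^ 2 + 2 * (1 - mu) / r1 + 2 * mu / r2 - (vx ^ 2 + vy ^ 2)

main :: IO ()
main = print (jacobi 0.5 (0, 0) (0, 0))  -- 4.0 for this symmetric test state
```

If C drifts between sampled points, the integrator's step size or formulation is suspect.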
Matlab: Number formatting
April 7th 2008, 04:50 AM
Matlab: Number formatting
Is there a way for Matlab to write out numbers in a "normal" way? That is, to write 10000 simply as 10000? I'd like to have more than three figures in front of the decimal dot. Is that possible? These kind of print outs are just sad:
1.0e+003 *
0.0032 0.0035 0.0046 1.5706 1.5930
I'd like it to be:
3.2 3.5 4.6 1570.06 1593.0
April 7th 2008, 08:39 AM
Is there a way for Matlab to write out numbers in a "normal" way? That is, to write 10000 simply as 10000? I'd like to have more than three figures in front of the decimal dot. Is that possible? These kind of print outs are just sad:
1.0e+003 *
0.0032 0.0035 0.0046 1.5706 1.5930
I'd like it to be:
3.2 3.5 4.6 1570.06 1593.0
Try "help format"
From HaskellWiki

Revision as of 22:13, 3 January 2013

By Brent Yorgey, byorgey@cis.upenn.edu

Originally published 12 March 2009 in issue 13 of the Monad.Reader. Ported to the Haskell wiki in November 2011 by Geheimdienst. This is now the official version of the Typeclassopedia and supersedes the version published in the Monad.Reader. Please help update and extend it by editing it yourself or by leaving comments, suggestions, and questions on the talk page.

1 Abstract

The standard Haskell libraries feature a number of type classes with algebraic or category-theoretic underpinnings. Becoming a fluent Haskell hacker requires intimate familiarity with them all, yet acquiring this familiarity often involves combing through a mountain of tutorials, blog posts, mailing list archives, and IRC logs. The goal of this document is to serve as a starting point for the student of Haskell wishing to gain a firm grasp of its standard type classes. The essentials of each type class are introduced, with examples, commentary, and extensive references for further reading.

2 Introduction

Have you ever had any of the following thoughts?

• What the heck is a monoid, and how is it different from a monad?
• I finally figured out how to use Parsec with do-notation, and someone told me I should use something called Applicative instead. Um, what? • Someone in the #haskell IRC channel used (***), and when I asked lambdabot to tell me its type, it printed out scary gobbledygook that didn’t even fit on one line! Then someone used fmap fmap fmap and my brain exploded. • When I asked how to do something I thought was really complicated, people started typing things like zip.ap fmap.(id &&& wtf) and the scary thing is that they worked! Anyway, I think those people must actually be robots because there’s no way anyone could come up with that in two seconds off the top of their head. If you have, look no further! You, too, can write and understand concise, elegant, idiomatic Haskell code with the best of them. There are two keys to an expert Haskell hacker’s wisdom: 1. Understand the types. 2. Gain a deep intuition for each type class and its relationship to other type classes, backed up by familiarity with many examples. It’s impossible to overstate the importance of the first; the patient student of type signatures will uncover many profound secrets. Conversely, anyone ignorant of the types in their code is doomed to eternal uncertainty. “Hmm, it doesn’t compile ... maybe I’ll stick in an fmap here ... nope, let’s see ... maybe I need another (.) somewhere? ... um ...” The second key—gaining deep intuition, backed by examples—is also important, but much more difficult to attain. A primary goal of this document is to set you on the road to gaining such intuition. There is no royal road to Haskell. —Euclid This document can only be a starting point, since good intuition comes from hard work, not from learning the right metaphor. Anyone who reads and understands all of it will still have an arduous journey ahead—but sometimes a good starting point makes a big difference. 
It should be noted that this is not a Haskell tutorial; it is assumed that the reader is already familiar with the basics of Haskell, including the standard Prelude, the type system, data types, and type classes. The type classes we will be discussing and their interrelationships: ∗ Semigroup can be found in the semigroups package, Apply in the semigroupoids package, and Comonad in the comonad package. • Solid arrows point from the general to the specific; that is, if there is an arrow from Foo to Bar it means that every Bar is (or should be, or can be made into) a Foo. • Dotted arrows indicate some other sort of relationship. • Monad and ArrowApply are equivalent. • Semigroup, Apply and Comonad are greyed out since they are not actually (yet?) in the standard Haskell libraries ∗. One more note before we begin. The original spelling of “type class” is with two words, as evidenced by, for example, the Haskell 98 Revised Report, early papers on type classes like Type classes in Haskell and Type classes: exploring the design space, and Hudak et al.’s history of Haskell. However, as often happens with two-word phrases that see a lot of use, it has started to show up as one word (“typeclass”) or, rarely, hyphenated (“type-class”). When wearing my prescriptivist hat, I prefer “type class”, but realize (after changing into my descriptivist hat) that there's probably not much I can do about it. We now begin with the simplest type class of all: Functor. 3 Functor The Functor class (haddock) is the most basic and ubiquitous type class in the Haskell libraries. A simple intuition is that a Functor represents a “container” of some sort, along with the ability to apply a function uniformly to every element in the container. For example, a list is a container of elements, and we can apply a function to every element of a list, using map. 
As another example, a binary tree is also a container of elements, and it’s not hard to come up with a way to recursively apply a function to every element in a tree. Another intuition is that a Functor represents some sort of “computational context”. This intuition is generally more useful, but is more difficult to explain, precisely because it is so general. Some examples later should help to clarify the Functor-as-context point of view. In the end, however, a Functor is simply what it is defined to be; doubtless there are many examples of Functor instances that don’t exactly fit either of the above intuitions. The wise student will focus their attention on definitions and examples, without leaning too heavily on any particular metaphor. Intuition will come, in time, on its own. 3.1 Definition Here is the type class declaration for Functor: class Functor f where fmap :: (a -> b) -> f a -> f b Functor is exported by the Prelude, so no special imports are needed to use it. First, the f a and f b in the type signature for fmap tell us that f isn’t just a type; it is a type constructor which takes another type as a parameter. (A more precise way to say this is that the kind of f must be * -> *.) For example, Maybe is such a type constructor: Maybe is not a type in and of itself, but requires another type as a parameter, like Maybe Integer. So it would not make sense to say instance Functor Integer, but it could make sense to say instance Functor Maybe. Now look at the type of fmap: it takes any function from a to b, and a value of type f a, and outputs a value of type f b. From the container point of view, the intention is that fmap applies a function to each element of a container, without altering the structure of the container. From the context point of view, the intention is that fmap applies a function to a value without altering its context. Let’s look at a few specific examples. 
3.2 Instances ∗ Recall that [] has two meanings in Haskell: it can either stand for the empty list, or, as here, it can represent the list type constructor (pronounced “list-of”). In other words, the type [a] (list-of-a) can also be written [] a. ∗ You might ask why we need a separate map function. Why not just do away with the current list-only map function, and rename fmap to map instead? Well, that’s a good question. The usual argument is that someone just learning Haskell, when using map incorrectly, would much rather see an error about lists than about Functors. As noted before, the list constructor [] is a functor ∗; we can use the standard list function map to apply a function to each element of a list ∗. The Maybe type constructor is also a functor, representing a container which might hold a single element. The function fmap g has no effect on Nothing (there are no elements to which g can be applied), and simply applies g to the single element inside a Just. Alternatively, under the context interpretation, the list functor represents a context of nondeterministic choice; that is, a list can be thought of as representing a single value which is nondeterministically chosen from among several possibilities (the elements of the list). Likewise, the Maybe functor represents a context with possible failure. These instances are: instance Functor [] where fmap _ [] = [] fmap g (x:xs) = g x : fmap g xs -- or we could just say fmap = map instance Functor Maybe where fmap _ Nothing = Nothing fmap g (Just a) = Just (g a) As an aside, in idiomatic Haskell code you will often see the letter f used to stand for both an arbitrary Functor and an arbitrary function. In this document, f represents only Functors, and g or h always represent functions, but you should be aware of the potential confusion. In practice, what f stands for should always be clear from the context, by noting whether it is part of a type or part of the code. 
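For concreteness, here is a small runnable sketch of these two instances in action:

```haskell
main :: IO ()
main = do
  -- The list instance applies the function to every element.
  print (fmap (*2) [1, 2, 3 :: Int])         -- [2,4,6]
  -- The Maybe instance leaves Nothing alone and reaches inside a Just.
  print (fmap (+1) (Nothing :: Maybe Int))   -- Nothing
  print (fmap (+1) (Just 41 :: Maybe Int))   -- Just 42
```

Note that in both cases the shape of the container (the length of the list, the presence or absence of the Just) is untouched; only the elements change.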
There are other Functor instances in the standard libraries; below are a few. Note that some of these instances are not exported by the Prelude; to access them, you can import Control.Monad.Instances.

• Either e is an instance of Functor; Either e a represents a container which can contain either a value of type a, or a value of type e (often representing some sort of error condition). It is similar to Maybe in that it represents possible failure, but it can carry some extra information about the failure as well.

• ((,) e) represents a container which holds an “annotation” of type e along with the actual value it holds. It might be clearer to write it as (e,), by analogy with an operator section like (1+), but that syntax is not allowed in types (although it is allowed in expressions with the TupleSections extension enabled). However, you can certainly think of it as (e,).

• ((->) e) (which can be thought of as (e ->); see above), the type of functions which take a value of type e as a parameter, is a Functor. As a container, (e -> a) represents a (possibly infinite) set of values of a, indexed by values of e. Alternatively, and more usefully, ((->) e) can be thought of as a context in which a value of type e is available to be consulted in a read-only fashion. This is also why ((->) e) is sometimes referred to as the reader monad; more on this later.

• IO is a Functor; a value of type IO a represents a computation producing a value of type a which may have I/O effects. If m computes the value x while producing some I/O effects, then fmap g m will compute the value g x while producing the same I/O effects.

• Many standard types from the containers library (such as Tree, Map, and Sequence) are instances of Functor. A notable exception is Set, which cannot be made a Functor in Haskell (although it is certainly a mathematical functor) since it requires an Ord constraint on its elements; fmap must be applicable to any types a and b.
However, Set (and other similarly restricted data types) can be made an instance of a suitable generalization of Functor, either by making a and b arguments to the Functor type class themselves, or by adding an associated constraint. 1. Implement Functor instances for Either e and ((->) e). 2. Implement Functor instances for ((,) e) and for Pair, defined as data Pair a = Pair a a Explain their similarities and differences. 3. Implement a Functor instance for the type ITree, defined as data ITree a = Leaf (Int -> a) | Node [ITree a] 4. Give an example of a type of kind * -> * which cannot be made an instance of Functor (without using undefined). 5. Is this statement true or false? The composition of two Functors is also a Functor. If false, give a counterexample; if true, prove it by exhibiting some appropriate Haskell code. 3.3 Laws As far as the Haskell language itself is concerned, the only requirement to be a Functor is an implementation of fmap with the proper type. Any sensible Functor instance, however, will also satisfy the functor laws, which are part of the definition of a mathematical functor. There are two: fmap id = id fmap (g . h) = (fmap g) . (fmap h) ∗ Technically, these laws make f and fmap together an endofunctor on Hask, the category of Haskell types (ignoring ⊥, which is a party pooper). See Wikibook: Category theory. Together, these laws ensure that fmap g does not change the structure of a container, only the elements. Equivalently, and more simply, they ensure that fmap g changes a value without altering its context ∗. The first law says that mapping the identity function over every item in a container has no effect. The second says that mapping a composition of two functions over every item in a container is the same as first mapping one function, and then mapping the other. As an example, the following code is a “valid” instance of Functor (it typechecks), but it violates the functor laws. Do you see why? 
-- Evil Functor instance instance Functor [] where fmap _ [] = [] fmap g (x:xs) = g x : g x : fmap g xs Any Haskeller worth their salt would reject this code as a gruesome abomination. Unlike some other type classes we will encounter, a given type has at most one valid instance of Functor. This can be proven via the free theorem for the type of fmap. In fact, GHC can automatically derive Functor instances for many data types. A similar argument also shows that any Functor instance satisfying the first law (fmap id = id) will automatically satisfy the second law as well. Practically, this means that only the first law needs to be checked (usually by a very straightforward induction) to ensure that a Functor instance is valid. 1. Although it is not possible for a Functor instance to satisfy the first Functor law but not the second, the reverse is possible. Give an example of a (bogus) Functor instance which satisfies the second law but not the first. 2. Which laws are violated by the evil Functor instance for list shown above: both laws, or the first law alone? Give specific counterexamples. 3.4 Intuition There are two fundamental ways to think about fmap. The first has already been mentioned: it takes two parameters, a function and a container, and applies the function “inside” the container, producing a new container. Alternately, we can think of fmap as applying a function to a value in a context (without altering the context). Just like all other Haskell functions of “more than one parameter”, however, fmap is actually curried: it does not really take two parameters, but takes a single parameter and returns a function. For emphasis, we can write fmap’s type with extra parentheses: fmap :: (a -> b) -> (f a -> f b). Written in this form, it is apparent that fmap transforms a “normal” function (g :: a -> b) into one which operates over containers/contexts (fmap g :: f a -> f b). 
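The curried reading can be seen directly by partially applying fmap (a small sketch):

```haskell
-- Applying fmap to a function alone yields a new function on whole containers.
lifted :: Maybe Int -> Maybe Int
lifted = fmap (+1)                      -- (+1) lifted into the Maybe "world"

main :: IO ()
main = do
  print (lifted (Just 41))              -- Just 42
  print (lifted Nothing)                -- Nothing
  -- The same partial application works for any Functor, e.g. lists:
  print (fmap (+1) [1, 2, 3 :: Int])    -- [2,3,4]
```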
This transformation is often referred to as a lift; fmap “lifts” a function from the “normal world” into the “f world”. 3.5 Further reading A good starting point for reading about the category theory behind the concept of a functor is the excellent Haskell wikibook page on category theory. 4 Applicative A somewhat newer addition to the pantheon of standard Haskell type classes, applicative functors represent an abstraction lying in between Functor and Monad in expressivity, first described by McBride and Paterson. The title of their classic paper, Applicative Programming with Effects, gives a hint at the intended intuition behind the Applicative type class. It encapsulates certain sorts of “effectful” computations in a functionally pure way, and encourages an “applicative” programming style. Exactly what these things mean will be seen later. 4.1 Definition Recall that Functor allows us to lift a “normal” function to a function on computational contexts. But fmap doesn’t allow us to apply a function which is itself in a context to a value in a context. Applicative gives us just such a tool, (<*>). It also provides a method, pure, for embedding values in a default, “effect free” context. Here is the type class declaration for Applicative, as defined in Control.Applicative: class Functor f => Applicative f where pure :: a -> f a (<*>) :: f (a -> b) -> f a -> f b Note that every Applicative must also be a Functor. In fact, as we will see, fmap can be implemented using the Applicative methods, so every Applicative is a functor whether we like it or not; the Functor constraint forces us to be honest. ∗ Recall that ($) is just function application: f $ x = f x. As always, it’s crucial to understand the type signatures. First, consider (<*>): the best way of thinking about it comes from noting that the type of (<*>) is similar to the type of ($) ∗, but with everything enclosed in an f. In other words, (<*>) is just function application within a computational context. 
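The analogy with ($) can be made concrete (a minimal sketch in the Maybe functor):

```haskell
main :: IO ()
main = do
  -- Ordinary application:            function  $  argument
  print ((+1) $ (2 :: Int))                           -- 3
  -- Application within a context:  f (a -> b) <*> f a
  print (Just (+1) <*> Just (2 :: Int))               -- Just 3
  -- If either side is Nothing, the whole computation fails.
  print ((Nothing :: Maybe (Int -> Int)) <*> Just 2)  -- Nothing
```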
The type of (<*>) is also very similar to the type of fmap; the only difference is that the first parameter is f (a -> b), a function in a context, instead of a “normal” function (a -> b).

pure takes a value of any type a, and returns a context/container of type f a. The intention is that pure creates some sort of “default” container or “effect free” context. In fact, the behavior of pure is quite constrained by the laws it should satisfy in conjunction with (<*>). Usually, for a given implementation of (<*>) there is only one possible implementation of pure.

(Note that previous versions of the Typeclassopedia explained pure in terms of a type class Pointed, which can still be found in the pointed package. However, the current consensus is that Pointed is not very useful after all. For a more detailed explanation, see Why not Pointed?)

4.2 Laws

∗ See haddock for Applicative and Applicative programming with effects

Traditionally, there are four laws that Applicative instances should satisfy ∗. In some sense, they are all concerned with making sure that pure deserves its name:

• The identity law:

pure id <*> v = v

• Homomorphism:

pure f <*> pure x = pure (f x)

Intuitively, applying a non-effectful function to a non-effectful argument in an effectful context is the same as just applying the function to the argument and then injecting the result into the context with pure.

• Interchange:

u <*> pure y = pure ($ y) <*> u

Intuitively, this says that when evaluating the application of an effectful function to a pure argument, the order in which we evaluate the function and its argument doesn't matter.

• Composition:

u <*> (v <*> w) = pure (.) <*> u <*> v <*> w

This one is the trickiest law to gain intuition for. In some sense it is expressing a sort of associativity property of (<*>).
The reader may wish to simply convince themselves that this law is not entirely unreasonable.

Considered as left-to-right rewrite rules, the homomorphism, interchange, and composition laws actually constitute an algorithm for transforming any expression using pure and (<*>) into a canonical form with only a single use of pure at the very beginning and only left-nested occurrences of (<*>). Composition allows reassociating (<*>); interchange allows moving occurrences of pure leftwards; and homomorphism allows collapsing multiple adjacent occurrences of pure into one.

There is also a law specifying how Applicative should relate to Functor:

fmap g x = pure g <*> x

It says that mapping a pure function g over a context x is the same as first injecting g into a context with pure, and then applying it to x with (<*>). In other words, we can decompose fmap into two more atomic operations: injection into a context, and application within a context. The Control.Applicative module also defines (<$>) as a synonym for fmap, so the above law can also be expressed as: g <$> x = pure g <*> x.

1. (Tricky) One might imagine a variant of the interchange law that says something about applying a pure function to an effectful argument. Using the above laws, prove that

pure f <*> x = pure (flip ($)) <*> x <*> pure f

4.3 Instances

Most of the standard types which are instances of Functor are also instances of Applicative. Maybe can easily be made an instance of Applicative; writing such an instance is left as an exercise for the reader.

The list type constructor [] can actually be made an instance of Applicative in two ways; essentially, it comes down to whether we want to think of lists as ordered collections of elements, or as contexts representing multiple results of a nondeterministic computation (see Wadler’s How to replace failure by a list of successes). Let’s first consider the collection point of view.
Since there can only be one instance of a given type class for any particular type, one or both of the list instances of Applicative need to be defined for a newtype wrapper; as it happens, the nondeterministic computation instance is the default, and the collection instance is defined in terms of a newtype called ZipList. This instance is: newtype ZipList a = ZipList { getZipList :: [a] } instance Applicative ZipList where pure = undefined -- exercise (ZipList gs) <*> (ZipList xs) = ZipList (zipWith ($) gs xs) To apply a list of functions to a list of inputs with (<*>), we just match up the functions and inputs elementwise, and produce a list of the resulting outputs. In other words, we “zip” the lists together with function application, ($); hence the name ZipList. The other Applicative instance for lists, based on the nondeterministic computation point of view, is: instance Applicative [] where pure x = [x] gs <*> xs = [ g x | g <- gs, x <- xs ] Instead of applying functions to inputs pairwise, we apply each function to all the inputs in turn, and collect all the results in a list. Now we can write nondeterministic computations in a natural style. To add the numbers 3 and 4 deterministically, we can of course write (+) 3 4. But suppose instead of 3 we have a nondeterministic computation that might result in 2, 3, or 4; then we can write pure (+) <*> [2,3,4] <*> pure 4 or, more idiomatically, (+) <$> [2,3,4] <*> pure 4. There are several other Applicative instances as well: • IO is an instance of Applicative, and behaves exactly as you would think: to execute m1 <*> m2, first m1 is executed, resulting in a function f, then m2 is executed, resulting in a value x, and finally the value f x is returned as the result of executing m1 <*> m2. • ((,) a) is an Applicative, as long as a is an instance of Monoid (section Monoid). The a values are accumulated in parallel with the computation. 
• The Applicative module defines the Const type constructor; a value of type Const a b simply contains an a. This is an instance of Applicative for any Monoid a; this instance becomes especially useful in conjunction with things like Foldable (section Foldable).

• The WrappedMonad and WrappedArrow newtypes make any instances of Monad (section Monad) or Arrow (section Arrow) respectively into instances of Applicative; as we will see when we study those type classes, both are strictly more expressive than Applicative, in the sense that the Applicative methods can be implemented in terms of their methods.

1. Implement an instance of Applicative for Maybe.
2. Determine the correct definition of pure for the ZipList instance of Applicative—there is only one implementation that satisfies the law relating pure and (<*>).

4.4 Intuition

McBride and Paterson’s paper introduces the notation ⟦g x1 x2 ⋯ xn⟧ to denote function application in a computational context. If each xi has type f ti for some applicative functor f, and g has type t1 -> t2 -> ... -> tn -> t, then the entire expression ⟦g x1 ⋯ xn⟧ has type f t. You can think of this as applying a function to multiple “effectful” arguments. In this sense, the double bracket notation is a generalization of fmap, which allows us to apply a function to a single argument in a context.

Why do we need Applicative to implement this generalization of fmap? Suppose we use fmap to apply g to the first parameter x1. Then we get something of type f (t2 -> ... -> t), but now we are stuck: we can’t apply this function-in-a-context to the next argument with fmap. However, this is precisely what (<*>) allows us to do.

This suggests the proper translation of the idealized notation ⟦g x1 x2 ⋯ xn⟧ into Haskell, namely g <$> x1 <*> x2 <*> ... <*> xn, recalling that Control.Applicative defines (<$>) as convenient infix shorthand for fmap.
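The translation can be tried out directly; the sketch below also contrasts the two list instances discussed earlier (ZipList is imported from Control.Applicative):

```haskell
import Control.Applicative (ZipList (..))

main :: IO ()
main = do
  -- Nondeterministic instance: every function meets every argument.
  print ([(+1), (*10)] <*> [2, 3 :: Int])        -- [3,4,20,30]
  -- The example from the text, in applicative style.
  print ((+) <$> [2, 3, 4] <*> pure (4 :: Int))  -- [6,7,8]
  -- ZipList instance: functions and arguments matched up pairwise.
  print (getZipList (ZipList [(+1), (*10)] <*> ZipList [2, 3 :: Int]))  -- [3,30]
```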
This is what is meant by an “applicative style”—effectful computations can still be described in terms of function application; the only difference is that we have to use the special operator (<*>) for application instead of simple juxtaposition. Note that pure allows embedding “non-effectful” arguments in the middle of an idiomatic application, like g <$> x1 <*> pure x2 <*> x3 which has type f d, given g :: a -> b -> c -> d x1 :: f a x2 :: b x3 :: f c The double brackets are commonly known as “idiom brackets”, because they allow writing “idiomatic” function application, that is, function application that looks normal but has some special, non-standard meaning (determined by the particular instance of Applicative being used). Idiom brackets are not supported by GHC, but they are supported by the Strathclyde Haskell Enhancement, a preprocessor which (among many other things) translates idiom brackets into standard uses of (<$>) and (<*>). This can result in much more readable code when making heavy use of Applicative. 4.5 Alternative formulation An alternative, equivalent formulation of Applicative is given by class Functor f => Monoidal f where unit :: f () (**) :: f a -> f b -> f (a,b) Intuitively, this states that a monoidal functor is one which has some sort of "default shape" and which supports some sort of "combining" operation. pure and (<*>) are equivalent in power to unit and (**) (see the Exercises below). Furthermore, to deserve the name "monoidal" (see the section on Monoids), instances of Monoidal ought to satisfy the following laws, which seem much more straightforward than the traditional Applicative laws: ∗ Here f *** g = \x -> (f x, g x). See Arrows. • Naturality∗: fmap (f *** g) (u ** v) = fmap f u ** fmap g v ∗ In this and the following laws, ≅ refers to isomorphism rather than equality. In particular we consider (x,()) ≅ x ≅ ((),x) and ((x,y),z) ≅ (x,(y,z)). 
• Left identity∗: unit ** v ≅ v

• Right identity: u ** unit ≅ u

• Associativity: u ** (v ** w) ≅ (u ** v) ** w

These turn out to be equivalent to the usual Applicative laws. Much of this section was taken from a blog post by Edward Z. Yang; see his actual post for a bit more information.

1. Implement pure and (<*>) in terms of unit and (**), and vice versa.
2. (Tricky) Prove that given your implementations from the previous exercise, the usual Applicative laws and the Monoidal laws stated above are equivalent.

4.6 Further reading

There are many other useful combinators in the standard libraries implemented in terms of pure and (<*>): for example, (*>), (<*), (<**>), (<$), and so on (see the haddock documentation for Applicative). Judicious use of such secondary combinators can often make code using Applicative much easier to read.

McBride and Paterson’s original paper is a treasure-trove of information and examples, as well as some perspectives on the connection between Applicative and category theory. Beginners will find it difficult to make it through the entire paper, but it is extremely well-motivated—even beginners will be able to glean something from reading as far as they are able.

∗ Introduced by an earlier paper that was since superseded by Push-pull functional reactive programming.

Conal Elliott has been one of the biggest proponents of Applicative. For example, the Pan library for functional images and the reactive library for functional reactive programming (FRP)∗ make key use of it; his blog also contains many examples of Applicative in action. Building on the work of McBride and Paterson, Elliott also built the TypeCompose library, which embodies the observation (among others) that Applicative types are closed under composition; therefore, Applicative instances can often be automatically derived for complex types built out of simpler ones.
Although the Parsec parsing library (paper) was originally designed for use as a monad, in its most common use cases an Applicative instance can be used to great effect; Bryan O’Sullivan’s blog post is a good starting point. If the extra power provided by Monad isn’t needed, it’s usually a good idea to use Applicative instead.

A couple other nice examples of Applicative in action include the ConfigFile and HSQL libraries and the formlets library. Gershom Bazerman's post contains many insights into applicatives.

5 Monad

It’s a safe bet that if you’re reading this, you’ve heard of monads—although it’s quite possible you’ve never heard of Applicative before, or Arrow, or even Monoid. Why are monads such a big deal in Haskell? There are several reasons.

• Haskell does, in fact, single out monads for special attention by making them the framework in which to construct I/O operations.
• Haskell also singles out monads for special attention by providing special syntactic sugar for monadic expressions: the do-notation.
• Monad has been around longer than other abstract models of computation such as Applicative or Arrow.
• The more monad tutorials there are, the harder people think monads must be, and the more new monad tutorials are written by people who think they finally “get” monads (the monad tutorial fallacy).

I will let you judge for yourself whether these are good reasons. In the end, despite all the hoopla, Monad is just another type class. Let’s take a look at its definition.

5.1 Definition

The type class declaration for Monad is:

class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b
  m >> n = m >>= \_ -> n

  fail   :: String -> m a

The Monad type class is exported by the Prelude, along with a few standard instances. However, many utility functions are found in Control.Monad, and there are also several instances (such as ((->) e)) defined in Control.Monad.Instances.

Let’s examine the methods in the Monad class one by one.
The type of return should look familiar; it’s the same as pure. Indeed, return is pure, but with an unfortunate name. (Unfortunate, since someone coming from an imperative programming background might think that return is like the C or Java keyword of the same name, when in fact the similarities are minimal.) From a mathematical point of view, every monad is an applicative functor, but for historical reasons, the Monad type class declaration unfortunately does not require this. We can see that (>>) is a specialized version of (>>=), with a default implementation given. It is only included in the type class declaration so that specific instances of Monad can override the default implementation of (>>) with a more efficient one, if desired. Also, note that although _ >> n = n would be a type-correct implementation of (>>), it would not correspond to the intended semantics: the intention is that m >> n ignores the result of m, but not its effects. The fail function is an awful hack that has no place in the Monad class; more on this later. The only really interesting thing to look at—and what makes Monad strictly more powerful than Applicative—is (>>=), which is often called bind. An alternative definition of Monad could look like: class Applicative m => Monad' m where (>>=) :: m a -> (a -> m b) -> m b We could spend a while talking about the intuition behind (>>=)—and we will. But first, let’s look at some examples. 5.2 Instances Even if you don’t understand the intuition behind the Monad class, you can still create instances of it by just seeing where the types lead you. You may be surprised to find that this actually gets you a long way towards understanding the intuition; at the very least, it will give you some concrete examples to play with as you read more about the Monad class in general. The first few examples are from the standard Prelude; the remaining examples are from the transformers package. 
• The simplest possible instance of Monad is Identity, which is described in Dan Piponi’s highly recommended blog post on The Trivial Monad. Despite being “trivial”, it is a great introduction to the Monad type class, and contains some good exercises to get your brain working. • The next simplest instance of Monad is Maybe. We already know how to write return/pure for Maybe. So how do we write (>>=)? Well, let’s think about its type. Specializing for Maybe, we have (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b. If the first argument to (>>=) is Just x, then we have something of type a (namely, x), to which we can apply the second argument—resulting in a Maybe b, which is exactly what we wanted. What if the first argument to (>>=) is Nothing? In that case, we don’t have anything to which we can apply the a -> Maybe b function, so there’s only one thing we can do: yield Nothing. This instance is: instance Monad Maybe where return = Just (Just x) >>= g = g x Nothing >>= _ = Nothing We can already get a bit of intuition as to what is going on here: if we build up a computation by chaining together a bunch of functions with (>>=), as soon as any one of them fails, the entire computation will fail (because Nothing >>= f is Nothing, no matter what f is). The entire computation succeeds only if all the constituent functions individually succeed. So the Maybe monad models computations which may fail. • The Monad instance for the list constructor [] is similar to its Applicative instance; see the exercise below. • Of course, the IO constructor is famously a Monad, but its implementation is somewhat magical, and may in fact differ from compiler to compiler. It is worth emphasizing that the IO monad is the only monad which is magical. It allows us to build up, in an entirely pure way, values representing possibly effectful computations. The special value main, of type IO (), is taken by the runtime and actually executed, producing actual effects. 
Every other monad is functionally pure, and requires no special compiler support. We often speak of monadic values as “effectful computations”, but this is because some monads allow us to write code as if it has side effects, when in fact the monad is hiding the plumbing which allows these apparent side effects to be implemented in a functionally pure way. • As mentioned earlier, ((->) e) is known as the reader monad, since it describes computations in which a value of type e is available as a read-only environment. The Control.Monad.Reader module provides the Reader e a type, which is just a convenient newtype wrapper around (e -> a), along with an appropriate Monad instance and some Reader-specific utility functions such as ask (retrieve the environment), asks (retrieve a function of the environment), and local (run a subcomputation under a different environment). • The Control.Monad.Writer module provides the Writer monad, which allows information to be collected as a computation progresses. Writer w a is isomorphic to (a,w), where the output value a is carried along with an annotation or “log” of type w, which must be an instance of Monoid (see section Monoid); the special function tell performs logging. • The Control.Monad.State module provides the State s a type, a newtype wrapper around s -> (a,s). Something of type State s a represents a stateful computation which produces an a but can access and modify the state of type s along the way. The module also provides State-specific utility functions such as get (read the current state), gets (read a function of the current state), put (overwrite the state), and modify (apply a function to the state). • The Control.Monad.Cont module provides the Cont monad, which represents computations in continuation-passing style. It can be used to suspend and resume computations, and to implement non-local transfers of control, co-routines, other complex control structures—all in a functionally pure way. 
Cont has been called the “mother of all monads” because of its universal properties. 1. Implement a Monad instance for the list constructor, []. Follow the types! 2. Implement a Monad instance for ((->) e). 3. Implement Functor and Monad instances for Free f, defined as data Free f a = Var a | Node (f (Free f a)) You may assume that f has a Functor instance. This is known as the free monad built from the functor f. 5.3 Intuition Let’s look more closely at the type of (>>=). The basic intuition is that it combines two computations into one larger computation. The first argument, m a, is the first computation. However, it would be boring if the second argument were just an m b; then there would be no way for the computations to interact with one another (actually, this is exactly the situation with Applicative). So, the second argument to (>>=) has type a -> m b: a function of this type, given a result of the first computation, can produce a second computation to be run. In other words, x >>= k is a computation which runs x, and then uses the result(s) of x to decide what computation to run second, using the output of the second computation as the result of the entire computation. ∗ Actually, because Haskell allows general recursion, this is a lie: using a Haskell parsing library one can recursively construct infinite grammars, and hence Alternative by itself is enough to parse any context-sensitive language with a finite alphabet. See Parsing context-sensitive languages with Applicative. Intuitively, it is this ability to use the output from previous computations to decide what computations to run next that makes Monad more powerful than Applicative. The structure of an Applicative computation is fixed, whereas the structure of a Monad computation can change based on intermediate results. 
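This difference is easy to observe with Maybe: each step of a bind chain can inspect the previous result and choose what computation comes next, something a fixed Applicative structure cannot express. The helper names safeDiv and compute below are illustrative choices, not standard functions.

```haskell
-- Integer division that fails on a zero divisor.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Divide 100 by d, then branch on the *result*: halve the quotient if
-- it is even, otherwise fail. The second computation is chosen at run
-- time based on the output of the first one.
compute :: Int -> Maybe Int
compute d = safeDiv 100 d >>= \q ->
            if even q then Just (q `div` 2) else Nothing
```

For example, compute 2 succeeds with Just 25, compute 0 fails in the first step, and compute 4 fails in the second step because 100 `div` 4 = 25 is odd.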
This also means that parsers built using an Applicative interface can only parse context-free languages; in order to parse context-sensitive languages a Monad interface is needed.∗ To see the increased power of Monad from a different point of view, let’s see what happens if we try to implement (>>=) in terms of fmap, pure, and (<*>). We are given a value x of type m a, and a function k of type a -> m b, so the only thing we can do is apply k to x. We can’t apply it directly, of course; we have to use fmap to lift it over the m. But what is the type of fmap k? Well, it’s m a -> m (m b). So after we apply it to x, we are left with something of type m (m b)—but now we are stuck; what we really want is an m b, but there’s no way to get there from here. We can add m’s using pure, but we have no way to collapse multiple m’s into one. ∗ You might hear some people claim that that the definition in terms of return, fmap, and join is the “math definition” and the definition in terms of return and (>>=) is something specific to Haskell. In fact, both definitions were known in the mathematics community long before Haskell picked up monads. This ability to collapse multiple m’s is exactly the ability provided by the function join :: m (m a) -> m a, and it should come as no surprise that an alternative definition of Monad can be given in terms of join: class Applicative m => Monad'' m where join :: m (m a) -> m a In fact, the canonical definition of monads in category theory is in terms of return, fmap, and join (often called η, T, and μ in the mathematical literature). Haskell uses an alternative formulation with (>>=) instead of join since it is more convenient to use ∗. However, sometimes it can be easier to think about Monad instances in terms of join, since it is a more “atomic” operation. (For example, join for the list monad is just concat.) 1. Implement (>>=) in terms of fmap (or liftM) and join. 2. Now implement join and fmap (liftM) in terms of (>>=) and return. 
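To get a feel for join in concrete monads, here is a small sketch using the library function from Control.Monad (this shows built-in behavior, not a solution to the exercises above):

```haskell
import Control.Monad (join)

-- join collapses one layer of monadic structure.
flatMaybe :: Maybe Int
flatMaybe = join (Just (Just 3))     -- Just 3

flatNothing :: Maybe Int
flatNothing = join (Just Nothing)    -- Nothing

-- For the list monad, join is just concat.
flatList :: [Int]
flatList = join [[1,2],[3]]          -- [1,2,3]
```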
5.4 Utility functions The Control.Monad module provides a large number of convenient utility functions, all of which can be implemented in terms of the basic Monad operations (return and (>>=) in particular). We have already seen one of them, namely, join. We also mention some other noteworthy ones here; implementing these utility functions oneself is a good exercise. For a more detailed guide to these functions, with commentary and example code, see Henk-Jan van Tuyl’s tour. ∗ Still, it is unclear how this "bug" should be fixed. Making Monad require a Functor instance has some drawbacks, as mentioned in this 2011 mailing-list discussion. —Geheimdienst • liftM :: Monad m => (a -> b) -> m a -> m b. This should be familiar; of course, it is just fmap. The fact that we have both fmap and liftM is an unfortunate consequence of the fact that the Monad type class does not require a Functor instance, even though mathematically speaking, every monad is a functor. However, fmap and liftM are essentially interchangeable, since it is a bug (in a social rather than technical sense) for any type to be an instance of Monad without also being an instance of Functor ∗. • ap :: Monad m => m (a -> b) -> m a -> m b should also be familiar: it is equivalent to (<*>), justifying the claim that the Monad interface is strictly more powerful than Applicative. We can make any Monad into an instance of Applicative by setting pure = return and (<*>) = ap. • sequence :: Monad m => [m a] -> m [a] takes a list of computations and combines them into one computation which collects a list of their results. It is again something of a historical accident that sequence has a Monad constraint, since it can actually be implemented only in terms of Applicative. There is an additional generalization of sequence to structures other than lists, which will be discussed in the section on Traversable. • replicateM :: Monad m => Int -> m a -> m [a] is simply a combination of replicate and sequence. 
• when :: Monad m => Bool -> m () -> m () conditionally executes a computation, evaluating to its second argument if the test is True, and to return () if the test is False. A collection of other sorts of monadic conditionals can be found in the IfElse package.

• mapM :: Monad m => (a -> m b) -> [a] -> m [b] maps its first argument over the second, and sequences the results. The forM function is just mapM with its arguments reversed; it is called forM since it models generalized for loops: the list [a] provides the loop indices, and the function a -> m b specifies the “body” of the loop for each index.

• (=<<) :: Monad m => (a -> m b) -> m a -> m b is just (>>=) with its arguments reversed; sometimes this direction is more convenient since it corresponds more closely to function application.

• (>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c is sort of like function composition, but with an extra m on the result type of each function, and the arguments swapped. We’ll have more to say about this operation later. There is also a flipped variant, (<=<).

• The guard function is for use with instances of MonadPlus, which is discussed at the end of the Monoid section.

Many of these functions also have “underscored” variants, such as sequence_ and mapM_; these variants throw away the results of the computations passed to them as arguments, using them only for their side effects.

Other monadic functions which are occasionally useful include filterM, zipWithM, foldM, and forever.

5.5 Laws

There are several laws that instances of Monad should satisfy (see also the Monad laws wiki page). The standard presentation is:

return a >>= k  =  k a
m >>= return    =  m
m >>= (\x -> k x >>= h)  =  (m >>= k) >>= h
fmap f xs  =  xs >>= return . f  =  liftM f xs

The first and second laws express the fact that return behaves nicely: if we inject a value a into a monadic context with return, and then bind to k, it is the same as just applying k to a in the first place; if we bind a computation m to return, nothing changes. The third law essentially says that (>>=) is associative, sort of. The last law ensures that fmap and liftM are the same for types which are instances of both Functor and Monad—which, as already noted, should be every instance of Monad.

∗ I like to pronounce this operator “fish”.

However, the presentation of the above laws, especially the third, is marred by the asymmetry of (>>=). It’s hard to look at the laws and see what they’re really saying. I prefer a much more elegant version of the laws, which is formulated in terms of (>=>)∗. Recall that (>=>) “composes” two functions of type a -> m b and b -> m c. You can think of something of type a -> m b (roughly) as a function from a to b which may also have some sort of effect in the context corresponding to m. (>=>) lets us compose these “effectful functions”, and we would like to know what properties (>=>) has. The monad laws reformulated in terms of (>=>) are:

return >=> g  =  g
g >=> return  =  g
(g >=> h) >=> k  =  g >=> (h >=> k)

∗ As fans of category theory will note, these laws say precisely that functions of type a -> m b are the arrows of a category with (>=>) as composition! Indeed, this is known as the Kleisli category of the monad m. It will come up again when we discuss Arrows.

Ah, much better! The laws simply state that return is the identity of (>=>), and that (>=>) is associative∗.

There is also a formulation of the monad laws in terms of fmap, return, and join; for a discussion of this formulation, see the Haskell wikibook page on category theory.

1. Given the definition g >=> h = \x -> g x >>= h, prove the equivalence of the above laws and the usual monad laws.
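Laws cannot be proved by running code, but spot-checking them for a particular monad at a few sample points is a useful sanity exercise. A minimal sketch for Maybe, with arbitrarily chosen functions f and g:

```haskell
import Control.Monad ((>=>))

f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = Just (x + 1)

-- Spot-check the three (>=>) laws at a given sample point.
leftId, rightId, assoc :: Int -> Bool
leftId  x = (return >=> f) x == f x
rightId x = (f >=> return) x == f x
assoc   x = ((f >=> g) >=> f) x == (f >=> (g >=> f)) x
```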
5.6 do notation

Haskell’s special do notation supports an “imperative style” of programming by providing syntactic sugar for chains of monadic expressions. The genesis of the notation lies in realizing that something like

a >>= \x -> b >> c >>= \y -> d

can be more readably written by putting successive computations on separate lines:

a >>= \x ->
b >>
c >>= \y ->
d

This emphasizes that the overall computation consists of four computations a, b, c, and d, and that x is bound to the result of a, and y is bound to the result of c (b, c, and d are allowed to refer to x, and d is allowed to refer to y as well). From here it is not hard to imagine a nicer notation:

do { x <- a
   ;      b
   ; y <- c
   ;      d
   }

(The curly braces and semicolons may optionally be omitted; the Haskell parser uses layout to determine where they should be inserted.) This discussion should make clear that do notation is just syntactic sugar. In fact, do blocks are recursively translated into monad operations (almost) like this:

do e                    →  e
do { e; stmts }         →  e >> do { stmts }
do { v <- e; stmts }    →  e >>= \v -> do { stmts }
do { let decls; stmts } →  let decls in do { stmts }

This is not quite the whole story, since v might be a pattern instead of a variable. For example, one can write

do { x:xs <- foo
   ; return x
   }

but what happens if foo produces an empty list? Well, remember that ugly fail function in the Monad type class declaration? That’s what happens. See section 3.14 of the Haskell Report for the full details. See also the discussion of MonadPlus and MonadZero in the section on other monoidal classes.

A final note on intuition: do notation plays very strongly to the “computational context” point of view rather than the “container” point of view, since the binding notation x <- m is suggestive of “extracting” a single x from m and doing something with it. But m may represent some sort of a container, such as a list or a tree; the meaning of x <- m is entirely dependent on the implementation of (>>=).
For example, if m is a list, x <- m actually means that x will take on each value from the list in turn. 5.7 Further reading Philip Wadler was the first to propose using monads to structure functional programs. His paper is still a readable introduction to the subject. ∗ All About Monads, Monads as containers, Understanding monads, The Monadic Way, You Could Have Invented Monads! (And Maybe You Already Have.), there’s a monster in my Haskell!, Understanding Monads. For real., Monads in 15 minutes: Backtracking and Maybe, Monads as computation, Practical Monads There are, of course, numerous monad tutorials of varying quality ∗. A few of the best include Cale Gibbard’s Monads as containers and Monads as computation; Jeff Newbern’s All About Monads, a comprehensive guide with lots of examples; and Dan Piponi’s You Could Have Invented Monads!, which features great exercises. If you just want to know how to use IO, you could consult the Introduction to IO. Even this is just a sampling; the monad tutorials timeline is a more complete list. (All these monad tutorials have prompted parodies like think of a monad ... as well as other kinds of backlash like Monads! (and Why Monad Tutorials Are All Awful) or Abstraction, intuition, and the “monad tutorial fallacy”.) Other good monad references which are not necessarily tutorials include Henk-Jan van Tuyl’s tour of the functions in Control.Monad, Dan Piponi’s field guide, Tim Newsham’s What’s a Monad?, and Chris Smith's excellent article Why Do Monads Matter?. There are also many blog posts which have been written on various aspects of monads; a collection of links can be found under Blog articles/Monads. For help constructing monads from scratch, and for obtaining a "deep embedding" of monad operations suitable for use in, say, compiling a domain-specific language, see apfelmus's operational package. 
One of the quirks of the Monad class and the Haskell type system is that it is not possible to straightforwardly declare Monad instances for types which require a class constraint on their data, even if they are monads from a mathematical point of view. For example, Data.Set requires an Ord constraint on its data, so it cannot be easily made an instance of Monad. A solution to this problem was first described by Eric Kidd, and later made into a library named rmonad by Ganesh Sittampalam and Peter Gavin. There are many good reasons for eschewing do notation; some have gone so far as to consider it harmful. Monads can be generalized in various ways; for an exposition of one possibility, see Robert Atkey’s paper on parameterized monads, or Dan Piponi’s Beyond Monads. For the categorically inclined, monads can be viewed as monoids (From Monoids to Monads) and also as closure operators Triples and Closure. Derek Elkins’s article in issue 13 of the Monad.Reader contains an exposition of the category-theoretic underpinnings of some of the standard Monad instances, such as State and Cont. Jonathan Hill and Keith Clarke have an early paper explaining the connection between monads as they arise in category theory and as used in functional programming. There is also a web page by Oleg Kiselyov explaining the history of the IO monad. Links to many more research papers related to monads can be found under Research papers/Monads and arrows. 6 Monad transformers One would often like to be able to combine two monads into one: for example, to have stateful, nondeterministic computations (State + []), or computations which may fail and can consult a read-only environment (Maybe + Reader), and so on. Unfortunately, monads do not compose as nicely as applicative functors (yet another reason to use Applicative if you don’t need the full power that Monad provides), but some monads can be combined in certain ways. 
6.1 Standard monad transformers

The transformers library provides a number of standard monad transformers. Each monad transformer adds a particular capability/feature/effect to any existing monad.

For example, StateT s Maybe is an instance of Monad; computations of type StateT s Maybe a may fail, and have access to a mutable state of type s. Monad transformers can be multiply stacked. One thing to keep in mind while using monad transformers is that the order of composition matters. For example, when a StateT s Maybe a computation fails, the state ceases being updated (indeed, it simply disappears); on the other hand, the state of a MaybeT (State s) a computation may continue to be modified even after the computation has "failed". This may seem backwards, but it is correct. Monad transformers build composite monads “inside out”; MaybeT (State s) a is isomorphic to s -> (Maybe a, s). (Lambdabot has an indispensable @unmtl command which you can use to “unpack” a monad transformer stack in this way.) Intuitively, the monads become "more fundamental" the further down in the stack you get, and the effects of a given monad "have precedence" over the effects of monads further up the stack. Of course, this is just handwaving, and if you are unsure of the proper order for some monads you wish to combine, there is no substitute for using @unmtl or simply trying out the various options.

6.2 Definition and laws

All monad transformers should implement the MonadTrans type class, defined in Control.Monad.Trans.Class:

class MonadTrans t where
  lift :: Monad m => m a -> t m a

It allows arbitrary computations in the base monad m to be “lifted” into computations in the transformed monad t m. (Note that type application associates to the left, just like function application, so t m a = (t m) a.)

lift must satisfy the laws

lift . return   =  return
lift (m >>= f)  =  lift m >>= (lift . f)

which intuitively state that lift transforms m a computations into t m a computations in a "sensible" way, which sends the return and (>>=) of m to the return and (>>=) of t m.

1. What is the kind of t in the declaration of MonadTrans?

6.3 Transformer type classes and "capability" style

∗ The only problem with this scheme is the quadratic number of instances required as the number of standard monad transformers grows—but as the current set of standard monad transformers seems adequate for most common use cases, this may not be that big of a deal.

There are also type classes (provided by the mtl package) for the operations of each transformer. For example, the MonadState type class provides the state-specific methods get and put, allowing you to conveniently use these methods not only with State, but with any monad which is an instance of MonadState—including MaybeT (State s), StateT s (ReaderT r IO), and so on. Similar type classes exist for Reader, Writer, Cont, IO, and others∗.

These type classes serve two purposes. First, they get rid of (most of) the need for explicitly using lift, giving a type-directed way to automatically determine the right number of calls to lift. Simply writing put will be automatically translated into lift . put, lift . lift . put, or something similar depending on what concrete monad stack you are using. Second, they give you more flexibility to switch between different concrete monad stacks. For example, if you are writing a state-based algorithm, don't write

foo :: State Int Char
foo = modify (*2) >> return 'x'

but rather

foo :: MonadState Int m => m Char
foo = modify (*2) >> return 'x'

Now, if somewhere down the line you realize you need to introduce the possibility of failure, you might switch from State Int to MaybeT (State Int). The type of the first version of foo would need to be modified to reflect this change, but the second version of foo can still be used as-is. However, this sort of "capability-based" style (e.g.
specifying that foo works for any monad with the "state capability") quickly runs into problems when you try to naively scale it up: for example, what if you need to maintain two independent states? A framework for solving this and related problems is described by Schrijvers and Oliveira (Monads, zippers and views: virtualizing the monad stack, ICFP 2011) and is implemented in the Monatron package.

6.4 Composing monads

Is the composition of two monads always a monad? As hinted previously, the answer is no. For example, XXX insert example here.

Since Applicative functors are closed under composition, the problem must lie with join. Indeed, suppose m and n are arbitrary monads; to make a monad out of their composition we would need to be able to implement

join :: m (n (m (n a))) -> m (n a)

but it is not clear how this could be done in general. The join method for m is no help, because the two occurrences of m are not next to each other (and likewise for n).

However, one situation in which it can be done is if n distributes over m, that is, if there is a function

distrib :: n (m a) -> m (n a)

satisfying certain laws. See Jones and Duponcheel (Composing Monads); see also the section on Traversable.

• Implement join :: M (N (M (N a))) -> M (N a), given distrib :: N (M a) -> M (N a) and assuming M and N are instances of Monad.

6.5 Further reading

Much of the monad transformer library (originally mtl, now split between mtl and transformers), including the Reader, Writer, State, and other monads, as well as the monad transformer framework itself, was inspired by Mark Jones’s classic paper Functional Programming with Overloading and Higher-Order Polymorphism. It’s still very much worth a read—and highly readable—after almost fifteen years.

See Edward Kmett's mailing list message for a description of the history and relationships among monad transformer packages (mtl, transformers, monads-fd, monads-tf).

There are two excellent references on monad transformers.
Martin Grabmüller’s Monad Transformers Step by Step is a thorough description, with running examples, of how to use monad transformers to elegantly build up computations with various effects. Cale Gibbard’s article on how to use monad transformers is more practical, describing how to structure code using monad transformers to make writing it as painless as possible. Another good starting place for learning about monad transformers is a blog post by Dan Piponi.

The ListT transformer from the transformers package comes with the caveat that ListT m is only a monad when m is commutative, that is, when ma >>= \a -> mb >>= \b -> foo is equivalent to mb >>= \b -> ma >>= \a -> foo (i.e. the order of m's effects does not matter). For one explanation why, see Dan Piponi's blog post "Why isn't ListT [] a monad". For more examples, as well as a design for a version of ListT which does not have this problem, see ListT done right.

There is an alternative way to compose monads, using coproducts, as described by Lüth and Ghani. This method is interesting but has not (yet?) seen widespread use.

7 MonadFix

Note: MonadFix is included here for completeness (and because it is interesting) but seems not to be used much. Skipping this section on a first read-through is perfectly OK (and perhaps even recommended).

7.1 mdo/do rec notation

∗ In GHC 7.6, the flag has been changed to -XRecursiveDo.

The MonadFix class describes monads which support the special fixpoint operation mfix :: (a -> m a) -> m a, which allows the output of monadic computations to be defined via (effectful) recursion. This is supported in GHC by a special “recursive do” notation, enabled by the -XDoRec flag∗. Within a do block, one may have a nested rec block, like so:

do { x <- foo
   ; rec { y <- baz
         ; z <- bar
         ;      bob
         }
   ; w <- frob
   }

Normally (if we had do in place of rec in the above example), y would be in scope in bar and bob but not in baz, and z would be in scope only in bob.
With the rec, however, y and z are both in scope in all three of baz, bar, and bob. A rec block is analogous to a let block such as

  let { y = baz
      ; z = bar
      }
  in bob

because, in Haskell, every variable bound in a let-block is in scope throughout the entire block. (From this point of view, Haskell's normal do blocks are analogous to Scheme's let* construct.)

What could such a feature be used for? One of the motivating examples given in the original paper describing MonadFix (see below) is encoding circuit descriptions. A line in a do-block such as

  x <- gate y z

describes a gate whose input wires are labeled y and z and whose output wire is labeled x. Many (most?) useful circuits, however, involve some sort of feedback loop, making them impossible to write in a normal do-block (since some wire would have to be mentioned as an input before being listed as an output). Using a rec block solves this problem.

7.2 Examples and intuition

Of course, not every monad supports such recursive binding. However, as mentioned above, it suffices to have an implementation of mfix :: (a -> m a) -> m a, satisfying a few laws. Let's try implementing mfix for the Maybe monad. That is, we want to implement a function

  maybeFix :: (a -> Maybe a) -> Maybe a

∗ Actually, fix is implemented slightly differently for efficiency reasons; but the given definition is equivalent and simpler for the present purpose.

Let's think for a moment about the implementation∗ of the non-monadic fix :: (a -> a) -> a:

  fix :: (a -> a) -> a
  fix f = f (fix f)

Inspired by fix, our first attempt at implementing maybeFix might be something like

  maybeFix :: (a -> Maybe a) -> Maybe a
  maybeFix f = maybeFix f >>= f

This has the right type. However, something seems wrong: there is nothing in particular here about Maybe; maybeFix actually has the more general type Monad m => (a -> m a) -> m a. But didn't we just say that not all monads support mfix? The answer is that although this implementation of maybeFix has the right type, it does not have the intended semantics.
If we think about how (>>=) works for the Maybe monad (by pattern-matching on its first argument to see whether it is Nothing or Just) we can see that this definition of maybeFix is completely useless: it will just recurse infinitely, trying to decide whether it is going to return Nothing or Just, without ever even so much as a glance in the direction of f.

The trick is to simply assume that maybeFix will return Just, and get on with life!

  maybeFix :: (a -> Maybe a) -> Maybe a
  maybeFix f = ma
    where ma = f (fromJust ma)

This says that the result of maybeFix is ma, and assuming that ma = Just x, it is defined (recursively) to be equal to f x.

Why is this OK? Isn't fromJust almost as bad as unsafePerformIO? Well, usually, yes. This is just about the only situation in which it is justified! The interesting thing to note is that maybeFix will never crash -- although it may, of course, fail to terminate. The only way we could get a crash is if we try to evaluate fromJust ma when we know that ma = Nothing. But how could we know ma = Nothing? Since ma is defined as f (fromJust ma), it must be that this expression has already been evaluated to Nothing -- in which case there is no reason for us to be evaluating fromJust ma in the first place!

To see this from another point of view, we can consider three possibilities. First, if f outputs Nothing without looking at its argument, then maybeFix f clearly returns Nothing. Second, if f always outputs Just x, where x depends on its argument, then the recursion can proceed usefully: fromJust ma will be able to evaluate to x, thus feeding f's output back to it as input. Third, if f tries to use its argument to decide whether to output Just or Nothing, then maybeFix f will not terminate: evaluating f's argument requires evaluating ma to see whether it is Just, which requires evaluating f (fromJust ma), which requires evaluating ma, ... and so on.
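The three cases can be checked directly; this sketch restates maybeFix from above so the snippet stands alone (case1 and case2 are my names):

```haskell
import Data.Maybe (fromJust)

-- The implementation from the text.
maybeFix :: (a -> Maybe a) -> Maybe a
maybeFix f = ma
  where ma = f (fromJust ma)

-- Case 1: f ignores its argument; the result is Nothing, with no crash.
case1 :: Maybe Int
case1 = maybeFix (const Nothing)

-- Case 2: f always returns Just; here it lazily builds an infinite
-- list of ones inside the Just.
case2 :: Maybe [Int]
case2 = maybeFix (\xs -> Just (1 : xs))
```

case1 is Nothing, and fmap (take 3) case2 yields Just [1,1,1]. A case-3 function such as \x -> if x > 0 then Just x else Nothing loops forever under maybeFix, just as the text predicts.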
There are also instances of MonadFix for lists (which works analogously to the instance for Maybe), for ST, and for IO. The instance for IO is particularly amusing: it creates a new IORef (with a dummy value), immediately reads its contents using unsafeInterleaveIO (which delays the actual reading lazily until the value is needed), uses the contents of the IORef to compute a new value, which it then writes back into the IORef. It almost seems, spookily, that mfix is sending a value back in time to itself through the IORef -- though of course what is really going on is that the reading is delayed just long enough (via unsafeInterleaveIO) to get the process bootstrapped.

• Implement a MonadFix instance for [].

7.3 GHC 7.6 changes

GHC 7.6 reinstated the old mdo syntax, so the example at the start of this section can be written

  mdo { x <- foo
      ; y <- baz
      ; z <- bar
      ; bob
      ; w <- frob
      }

which will be translated into the original example (assuming that, say, bar and bob refer to y). The difference is that mdo will analyze the code in order to find minimal recursive blocks, which will be placed in rec blocks, whereas rec blocks desugar directly into calls to mfix without any further analysis.

7.4 Further reading

For more information (such as the precise desugaring rules for rec blocks), see Levent Erkök and John Launchbury's 2002 Haskell workshop paper, A Recursive do for Haskell, or for full details, Levent Erkök’s thesis, Value Recursion in Monadic Computations. (Note, while reading, that MonadFix used to be called MonadRec.) You can also read the GHC user manual section on recursive do-notation.

8 Semigroup

A semigroup is a set $S$ together with a binary operation $\oplus$ which combines elements from $S$. The $\oplus$ operator is required to be associative (that is, $(a \oplus b) \oplus c = a \oplus (b \oplus c)$, for any $a,b,c$ which are elements of $S$).
For example, the natural numbers under addition form a semigroup: the sum of any two natural numbers is a natural number, and $(a+b)+c = a+(b+c)$ for any natural numbers $a$, $b$, and $c$. The integers under multiplication also form a semigroup, as do the integers (or rationals, or reals) under $\max$ or $\min$, Boolean values under conjunction and disjunction, lists under concatenation, functions from a set to itself under composition ... Semigroups show up all over the place, once you know to look for them.

8.1 Definition

Semigroups are not (yet?) defined in the base package, but the semigroups package provides a standard definition. The definition of the Semigroup type class (haddock) is as follows:

  class Semigroup a where
    (<>) :: a -> a -> a

    sconcat :: NonEmpty a -> a
    sconcat (a :| as) = go a as
      where go b (c:cs) = b <> go c cs
            go b []     = b

    times1p :: Whole n => n -> a -> a
    times1p = ...

The really important method is (<>), representing the associative binary operation. The other two methods have default implementations in terms of (<>), and are included in the type class in case some instances can give more efficient implementations than the default. sconcat reduces a nonempty list using (<>); times1p n is equivalent to (but more efficient than) sconcat . replicate n. See the haddock documentation for more information on sconcat and times1p.

8.2 Laws

The only law is that (<>) must be associative:

  (x <> y) <> z = x <> (y <> z)

More coming soon...

9 Monoid

Many semigroups have a special element e for which the binary operation $\oplus$ is the identity, that is, $e \oplus x = x \oplus e = x$ for every element x. Such a semigroup-with-identity-element is called a monoid.
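Before moving on to Monoid proper, here is a small Semigroup instance sketch (the newtype name MinInt is mine; with recent GHCs the same class is also available from Data.Semigroup in base):

```haskell
import Data.Semigroup (Semigroup(..))
import Data.List.NonEmpty (NonEmpty(..))

-- Int under min is associative, hence a Semigroup.
-- (Data.Semigroup ships a similar Min newtype.)
newtype MinInt = MinInt Int
  deriving (Eq, Show)

instance Semigroup MinInt where
  MinInt a <> MinInt b = MinInt (min a b)

-- sconcat folds a nonempty list with (<>):
smallest :: MinInt
smallest = sconcat (MinInt 3 :| [MinInt 1, MinInt 2])
```

Here smallest evaluates to MinInt 1.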
9.1 Definition

The definition of the Monoid type class (defined in Data.Monoid; haddock) is:

  class Monoid a where
    mempty  :: a
    mappend :: a -> a -> a

    mconcat :: [a] -> a
    mconcat = foldr mappend mempty

The mempty value specifies the identity element of the monoid, and mappend is the binary operation. The default definition for mconcat “reduces” a list of elements by combining them all with mappend, using a right fold. It is only in the Monoid class so that specific instances have the option of providing an alternative, more efficient implementation; usually, you can safely ignore mconcat when creating a Monoid instance, since its default definition will work just fine.

The Monoid methods are rather unfortunately named; they are inspired by the list instance of Monoid, where indeed mempty = [] and mappend = (++), but this is misleading since many monoids have little to do with appending (see these Comments from OCaml Hacker Brian Hurt on the haskell-cafe mailing list).

9.2 Laws

Of course, every Monoid instance should actually be a monoid in the mathematical sense, which implies these laws:

  mempty `mappend` x = x
  x `mappend` mempty = x
  (x `mappend` y) `mappend` z = x `mappend` (y `mappend` z)

9.3 Instances

There are quite a few interesting Monoid instances defined in Data.Monoid.

• [a] is a Monoid, with mempty = [] and mappend = (++). It is not hard to check that (x ++ y) ++ z = x ++ (y ++ z) for any lists x, y, and z, and that the empty list is the identity: [] ++ x = x ++ [] = x.

• As noted previously, we can make a monoid out of any numeric type under either addition or multiplication. However, since we can’t have two instances for the same type, Data.Monoid provides two newtype wrappers, Sum and Product, with appropriate Monoid instances.

  > getSum (mconcat . map Sum $ [1..5])
  15
  > getProduct (mconcat . map Product $ [1..5])
  120

This example code is silly, of course; we could just write sum [1..5] and product [1..5].
Nevertheless, these instances are useful in more generalized settings, as we will see in the section on Foldable.

• Any and All are newtype wrappers providing Monoid instances for Bool (under disjunction and conjunction, respectively).

• There are three instances for Maybe: a basic instance which lifts a Monoid instance for a to an instance for Maybe a, and two newtype wrappers First and Last for which mappend selects the first (respectively last) non-Nothing item.

• Endo a is a newtype wrapper for functions a -> a, which form a monoid under composition.

• There are several ways to “lift” Monoid instances to instances with additional structure. We have already seen that an instance for a can be lifted to an instance for Maybe a. There are also tuple instances: if a and b are instances of Monoid, then so is (a,b), using the monoid operations for a and b in the obvious pairwise manner. Finally, if a is a Monoid, then so is the function type e -> a for any e; in particular, g `mappend` h is the function which applies both g and h to its argument and then combines the results using the underlying Monoid instance for a. This can be quite useful and elegant (see example).

• The type Ordering = LT | EQ | GT is a Monoid, defined in such a way that mconcat (zipWith compare xs ys) computes the lexicographic ordering of xs and ys (if xs and ys have the same length). In particular, mempty = EQ, and mappend evaluates to its leftmost non-EQ argument (or EQ if both arguments are EQ). This can be used together with the function instance of Monoid to do some clever things (example).

• There are also Monoid instances for several standard data structures in the containers library (haddock), including Map, Set, and Sequence.

Monoid is also used to enable several other type class instances.
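The Ordering and function instances compose nicely; a brief sketch (the helper names are mine):

```haskell
import Data.Monoid (mconcat, mappend)
import Data.Ord (comparing)

-- mconcat on Ordering gives lexicographic comparison:
lexCompare :: String -> String -> Ordering
lexCompare xs ys = mconcat (zipWith compare xs ys)

-- The function instance lifts Ordering pointwise, so two comparators
-- combine into "compare by fst, break ties by snd":
comparePairs :: (Int, Int) -> (Int, Int) -> Ordering
comparePairs = comparing fst `mappend` comparing snd
```

For example, comparePairs (1,2) (1,3) is LT: the first components tie (EQ), so the second components decide.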
As noted previously, we can use Monoid to make ((,) e) an instance of Applicative:

  instance Monoid e => Applicative ((,) e) where
    pure x = (mempty, x)
    (u, f) <*> (v, x) = (u `mappend` v, f x)

Monoid can be similarly used to make ((,) e) an instance of Monad as well; this is known as the writer monad. As we’ve already seen, Writer and WriterT are a newtype wrapper and transformer for this monad, respectively.

Monoid also plays a key role in the Foldable type class (see section Foldable).

9.4 Other monoidal classes: Alternative, MonadPlus, ArrowPlus

The Alternative type class (haddock) is for Applicative functors which also have a monoid structure:

  class Applicative f => Alternative f where
    empty :: f a
    (<|>) :: f a -> f a -> f a

Of course, instances of Alternative should satisfy the monoid laws.

Likewise, MonadPlus (haddock) is for Monads with a monoid structure:

  class Monad m => MonadPlus m where
    mzero :: m a
    mplus :: m a -> m a -> m a

The MonadPlus documentation states that it is intended to model monads which also support “choice and failure”; in addition to the monoid laws, instances of MonadPlus are expected to satisfy

  mzero >>= f  =  mzero
  v >> mzero   =  mzero

which explains the sense in which mzero denotes failure. Since mzero should be the identity for mplus, the computation m1 `mplus` m2 succeeds (evaluates to something other than mzero) if either m1 or m2 does; so mplus represents choice. The guard function can also be used with instances of MonadPlus; it requires a condition to be satisfied and fails (using mzero) if it is not. A simple example of a MonadPlus instance is [], which is exactly the same as the Monoid instance for []: the empty list represents failure, and list concatenation represents choice. In general, however, a MonadPlus instance for a type need not be the same as its Monoid instance; Maybe is an example of such a type.
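A quick sketch of failure and choice in action (the names firstHit and pythagorean are mine):

```haskell
import Control.Monad (guard, mplus)

-- mplus as choice for Maybe: the first success wins.
firstHit :: Maybe Int
firstHit = Nothing `mplus` Just 4 `mplus` Just 5

-- guard as failure for []: a branch dies (mzero) unless the
-- condition holds, so only Pythagorean triples survive.
pythagorean :: [(Int, Int, Int)]
pythagorean = do
  a <- [1 .. 20]
  b <- [a .. 20]
  c <- [b .. 20]
  guard (a * a + b * b == c * c)
  return (a, b, c)
```

Here firstHit is Just 4, and take 2 pythagorean gives [(3,4,5),(5,12,13)].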
A great introduction to the MonadPlus type class, with interesting examples of its use, is Doug Auclair’s MonadPlus: What a Super Monad! in the Monad.Reader issue 11.

There used to be a type class called MonadZero containing only mzero, representing monads with failure. The do-notation requires some notion of failure to deal with failing pattern matches. Unfortunately, MonadZero was scrapped in favor of adding the fail method to the Monad class. If we are lucky, someday MonadZero will be restored, and fail will be banished to the bit bucket where it belongs (see MonadPlus reform proposal). The idea is that any do-block which uses pattern matching (and hence may fail) would require a MonadZero constraint; otherwise, only a Monad constraint would be required.

Finally, ArrowZero and ArrowPlus (haddock) represent Arrows (see below) with a monoid structure:

  class Arrow arr => ArrowZero arr where
    zeroArrow :: b `arr` c

  class ArrowZero arr => ArrowPlus arr where
    (<+>) :: (b `arr` c) -> (b `arr` c) -> (b `arr` c)

9.5 Further reading

Monoids have gotten a fair bit of attention recently, ultimately due to a blog post by Brian Hurt, in which he complained about the fact that the names of many Haskell type classes (Monoid in particular) are taken from abstract mathematics. This resulted in a long haskell-cafe thread arguing the point and discussing monoids in general.

∗ May its name live forever.

However, this was quickly followed by several blog posts about Monoid∗. First, Dan Piponi wrote a great introductory post, Haskell Monoids and their Uses. This was quickly followed by Heinrich Apfelmus’s Monoids and Finger Trees, an accessible exposition of Hinze and Paterson’s classic paper on 2-3 finger trees, which makes very clever use of Monoid to implement an elegant and generic data structure.
Dan Piponi then wrote two fascinating articles about using Monoids (and finger trees): Fast Incremental Regular Expressions and Beyond Regular Expressions.

In a similar vein, David Place’s article on improving Data.Map in order to compute incremental folds (see the Monad Reader issue 11) is also a good example of using Monoid to generalize a data structure.

Some other interesting examples of Monoid use include building elegant list sorting combinators, collecting unstructured information, combining probability distributions, and a brilliant series of posts by Chung-Chieh Shan and Dylan Thurston using Monoids to elegantly solve a difficult combinatorial puzzle (followed by part 2, part 3, part 4).

As unlikely as it sounds, monads can actually be viewed as a sort of monoid, with join playing the role of the binary operation and return the role of the identity; see Dan Piponi’s blog post.

10 Foldable

The Foldable class, defined in the Data.Foldable module (haddock), abstracts over containers which can be “folded” into a summary value. This allows such folding operations to be written in a container-agnostic way.

10.1 Definition

The definition of the Foldable type class is:

  class Foldable t where
    fold    :: Monoid m => t m -> m
    foldMap :: Monoid m => (a -> m) -> t a -> m

    foldr   :: (a -> b -> b) -> b -> t a -> b
    foldl   :: (a -> b -> a) -> a -> t b -> a
    foldr1  :: (a -> a -> a) -> t a -> a
    foldl1  :: (a -> a -> a) -> t a -> a

This may look complicated, but in fact, to make a Foldable instance you only need to implement one method: your choice of foldMap or foldr. All the other methods have default implementations in terms of these, and are presumably included in the class in case more efficient implementations can be provided.
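To see why one of foldMap or foldr suffices, here is how the default foldr can be recovered from foldMap, specialized to lists for concreteness (the function name is mine). Each element becomes a function b -> b, and the Endo monoid composes them:

```haskell
import Data.Monoid (Endo(..))
import Data.Foldable (foldMap)

foldrViaFoldMap :: (a -> b -> b) -> b -> [a] -> b
foldrViaFoldMap f z xs = appEndo (foldMap (Endo . f) xs) z
-- foldMap (Endo . f) [x1,x2,x3] = Endo (f x1 . f x2 . f x3),
-- so applying the result to z yields f x1 (f x2 (f x3 z)) -- exactly foldr.
```

For instance, foldrViaFoldMap (:) [] behaves as the identity on lists, and foldrViaFoldMap (+) 0 is sum.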
10.2 Instances and examples

The type of foldMap should make it clear what it is supposed to do: given a way to convert the data in a container into a Monoid (a function a -> m) and a container of a’s (t a), foldMap provides a way to iterate over the entire contents of the container, converting all the a’s to m’s and combining all the m’s with mappend. The following code shows two examples: a simple implementation of foldMap for lists, and a binary tree example provided by the Foldable documentation.

  instance Foldable [] where
    foldMap g = mconcat . map g

  data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)

  instance Foldable Tree where
    foldMap f Empty        = mempty
    foldMap f (Leaf x)     = f x
    foldMap f (Node l k r) = foldMap f l `mappend` f k `mappend` foldMap f r

The foldr function has a type similar to the foldr found in the Prelude, but more general, since the foldr in the Prelude works only on lists. The Foldable module also provides instances for Maybe and Array; additionally, many of the data structures found in the standard containers library (for example, Map, Set, Tree, and Sequence) provide their own Foldable instances.

1. What is the type of foldMap . foldMap? Or foldMap . foldMap . foldMap, etc.? What do they do?

10.3 Derived folds

Given an instance of Foldable, we can write generic, container-agnostic functions such as:

  -- Compute the size of any container.
  containerSize :: Foldable f => f a -> Int
  containerSize = getSum . foldMap (const (Sum 1))

  -- Compute a list of elements of a container satisfying a predicate.
  filterF :: Foldable f => (a -> Bool) -> f a -> [a]
  filterF p = foldMap (\a -> if p a then [a] else [])

  -- Get a list of all the Strings in a container which include the
  -- letter a.
  aStrings :: Foldable f => f String -> [String]
  aStrings = filterF (elem 'a')

The Foldable module also provides a large number of predefined folds, many of which are generalized versions of Prelude functions of the same name that only work on lists: concat, concatMap, and, or, any, all, sum, product, maximum(By), minimum(By), elem, notElem, and find. The important function toList is also provided, which turns any Foldable structure into a list of its elements in left-right order; it works by folding with the list monoid.

There are also generic functions that work with Applicative or Monad instances to generate some sort of computation from each element in a container, and then perform all the side effects from those computations, discarding the results: traverse_, sequenceA_, and others. The results must be discarded because the Foldable class is too weak to specify what to do with them: we cannot, in general, make an arbitrary Applicative or Monad instance into a Monoid, but we can make m () into a Monoid for any such m. If we do have an Applicative or Monad with a monoid structure—that is, an Alternative or a MonadPlus—then we can use the asum or msum functions, which can combine the results as well. Consult the Foldable documentation for more details on any of these functions.

Note that the Foldable operations always forget the structure of the container being folded. If we start with a container of type t a for some Foldable t, then t will never appear in the output type of any operations defined in the Foldable module. Many times this is exactly what we want, but sometimes we would like to be able to generically traverse a container while preserving its structure—and this is exactly what the Traversable class provides, which will be discussed in the next section.

1. Implement toList :: Foldable f => f a -> [a].

2.
Pick some of the following functions to implement: concat, concatMap, and, or, any, all, sum, product, maximum(By), minimum(By), elem, notElem, and find. Figure out how they generalize to Foldable and come up with elegant implementations using fold or foldMap along with appropriate Monoid instances.

10.4 Foldable actually isn't

The generic term "fold" is often used to refer to the more technical concept of catamorphism. Intuitively, given a way to summarize "one level of structure" (where recursive subterms have already been replaced with their summaries), a catamorphism can summarize an entire recursive structure.

It is important to realize that Foldable does not correspond to catamorphisms, but to something weaker. In particular, Foldable allows observing only the left-right order of elements within a structure, not the actual structure itself. Put another way, every use of Foldable can be expressed in terms of toList. For example, fold itself is equivalent to mconcat . toList.

This is sufficient for many tasks, but not all. For example, consider trying to compute the depth of a Tree: try as we might, there is no way to implement it using Foldable. However, it can be implemented as a catamorphism.

10.5 Further reading

The Foldable class had its genesis in McBride and Paterson’s paper introducing Applicative, although it has been fleshed out quite a bit from the form in the paper.

An interesting use of Foldable (as well as Traversable) can be found in Janis Voigtländer’s paper Bidirectionalization for free!.

11 Traversable

11.1 Definition

The Traversable type class, defined in the Data.Traversable module (haddock), is:

  class (Functor t, Foldable t) => Traversable t where
    traverse  :: Applicative f => (a -> f b) -> t a -> f (t b)
    sequenceA :: Applicative f => t (f a) -> f (t a)
    mapM      :: Monad m => (a -> m b) -> t a -> m (t b)
    sequence  :: Monad m => t (m a) -> m (t a)

As you can see, every Traversable is also a foldable functor.
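Before picking the class apart, a quick taste of traverse and sequenceA at the list instance (validate is my name for the example function):

```haskell
import Data.Traversable (traverse, sequenceA)

-- Effectful map: validate every element; a single failure (Nothing)
-- makes the whole traversal fail.
validate :: Int -> Maybe Int
validate x = if x > 0 then Just x else Nothing

allPositive, notAll :: Maybe [Int]
allPositive = traverse validate [1, 2, 3]    -- succeeds
notAll      = traverse validate [1, -2, 3]   -- fails as a whole

-- sequenceA commutes the two functors, turning a list of Maybes
-- into a Maybe of a list:
flipped :: Maybe [Int]
flipped = sequenceA [Just 1, Just 2]
```

Here allPositive is Just [1,2,3], notAll is Nothing, and flipped is Just [1,2].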
Like Foldable, there is a lot in this type class, but making instances is actually rather easy: one need only implement traverse or sequenceA; the other methods all have default implementations in terms of these functions. A good exercise is to figure out what the default implementations should be: given either traverse or sequenceA, how would you define the other three methods? (Hint for mapM: Control.Applicative exports the WrapMonad newtype, which makes any Monad into an Applicative. The sequence function can be implemented in terms of mapM.)

11.2 Intuition

The key method of the Traversable class, and the source of its unique power, is sequenceA. Consider its type:

  sequenceA :: Applicative f => t (f a) -> f (t a)

This answers the fundamental question: when can we commute two functors? For example, can we turn a tree of lists into a list of trees?

The ability to compose two monads depends crucially on this ability to commute functors. Intuitively, if we want to build a composed monad M a = m (n a) out of monads m and n, then to be able to implement join :: M (M a) -> M a, that is, join :: m (n (m (n a))) -> m (n a), we have to be able to commute the n past the m to get m (m (n (n a))), and then we can use the joins for m and n to produce something of type m (n a). See Mark Jones’s paper for more details.

Alternatively, looking at the type of traverse,

  traverse :: Applicative f => (a -> f b) -> t a -> f (t b)

leads us to view Traversable as a generalization of Functor. traverse is an "effectful fmap": it allows us to map over a structure of type t a, applying a function to every element of type a in order to produce a new structure of type t b; but along the way the function may have some effects (captured by the applicative functor f).

1. There are at least two natural ways to turn a tree of lists into a list of trees. What are they, and why?

2. Give a natural way to turn a list of trees into a tree of lists.

3. What is the type of traverse . traverse?
What does it do?

11.3 Instances and examples

What’s an example of a Traversable instance? The following code shows an example instance for the same Tree type used as an example in the previous Foldable section. It is instructive to compare this instance with a Functor instance for Tree, which is also shown.

  data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)

  instance Traversable Tree where
    traverse g Empty        = pure Empty
    traverse g (Leaf x)     = Leaf <$> g x
    traverse g (Node l x r) = Node <$> traverse g l <*> g x <*> traverse g r

  instance Functor Tree where
    fmap g Empty        = Empty
    fmap g (Leaf x)     = Leaf $ g x
    fmap g (Node l x r) = Node (fmap g l) (g x) (fmap g r)

It should be clear that the Traversable and Functor instances for Tree are almost identical; the only difference is that the Functor instance involves normal function application, whereas the applications in the Traversable instance take place within an Applicative context, using (<$>) and (<*>). In fact, this will be true for any type.

Any Traversable functor is also Foldable, and a Functor. We can see this not only from the class declaration, but by the fact that we can implement the methods of both classes given only the Traversable methods.

The standard libraries provide a number of Traversable instances, including instances for [], Maybe, Map, Tree, and Sequence. Notably, Set is not Traversable, although it is Foldable.

1. Implement fmap and foldMap using only the Traversable methods. (Note that the Traversable module provides these implementations as fmapDefault and foldMapDefault.)

11.4 Laws

Any instance of Traversable must satisfy the following two laws, where Identity is the identity functor (as defined in the Data.Functor.Identity module from the transformers package), and Compose wraps the composition of two functors (as defined in Data.Functor.Compose):

1. traverse Identity = Identity
2. traverse (Compose . fmap g . f) = Compose . fmap (traverse g) .
traverse f

The first law essentially says that traversals cannot make up arbitrary effects. The second law explains how doing two traversals in sequence can be collapsed to a single traversal.

Additionally, suppose eta is an "Applicative morphism", that is,

  eta :: forall a f g. (Applicative f, Applicative g) => f a -> g a

and eta preserves the Applicative operations: eta (pure x) = pure x and eta (x <*> y) = eta x <*> eta y. Then, by parametricity, any instance of Traversable satisfying the above two laws will also satisfy eta . traverse f = traverse (eta . f).

11.5 Further reading

The Traversable class also had its genesis in McBride and Paterson’s Applicative paper, and is described in more detail in Gibbons and Oliveira, The Essence of the Iterator Pattern, which also contains a wealth of references to related work.

Traversable forms a core component of Edward Kmett's lens library. Watching Edward's talk on the subject is a highly recommended way to gain better insight into Traversable, Foldable, Applicative, and many other things besides.

For references on the Traversable laws, see Russell O'Connor's mailing list post (and subsequent thread).

12 Category

Category is a relatively recent addition to the Haskell standard libraries. It generalizes the notion of function composition to general “morphisms”.

∗ GHC 7.6.1 changed its rules regarding types and type variables. Now, any operator at the type level is treated as a type constructor rather than a type variable; prior to GHC 7.6.1 it was possible to use (~>) instead of `arr`. For more information, see the discussion on the GHC-users mailing list. For a new approach to nice arrow notation that works with GHC 7.6.1, see this message and also this message from Edward Kmett, though for simplicity I haven't adopted it here.

The definition of the Category type class (from Control.Category—haddock) is shown below.
For ease of reading, note that I have used an infix type variable `arr`, in parallel with the infix function type constructor (->).∗ This syntax is not part of Haskell 2010. The second definition shown is the one used in the standard libraries. For the remainder of this document, I will use the infix type constructor `arr` for Category as well as Arrow.

  class Category arr where
    id  :: a `arr` a
    (.) :: (b `arr` c) -> (a `arr` b) -> (a `arr` c)

  -- The same thing, with a normal (prefix) type constructor
  class Category cat where
    id  :: cat a a
    (.) :: cat b c -> cat a b -> cat a c

Note that an instance of Category should be a type constructor which takes two type arguments, that is, something of kind * -> * -> *. It is instructive to imagine the type constructor variable cat replaced by the function constructor (->): indeed, in this case we recover precisely the familiar identity function id and function composition operator (.) defined in the standard Prelude.

Of course, the Category module provides exactly such an instance of Category for (->). But it also provides one other instance, shown below, which should be familiar from the previous discussion of the Monad laws. Kleisli m a b, as defined in the Control.Arrow module, is just a newtype wrapper around a -> m b.

  newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

  instance Monad m => Category (Kleisli m) where
    id = Kleisli return
    Kleisli g . Kleisli h = Kleisli (h >=> g)

The only law that Category instances should satisfy is that id and (.) should form a monoid—that is, id should be the identity of (.), and (.) should be associative.

Finally, the Category module exports two additional operators: (<<<), which is just a synonym for (.), and (>>>), which is (.) with its arguments reversed. (In previous versions of the libraries, these operators were defined as part of the Arrow class.)
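A small sketch of the Kleisli instance in action (halve and quarter are my names): two Maybe-producing functions chained with the Category operator (>>>).

```haskell
import Control.Arrow (Kleisli(..))
import Control.Category ((>>>))

-- An effectful function Int -> Maybe Int, wrapped as a Kleisli arrow.
halve :: Kleisli Maybe Int Int
halve = Kleisli (\n -> if even n then Just (n `div` 2) else Nothing)

-- (>>>) chains the effectful steps; a Nothing anywhere aborts the rest.
quarter :: Kleisli Maybe Int Int
quarter = halve >>> halve
```

Here runKleisli quarter 8 is Just 2, while runKleisli quarter 6 is Nothing (6 halves to 3, which is odd, so the second step fails).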
12.1 Further reading

The name Category is a bit misleading, since the Category class cannot represent arbitrary categories, but only categories whose objects are objects of Hask, the category of Haskell types. For a more general treatment of categories within Haskell, see the category-extras package. For more about category theory in general, see the excellent Haskell wikibook page, Steve Awodey’s new book, Benjamin Pierce’s Basic category theory for computer scientists, or Barr and Wells’s category theory lecture notes. Benjamin Russell’s blog post is another good source of motivation and category theory links. You certainly don’t need to know any category theory to be a successful and productive Haskell programmer, but it does lend itself to much deeper appreciation of Haskell’s underlying theory.

13 Arrow

The Arrow class represents another abstraction of computation, in a similar vein to Monad and Applicative. However, unlike Monad and Applicative, whose types only reflect their output, the type of an Arrow computation reflects both its input and output. Arrows generalize functions: if arr is an instance of Arrow, a value of type b `arr` c can be thought of as a computation which takes values of type b as input, and produces values of type c as output. In the (->) instance of Arrow this is just a pure function; in general, however, an arrow may represent some sort of “effectful” computation.

13.1 Definition

The definition of the Arrow type class, from Control.Arrow (haddock), is:

  class Category arr => Arrow arr where
    arr    :: (b -> c) -> (b `arr` c)
    first  :: (b `arr` c) -> ((b, d) `arr` (c, d))
    second :: (b `arr` c) -> ((d, b) `arr` (d, c))
    (***)  :: (b `arr` c) -> (b' `arr` c') -> ((b, b') `arr` (c, c'))
    (&&&)  :: (b `arr` c) -> (b `arr` c') -> (b `arr` (c, c'))

∗ In versions of the base package prior to version 4, there is no Category class, and the Arrow class includes the arrow composition operator (>>>).
It also includes pure as a synonym for arr, but this was removed since it conflicts with the pure from Applicative.

The first thing to note is the Category class constraint, which means that we get identity arrows and arrow composition for free: given two arrows g :: b `arr` c and h :: c `arr` d, we can form their composition g >>> h :: b `arr` d∗.

As should be a familiar pattern by now, the only methods which must be defined when writing a new instance of Arrow are arr and first; the other methods have default definitions in terms of these, but are included in the Arrow class so that they can be overridden with more efficient implementations if desired.

13.2 Intuition

Let’s look at each of the arrow methods in turn. Ross Paterson’s web page on arrows has nice diagrams which can help build intuition.

• The arr function takes any function b -> c and turns it into a generalized arrow b `arr` c. The arr method justifies the claim that arrows generalize functions, since it says that we can treat any function as an arrow. It is intended that the arrow arr g is “pure” in the sense that it only computes g and has no “effects” (whatever that might mean for any particular arrow type).

• The first method turns any arrow from b to c into an arrow from (b,d) to (c,d). The idea is that first g uses g to process the first element of a tuple, and lets the second element pass through unchanged. For the function instance of Arrow, of course, first g (x,y) = (g x, y).

• The second function is similar to first, but with the elements of the tuples swapped. Indeed, it can be defined in terms of first using an auxiliary function swap, defined by swap (x,y) = (y,x).

• The (***) operator is “parallel composition” of arrows: it takes two arrows and makes them into one arrow on tuples, which has the behavior of the first arrow on the first element of a tuple, and the behavior of the second arrow on the second element. The mnemonic is that g *** h is the product (hence *) of g and h.
For the function instance of Arrow, we define (g *** h) (x,y) = (g x, h y). The default implementation of (***) is in terms of first, second, and sequential arrow composition (>>>). The reader may also wish to think about how to implement first and second in terms of (***).

• The (&&&) operator is "fanout composition" of arrows: it takes two arrows g and h and makes them into a new arrow g &&& h which supplies its input as the input to both g and h, returning their results as a tuple. The mnemonic is that g &&& h performs both g and h (hence &) on its input. For functions, we define (g &&& h) x = (g x, h x).

13.3 Instances

The Arrow library itself only provides two Arrow instances, both of which we have already seen: (->), the normal function constructor, and Kleisli m, which makes functions of type a -> m b into Arrows for any Monad m. These instances are:

instance Arrow (->) where
  arr g = g
  first g (x,y) = (g x, y)

newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }

instance Monad m => Arrow (Kleisli m) where
  arr f = Kleisli (return . f)
  first (Kleisli f) = Kleisli (\ ~(b,d) -> do
    c <- f b
    return (c,d))

13.4 Laws

∗ See John Hughes: Generalising monads to arrows; Sam Lindley, Philip Wadler, Jeremy Yallop: The arrow calculus; Ross Paterson: Programming with Arrows.

There are quite a few laws that instances of Arrow should satisfy ∗:

arr id = id
arr (h . g) = arr g >>> arr h
first (arr g) = arr (g *** id)
first (g >>> h) = first g >>> first h
first g >>> arr (id *** h) = arr (id *** h) >>> first g
first g >>> arr fst = arr fst >>> g
first (first g) >>> arr assoc = arr assoc >>> first g

assoc ((x,y),z) = (x,(y,z))

Note that this version of the laws is slightly different than the laws given in the first two above references, since several of the laws have now been subsumed by the Category laws (in particular, the requirements that id is the identity arrow and that (>>>) is associative).
The laws shown here follow those in Paterson's Programming with Arrows, which uses the Category class.

∗ Unless category-theory-induced insomnolence is your cup of tea.

The reader is advised not to lose too much sleep over the Arrow laws ∗, since it is not essential to understand them in order to program with arrows. There are also laws that ArrowChoice, ArrowApply, and ArrowLoop instances should satisfy; the interested reader should consult Paterson: Programming with Arrows.

13.5 ArrowChoice

Computations built using the Arrow class, like those built using the Applicative class, are rather inflexible: the structure of the computation is fixed at the outset, and there is no ability to choose between alternate execution paths based on intermediate results. The ArrowChoice class provides exactly such an ability:

class Arrow arr => ArrowChoice arr where
  left  :: (b `arr` c) -> (Either b d `arr` Either c d)
  right :: (b `arr` c) -> (Either d b `arr` Either d c)
  (+++) :: (b `arr` c) -> (b' `arr` c') -> (Either b b' `arr` Either c c')
  (|||) :: (b `arr` d) -> (c `arr` d) -> (Either b c `arr` d)

A comparison of ArrowChoice to Arrow will reveal a striking parallel between left, right, (+++), (|||) and first, second, (***), (&&&), respectively. Indeed, they are dual: first, second, (***), and (&&&) all operate on product types (tuples), and left, right, (+++), and (|||) are the corresponding operations on sum types. In general, these operations create arrows whose inputs are tagged with Left or Right, and can choose how to act based on these tags.

• If g is an arrow from b to c, then left g is an arrow from Either b d to Either c d. On inputs tagged with Left, the left g arrow has the behavior of g; on inputs tagged with Right, it behaves as the identity.

• The right function, of course, is the mirror image of left. The arrow right g has the behavior of g on inputs tagged with Right.
• The (+++) operator performs "multiplexing": g +++ h behaves as g on inputs tagged with Left, and as h on inputs tagged with Right. The tags are preserved. The (+++) operator is the sum (hence +) of two arrows, just as (***) is the product.

• The (|||) operator is "merge" or "fanin": the arrow g ||| h behaves as g on inputs tagged with Left, and h on inputs tagged with Right, but the tags are discarded (hence, g and h must have the same output type). The mnemonic is that g ||| h performs either g or h on its input.

The ArrowChoice class allows computations to choose among a finite number of execution paths, based on intermediate results. The possible execution paths must be known in advance, and explicitly assembled with (+++) or (|||). However, sometimes more flexibility is needed: we would like to be able to compute an arrow from intermediate results, and use this computed arrow to continue the computation. This is the power given to us by ArrowApply.

13.6 ArrowApply

The ArrowApply type class is:

class Arrow arr => ArrowApply arr where
  app :: (b `arr` c, b) `arr` c

If we have computed an arrow as the output of some previous computation, then app allows us to apply that arrow to an input, producing its output as the output of app. As an exercise, the reader may wish to use app to implement an alternative "curried" version, app2 :: b `arr` ((b `arr` c) `arr` c).

This notion of being able to compute a new computation may sound familiar: this is exactly what the monadic bind operator (>>=) does. It should not particularly come as a surprise that ArrowApply and Monad are exactly equivalent in expressive power. In particular, Kleisli m can be made an instance of ArrowApply, and any instance of ArrowApply can be made a Monad (via the newtype wrapper ArrowMonad).
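For the ordinary function arrow (->), app is simply uncurried function application, which makes a good first intuition before the Kleisli case. A minimal sketch (the name `demo` is mine, not from the text):

```haskell
import Control.Arrow (ArrowApply (app))

-- For (->), app (f, x) applies the "computed" arrow f to the input x,
-- so app ((+ 1), 41) evaluates to 42.
demo :: Int
demo = app ((+ 1), 41)

main :: IO ()
main = print demo
```

Here the arrow being applied, (+ 1), could just as well have been produced by an earlier arrow in the pipeline, which is exactly the extra power ArrowApply adds over ArrowChoice.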
As an exercise, the reader may wish to try implementing these instances:

instance Monad m => ArrowApply (Kleisli m) where
  app = -- exercise

newtype ArrowApply a => ArrowMonad a b = ArrowMonad (a () b)

instance ArrowApply a => Monad (ArrowMonad a) where
  return = -- exercise
  (ArrowMonad a) >>= k = -- exercise

13.7 ArrowLoop

The ArrowLoop type class is:

class Arrow a => ArrowLoop a where
  loop :: a (b, d) (c, d) -> a b c

trace :: ((b,d) -> (c,d)) -> b -> c
trace f b = let (c,d) = f (b,d) in c

It describes arrows that can use recursion to compute results, and is used to desugar the rec construct in arrow notation (described below). Taken by itself, the type of the loop method does not seem to tell us much. Its intention, however, is a generalization of the trace function which is also shown. The d component of the first arrow's output is fed back in as its own input. In other words, the arrow loop g is obtained by recursively "fixing" the second component of the input to g.

It can be a bit difficult to grok what the trace function is doing. How can d appear on the left and right sides of the let? Well, this is Haskell's laziness at work. There is not space here for a full explanation; the interested reader is encouraged to study the standard fix function, and to read Paterson's arrow tutorial.

13.8 Arrow notation

Programming directly with the arrow combinators can be painful, especially when writing complex computations which need to retain simultaneous reference to a number of intermediate results. With nothing but the arrow combinators, such intermediate results must be kept in nested tuples, and it is up to the programmer to remember which intermediate results are in which components, and to swap, reassociate, and generally mangle tuples as necessary. This problem is solved by the special arrow notation supported by GHC, similar to do notation for monads, that allows names to be assigned to intermediate results while building up arrow computations.
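As a first taste of the notation, here is a small example of my own (not from the text) together with its rough desugaring into the combinators from the earlier sections:

```haskell
{-# LANGUAGE Arrows #-}
import Control.Arrow

-- Feed the same input to two arrows and add their outputs.
addA :: Arrow a => a b Int -> a b Int -> a b Int
addA f g = proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

-- GHC desugars the proc block above into plain combinators,
-- roughly like this:
addA' :: Arrow a => a b Int -> a b Int -> a b Int
addA' f g = f &&& g >>> arr (\(y, z) -> y + z)

main :: IO ()
main = print (addA (* 2) (+ 10) (5 :: Int))  -- prints 25, as does addA'
```

With the (->) instance, both versions of addA applied to (* 2), (+ 10), and 5 evaluate to 25; the notation only buys readability, not expressive power.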
An example arrow implemented using arrow notation, taken from Paterson, is:

class ArrowLoop arr => ArrowCircuit arr where
  delay :: b -> (b `arr` b)

counter :: ArrowCircuit arr => Bool `arr` Int
counter = proc reset -> do
  rec output <- idA     -< if reset then 0 else next
      next   <- delay 0 -< output + 1
  idA -< output

This arrow is intended to represent a recursively defined counter circuit with a reset line.

There is not space here for a full explanation of arrow notation; the interested reader should consult Paterson's paper introducing the notation, or his later tutorial which presents a simplified version.

13.9 Further reading

An excellent starting place for the student of arrows is the arrows web page, which contains an introduction and many references. Some key papers on arrows include Hughes's original paper introducing arrows, Generalising monads to arrows, and Paterson's paper on arrow notation. Both Hughes and Paterson later wrote accessible tutorials intended for a broader audience: Paterson: Programming with Arrows and Hughes: Programming with Arrows.

Although Hughes's goal in defining the Arrow class was to generalize Monads, and it has been said that Arrow lies "between Applicative and Monad" in power, they are not directly comparable. The precise relationship remained in some confusion until analyzed by Lindley, Wadler, and Yallop, who also invented a new calculus of arrows, based on the lambda calculus, which considerably simplifies the presentation of the arrow laws (see The arrow calculus). Some examples of Arrows include Yampa, the Haskell XML Toolkit, and the functional GUI library Grapefruit. Some extensions to arrows have been explored; for example, the BiArrows of Alimarine et al., for two-way instead of one-way computation. The Haskell wiki has links to many additional research papers relating to Arrows.

14 Comonad

The final type class we will examine is Comonad.
The Comonad class is the categorical dual of Monad; that is, Comonad is like Monad but with all the function arrows flipped. It is not actually in the standard Haskell libraries, but it has seen some interesting uses recently, so we include it here for completeness.

14.1 Definition

The Comonad type class, defined in the Control.Comonad module of the comonad library, is:

class Functor w => Comonad w where
  extract :: w a -> a

  duplicate :: w a -> w (w a)
  duplicate = extend id

  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

As you can see, extract is the dual of return, duplicate is the dual of join, and extend is the dual of (=<<). The definition of Comonad is a bit redundant, giving the programmer the choice of whether to implement extend or duplicate; the other operation then has a default implementation.

A prototypical example of a Comonad instance is:

-- Infinite lazy streams
data Stream a = Cons a (Stream a)

-- 'duplicate' is like the list function 'tails'
-- 'extend' computes a new Stream from an old, where the element
-- at position n is computed as a function of everything from
-- position n onwards in the old Stream
instance Comonad Stream where
  extract (Cons x _) = x
  duplicate s@(Cons x xs) = Cons s (duplicate xs)
  extend g s@(Cons x xs)  = Cons (g s) (extend g xs)
                       -- = fmap g (duplicate s)

14.2 Further reading

Dan Piponi explains in a blog post what cellular automata have to do with comonads. In another blog post, Conal Elliott has examined a comonadic formulation of functional reactive programming. Sterling Clover's blog post Comonads in everyday life explains the relationship between comonads and zippers, and how comonads can be used to design a menu system for a web site.
Uustalu and Vene have a number of papers exploring ideas related to comonads and functional programming.

15 Acknowledgements

A special thanks to all of those who taught me about standard Haskell type classes and helped me develop good intuition for them, particularly Jules Bean (quicksilver), Derek Elkins (ddarius), Conal Elliott (conal), Cale Gibbard (Cale), David House, Dan Piponi (sigfpe), and Kevin Reid (kpreid).

I also thank the many people who provided a mountain of helpful feedback and suggestions on a first draft of the Typeclassopedia: David Amos, Kevin Ballard, Reid Barton, Doug Beardsley, Joachim Breitner, Andrew Cave, David Christiansen, Gregory Collins, Mark Jason Dominus, Conal Elliott, Yitz Gale, George Giorgidze, Steven Grady, Travis Hartwell, Steve Hicks, Philip Hölzenspies, Edward Kmett, Eric Kow, Serge Le Huitouze, Felipe Lessa, Stefan Ljungstrand, Eric Macaulay, Rob MacAulay, Simon Meier, Eric Mertens, Tim Newsham, Russell O'Connor, Conrad Parker, Walt Rorie-Baety, Colin Ross, Tom Schrijvers, Aditya Siram, C. Smith, Martijn van Steenbergen, Joe Thornber, Jared Updike, Rob Vollmert, Andrew Wagner, Louis Wasserman, and Ashley Yakeley, as well as a few only known to me by their IRC nicks: b_jonas, maltem, tehgeekmeister, and ziman. I have undoubtedly omitted a few inadvertently, which in no way diminishes my gratitude.

Finally, I would like to thank Wouter Swierstra for his fantastic work editing the Monad.Reader, and my wife Joyia for her patience during the process of writing the Typeclassopedia.

16 About the author

Brent Yorgey (blog, homepage) is (as of November 2011) a fourth-year Ph.D. student in the programming languages group at the University of Pennsylvania. He enjoys teaching, creating EDSLs, playing Bach fugues, musing upon category theory, and cooking tasty lambda-treats for the denizens of #haskell.

17 Colophon

The Typeclassopedia was written by Brent Yorgey and initially published in March 2009.
Painstakingly converted to wiki syntax by User:Geheimdienst in November 2011, after asking Brent's permission. If something like this tex to wiki syntax conversion ever needs to be done again, here are some vim commands that helped:

• %s/\\section{\([^}]*\)}/=\1=/gc
• %s/\\subsection{\([^}]*\)}/==\1==/gc
• %s/^ *\\item /\r* /gc
• %s/---/—/gc
• %s/\$\([^$]*\)\$/<math>\1\\ <\/math>/gc
  Appending "\ " forces images to be rendered. Otherwise, Mediawiki would go back and forth between one font for short <math> tags, and another more Tex-like font for longer tags (containing more than a few characters).
• %s/|\([^|]*\)|/<code>\1<\/code>/gc
• %s/\\dots/.../gc
• %s/^\\label{.*$//gc
• %s/\\emph{\([^}]*\)}/''\1''/gc
• %s/\\term{\([^}]*\)}/''\1''/gc

The biggest issue was taking the academic-paper-style citations and turning them into hyperlinks with an appropriate title and an appropriate target. In most cases there was an obvious thing to do (e.g. online PDFs of the cited papers or Citeseer entries). Sometimes, however, it's less clear and you might want to check the original Typeclassopedia PDF with the original bibliography file.

To get all the citations into the main text, I first tried processing the source with Tex or Lyx. This didn't work due to missing unfindable packages, syntax errors, and my general ineptitude with Tex.

I then went for the next best solution, which seemed to be extracting all instances of "\cite{something}" from the source and in that order pulling the referenced entries from the .bib file. This way you can go through the source file and sorted-references file in parallel, copying over what you need, without searching back and forth in the .bib file. I used:

• egrep -o "\cite\{[^\}]*\}" ~/typeclassopedia.lhs | cut -c 6- | tr "," "\n" | tr -d "}" > /tmp/citations
• for i in $(cat /tmp/citations); do grep -A99 "$i" ~/typeclassopedia.bib|egrep -B99 '^\}$' -m1 ; done > ~/typeclasso-refs-sorted
Applying Risk Analysis To Play-Balance RPGs

How often do players and game magazines compliment a recently released game by saying that it's "well balanced"? How often do they say this about products that have been out for years? In the real-time strategy (RTS) and role-playing game (RPG) genres, the answer to the first question is seldom, if ever. In the case of massively multiplayer online role-playing games (MMORPGs), the answer to the second question is not as often as they should.

Good game balance is, in many ways, the holy grail of game design. Companies invest significant time and resources in attempts to balance their games so that they're neither too hard nor too easy for players. These efforts can be a significant drain on a company's resources, since designers and engineers working to solve balancing problems aren't adding functionality. Instead, they're spending valuable time reviewing the same equations or lines of code in an attempt to determine why, for instance, one character class consistently outperforms another.

Unbalanced games hurt development companies in other ways, too. An unbalanced game frustrates players and generates negative publicity and reviews. Both of these situations negatively impact game sales and/or subscriber retention levels, which in turn decreases the cash flow to the publisher and developer, and restricts the capital and resources available to those firms. Without those resources, the ability to improve the functionality and features of existing games is limited. Additionally, the funding available for the development of new games is decreased.

Developing AAA game titles is a multi-million dollar expense and the costs continue to rise. On average it costs three million dollars to release a title, and MMORPG development costs easily exceed ten million dollars. This is all upfront cost and does not include the post-release expenses incurred as designers and engineers attempt to remove bugs and balance their games.
In the case of some games, the post-release expenses can be quite significant. Much of the post-release cost stems from ineffective balancing during the initial design, alpha and beta phases. Fortunately, the proper application of risk analysis techniques can help you in two ways. It can reduce development time, and therefore costs, and also increase sales and subscriber levels due to positive word of mouth.

Probabilistic and Risk Analysis

At the most basic level, an RPG is simply a large collection of numbers and equations. As such, an entire game could be developed using only a spreadsheet program, such as Microsoft Excel, and a probability plugin, like Palisade's @Risk. @Risk is a commercial add-in designed to work with Microsoft Excel. The program allows a designer to exchange uncertain and variable values for @Risk functions. By doing so, an entire spectrum of results can be observed instead of a simple average value. The use of @Risk and Microsoft Excel allows one to model and analyze critical game systems.

Of course, a game created with only a spreadsheet and probability package would lack the meaning, context, and emotion that an RPG brings to players. However, such a game contains the same fundamental systems that allow players to advance and evolve within online games, with the benefit that meaning, context and emotion are stripped from the game. By focusing just on the underlying combat algorithms, systems can be developed and modeled rapidly and simulated thousands of times in order to determine the likely outcome of a situation. By analyzing these outcomes, truly balanced games can be created.

Risk analysis techniques work the same irrespective of the industry they're applied to, and the desired results are even quite similar. Just as the petroleum industry might try to predict future utilization of fixed assets, a game developer might attempt to predict future results of a given game situation.
In the petroleum industry, data models supply executives with information so they may make cost/benefit decisions regarding the assets. Similarly, a combat model for a game helps a designer predict the outcome and see if adjustments need to be made to races, classes, and groups such that balance is achieved. In both cases, risk analysis techniques decrease uncertainties and allow the maximum potential to be reached.

While this article focuses primarily on RPG gaming systems, the application of risk analysis techniques can be used to play-balance games within most other genres. Real-time strategy games are excellent candidates in particular, due to the variety of combat scenarios that need to be tested. Risk analysis modeling is not applicable to relatively simple games such as platformers and side-scrollers, since the systems that players interact with in these games are generally not worth modeling; they can be balanced by the designer through simple trial and error.

Origins Of This Article

(Editor's Note: This section amended at author's request on 6/16/03.)

The origins of this article began after making observations of Asheron's Call 2 (AC2) during its beta and retail stages. While reviewing the game's pseudo-class-based system and its associated skills, inconsistencies were observed in the balance of certain game systems. Through correspondence with AC2's program manager, Matthew Ford, it was discovered that while AC2's developers tuned game balance by using numerical analysis techniques common to MMORPG development, neither AC2 nor any other MMORPG in Ford's knowledge used the deep risk analysis methods described in this article. I found this interesting; it explained the degree of specialization, class, and realm/faction imbalance found in many retail products. In my correspondence with Ford, we discussed the concepts behind risk analysis and its practical application for MMORPGs.
We also examined the predictive benefits gained by modeling and testing MMORPG systems. Ford agreed that any MMORPG would find this risk analysis a "valuable companion to traditional mathematical tuning methods"; he particularly saw it as a well-founded way to catch unexpected consequences created by seemingly unrelated changes to game balance values. Ford also agreed that this method could be a good way to rapidly iterate experimental game dynamics in the early stages of a game's design, as well as refine game balance in the later stages of development.

Significant Game Systems

When designing an RPG, numerous systems need to be developed and balanced. Many times what appears to be ideal on paper is found unacceptable when actually applied in game, as small differences in skills, spells, or other abilities become pronounced in the course of using them thousands of times. Some of the systems that need to be balanced in RPGs include:

• Player Character Races
• Player Character Classes
• Player vs. Environment Conflict (PvE)
• Player vs. Player Conflict (PvP)
• Player Advancement and/or Skill Improvement Rates
• Crafting Cost and the Time Required to Increase Skills
• Resource Limitations

This list is just a fraction of the systems that need to be designed and balanced in a successful RPG. And to make things even more complicated, someone trying to balance one game system may unwittingly unbalance another, since game systems are not always discrete elements. Most systems interrelate, and adjustments made to basic areas like player races and classes have the potential to affect every facet of a game.

Developing a Model

In order to apply risk-analysis techniques to game balancing, a base-case scenario is required. In the case of a combat system, a base case represents a typical player character of predetermined level/class/race/etc. For an existing game, care should be taken to gather data so that an average player character can be simulated.
If balancing is attempted prior to the stage where the game is playable, then designers should use both personal experience and future expectations to develop a base case that accurately reflects what they want to see in the completed game.

Qualification of Input Parameters

Once the base case is determined, all of the constants, variables, and equations contained by the system need to be quantified:

• Constants. In an RPG, constants represent unchanging values in the models. Examples of constants might be character level, NPC spawn timers, and initial racial statistics. While constants will be adjusted as games are being balanced, no probability distributions will be applied to them.

• Variables. These represent in-game values that can differ between players. They include the allocation of skill points, duplicate instances of the same NPC monster, and craft quality distributions. Variables are accounted for by using probability distribution functions. Normal, log normal, Chi squared, and many other distributions can be used to represent random elements in a game.

• Equations. These are the calculations that determine the success or failure rate of an action, the mathematical relationship between shield skill and blocking percentage, or the effect of primary statistics on a to hit or damage roll.

When designing models, a caveat exists: a model is only as good as the data fed into it. When inaccurate data or equations are placed into a model, the results generated by that model will be inaccurate and irrelevant. "Garbage in, garbage out" is a frequent adage. It is therefore important to understand the range of conditions available in the game system, as well as to have a practical understanding of player tendencies and minimum/maximum potential.
If a model was being developed to tweak the success rate between a basic melee player character and a basic melee monster, it might require the following for each:

Model Requirements:
• Primary Statistics
• Health Totals
• Mana Totals
• Armor Type
• Armor Level
• Armor Absorption
• Weapon Type
• Weapon Damage
• Offensive/Defensive Bonuses
• Offensive/Defensive Penalties
• Evade/Parry/Block/To Hit percentages

The above represent only a partial listing. If systems within a game are affected by features such as the time of day, or other outside factors, then the model will require inputs for those elements. Simple games require fewer inputs in order to generate meaningful results. The converse is also true: complex games require more effort and control when developing a model.

In order to create more accurate models, most of the above requirements should be inserted into the model in the form of variables. The use of variables, as opposed to constants, allows for a range of conditions to be observed. If, for example, players can vary their primary statistics between values X and Y, then the model should account for that. By factoring variations into a model, designers will not be limited to results provided by constant average values.

Once all the constants, variables and equations have been quantified and input into the spreadsheet, it is time to run the simulation. The number of trials performed is arbitrary and up to the modeler. The more trials conducted, the more even the distribution, and therefore the more accurate the results. However, especially in the case of exceptionally complex models, conducting trials takes time. In general, a simulation containing five thousand iterations is adequate. This limits the effect of any given iteration to only 0.02% (1/5,000) of the total. If unaccountable outliers are observed, the number of simulations performed can be increased in order to smooth the curves.
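The workflow just described (quantify the inputs, run a few thousand trials, then look at the distribution of outcomes) can be sketched without any particular spreadsheet or plugin. In the toy simulation below, every number (hit chances, damage dice, hit point totals) and the little random-number generator are invented stand-ins for real game equations, not values from @Risk or from any actual game:

```haskell
-- One pseudo-random step (a simple linear congruential generator,
-- used here only so the sketch has no external dependencies).
lcg :: Int -> Int
lcg s = (1103515245 * s + 12345) `mod` 2147483648

-- A die roll in the range [1, n], returning the new generator state.
roll :: Int -> Int -> (Int, Int)
roll n s = let s' = lcg s in (s' `mod` n + 1, s')

-- One complete fight: both sides trade blows until someone drops.
-- The 70%/60% hit chances and the d10/d8 damage dice are made up.
fight :: Int -> Int -> Int -> Bool   -- pcHp -> npcHp -> seed -> did the PC win?
fight pcHp npcHp s
  | npcHp <= 0 = True
  | pcHp  <= 0 = False
  | otherwise  =
      let (pcHit,  s1) = roll 100 s
          (pcDmg,  s2) = roll 10 s1
          (npcHit, s3) = roll 100 s2
          (npcDmg, s4) = roll 8 s3
          npcHp' = if pcHit  <= 70 then npcHp - pcDmg  else npcHp
          pcHp'  = if npcHit <= 60 then pcHp  - npcDmg else pcHp
      in fight pcHp' npcHp' s4

-- Run many independent trials and estimate the PC's chance of victory.
winRate :: Int -> Double
winRate trials =
  let wins = length [ () | seed <- [1 .. trials], fight 50 50 seed ]
  in fromIntegral wins / fromIntegral trials

main :: IO ()
main = print (winRate 5000)
```

A fuller version would also record, per round, how many trials had already ended in a death; tabulating those counts is what yields the cumulative probability curves the article goes on to describe.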
Comparing Results to Design Objectives and "What If" Scenarios

Once a simulation is complete, the results can be analyzed. One of the primary benefits of risk analysis software is the ability to observe cumulative probability graphs for specified outputs. Cumulative probability graphs illustrate the percentage chance that an outcome Y will occur at least X percent of the time. In the case of combat between a basic player character and a basic NPC monster, designers can observe the hit point variations each round for the duration of combat. Figure 1 shows a sample cumulative probability distribution for melee hit points.

Figure 1: Sample Cumulative Probability Distribution for Melee Hit Points.

Tabulating the chance of death each round allows designers to observe the likelihood of a combatant dying on a round-by-round basis (see Figure 2). Naturally, subtracting the chance of dying from the value 1.0 produces the chance of survival (e.g., 1.0 - .6 = .4, or a 40% chance of survival). In a fight to the death (which most game battles are), survival of the player character indicates that the enemy was dispatched first. In the case of PC vs. NPC combat, plotting the chance of NPC death and one (1) minus the chance of PC death displays the likelihood of victory or loss each round. The point where these two lines intersect is the expected outcome of the fight. If, on a plot containing the chance of NPC death and one minus the chance of player character death, the lines intersect at sixty percent (60%) in round twenty-five (25), then the player character will have a sixty percent (60%) chance of victory in any single combat against that NPC. This intersection point can be compared to the desired outcome of the combat simulation.
Desired outcomes might be:

• A 75% chance of beating an equal level monster using no special skills or styles
• A 95% chance of beating an equal level monster using optimum level skills and abilities
• An average of 60 hours spent crafting before a trade skill reaches its maximum
• A 40% chance that one class will beat another class using optimum skills/styles

If the desired outcome and the simulation results are equal, the system is balanced and a designer can move on to the next simulation. If the simulation result and the desired outcome are unequal, then adjustments need to be made to some part of the system.

Figure 2: PC Melee vs. NPC Monster Expected Combat Outcome.

When attempting to balance a combat system in this fashion, the process is reminiscent of supply-and-demand graphs used by economists. On a supply-and-demand graph, certain factors shift the curves in various directions. The same holds true for the victory and defeat curves. By varying constants or equations, such as weapon damage and "to hit" rolls, it is possible to horizontally translate one or both of the curves. By adjusting inputs until the point of intersection occurs at the desired outcome, the game system can be balanced.

The benefit of an effective model is that it not only lets you balance specific classes and encounters, it also lets you explore limitless "what if" scenarios. For example, suppose that player dexterity affects the "to hit" roll, damage roll, and blocking chance. If a designer wants to increase player damage by increasing a class's dexterity, he has to worry about the effects on the "to hit" roll and blocking percent. If a designer wants to increase a race's base dexterity, he needs to consider the effects of this change on every other class in the game.
However, if a model was already developed, it would be a simple endeavor to increase the dexterity of the class or race in question, re-run the simulation, and determine whether the dexterity increase threw off the balance in relation to other classes, races or NPCs.
Maths Museum - Museum
© Islington Artefact Library

The abacus has been used by the inhabitants of China, Japan and Russia for thousands of years. It is the ancestor of our modern calculator, although not as we know it. The users of an abacus would carry out the calculations in their head, and would use the abacus to keep track of the sums and carry-overs as necessary.

The invention of the abacus evolved from a simple need to count numbers. Merchants trading goods not only needed a way to count goods bought and sold, but also to quickly calculate the cost of multiples of those goods. Until numbers were invented these counting devices were used to make everyday calculations.

The earliest abacuses have been lost in time due to the perishable materials used to construct them. However, we know that the simplest abacuses probably involved drawing lines (representing units, tens, hundreds etc.) in the sand, using small pebbles as place holders representing numbers within those marks. With the need for something more durable, wooden boards with grooves carved into them were created. The oldest surviving counting board is the Salamis tablet, which was used by the Babylonians in about 300BC.

During the Middle Ages wood was the primary material from which abacuses were manufactured. As Hindu-Arabic number notation gained popularity in the latter part of the Middle Ages, particularly with the advent of notation for zero, the use of the abacus began to diminish in Europe. However, the abacus is still used in the Middle East, China and Japan.

Many competitions have been held in order to determine which method is quicker when completing a complicated calculation. A man skilled with an abacus is likely to beat a man noting his calculations with a pencil and paper!

The abacus in the picture is divided into two decks by a crossbar. It is held horizontally with the smaller deck at the top. Each bead on the top deck has the value 5 and each bead on the lower deck has the value 1.
The beads are pushed towards the central crossbar to show numbers. Working from right to left, the first vertical line represents units, the next tens, the next hundreds and so on. So for example to show the number 9, on the first line, one bead from the top deck would be moved down (representing 5 units) and 4 beads from the bottom deck would be moved up (each representing 1 unit). To show the number 79, in addition to the beads in the first line used to make the number 9, one bead would be moved down on the upper deck and two beads from the lower deck would be moved up on the second line, representing 5 tens (50) and 2 tens (20) respectively.
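The digit-to-bead encoding described above can be sketched in code. A minimal illustration (the function name and the tuple representation are mine, not part of the article):

```python
def abacus_columns(n):
    """Return, least-significant column first, the (upper, lower) bead
    counts needed to display n: each upper-deck bead counts 5, each
    lower-deck bead counts 1, as in the abacus described above."""
    cols = []
    while n > 0:
        digit = n % 10
        cols.append((digit // 5, digit % 5))  # beads moved toward the crossbar
        n //= 10
    return cols or [(0, 0)]
```

For 79 this gives one upper and four lower beads on the units rod, and one upper and two lower beads on the tens rod, matching the worked example in the text.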
{"url":"http://www.counton.org/museum/floor3/gallery7/gal1_2p4.html","timestamp":"2014-04-19T12:37:52Z","content_type":null,"content_length":"5600","record_id":"<urn:uuid:ddddce8d-8e23-43df-94e1-151b66b58204>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: 18.014 ESG Problem Set 2 Pramod N. Achar Fall 1999
1. Exercise 2 in Section I 3.5 of Apostol. (You may use Axioms 1–9 and Theorems I.1–I.25.)
2. Let C denote the field of complex numbers. (Of course, we haven't defined C rigorously in class; this problem requires you to use what you know about complex numbers in "real life.") Show that there is no subset C⁺ ⊆ C with respect to which the order axioms are satisfied. (Hint: Suppose there is such a C⁺, and consider i = √−1. If i ∈ C⁺, where must −i = i · i · i lie? What if i ∉ C⁺? Derive a contradiction to one of the order axioms.)
3. (Optional) We have assumed in this class that there exists a certain set
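A sketch of the argument the hint in problem 2 is driving at (my reconstruction, assuming the standard order axioms: C⁺ is closed under addition and multiplication, and for every z ≠ 0 exactly one of z ∈ C⁺, −z ∈ C⁺ holds):

```latex
i \in C^{+} \;\Rightarrow\; i \cdot i = -1 \in C^{+}
           \;\Rightarrow\; (-1) \cdot i = -i \in C^{+},
```

so both i and −i would be in C⁺, contradicting trichotomy. If instead i ∉ C⁺, then −i ∈ C⁺, and (−i)(−i) = −1 ∈ C⁺, hence (−1)(−i) = i ∈ C⁺ — the same contradiction.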
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/161/4777055.html","timestamp":"2014-04-21T02:20:27Z","content_type":null,"content_length":"7727","record_id":"<urn:uuid:cb32cd08-5b8f-48af-93fa-29bfc31c02ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Natural logarithm: Riemann surfaces Riemann surfaces Real and imaginary parts of the continuation of over the . The imaginary part is a faithful representation of the Riemann surface of . Imaginary parts of the continuation of over the . The viewpoint is from the lower half‐plane. Imaginary parts of the continuation of over the Riemann sphere. The branch points are at the intersection of the equator with the ‐plane and at the south pole.
{"url":"http://functions.wolfram.com/ElementaryFunctions/Log/visualizations/10/ShowAll.html","timestamp":"2014-04-18T06:09:31Z","content_type":null,"content_length":"37186","record_id":"<urn:uuid:6fcf6fc4-dd8a-4aa8-84e7-22cd929fa11b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Mattel Disney Pixar Diecast CARS: The Scale Of It
Mattel’s version of Lightning McQueen is @3.25″ inches in length and in the announced scale of 1:55. 1:55 indicates the ratio – he is 1/55th the size he would be in “real life.” Of course, technically, you want to measure scale in overall girth (aka: volume) so you really want to measure and account for width & height & length and come up with a cubic number or a displacement number because going by one measurement is not accurate mathematical wise and aesthetic wise but since I’m not actually re-constructing anything – I’ll take the lazy way out. Scaled up from 3.25″ at the 1/55th ratio to the real world means that a real Lightning McQueen would be @179″ inches in length … next time you’re at the Disney parks (or at Pixar), whip out a tape measure and see if you get 179″ at the life size model of Lightning McQueen. So, how accurate is this? There are some “real” cars in CARS so if we take the Plymouth Superbird which is in real life @218 inches in length, he would be @3.96″ at 1:55 scale – the Mattel diecast version as the KING is just a skoosh over but the KING is actually slightly different than the real Superbird. The real Superbird, as kooky as that ginormous spoiler is – the spoiler does not extend past the bumper while the CARS KING does … so that .4 makes him pretty much on target … AND the difference in length (218 inches versus 179 inches) is virtually accurate in 1:55 scale. The difference of @.7″ between the two diecasts is about what you get when you subtract 179″ from 218″ and scale the result to 1/55. But for scalologists – they might have some scatalogy hissy fits with these next few bits of info … If we presume the 179″ length is accurate in the real world, then many of the other Lightning McQueens from others with scales “announced” are off base … The Disney Store 1:43 Series? Well, Lightning is more 1:40 in scale.
It should be noted that nowhere on the packaging does it claim to be 1:43 in scale but most people referred to it as such since 1:43 is a common scale but there’s nothing saying you can’t make up your own scale … the only other problem is that the Disney Store diecasts tend to vary greatly in scale … The Disney Store diecasts have gotten much better but scale is not exactly real high on the priority list on the checklist … AND in many cases, companies have a tendency to use the scale numbers as more of a “classification.” For instance, Mattel used the classification 1:50 on their recent Star Trek starship releases as they were about 4-5 inches in length … um, 1:50 scale would mean the Enterprise is about the size of a Plymouth Superbird … but I’m presuming they figure something like 1:15,000 on the box would confuse the tribbles out of people? The Tomica 1:64 series? Well, McQueen is closer to 1:60 … now, keep in mind, that does not sound like much of a difference but in reality, that means on a diecast scale of increments of inches, that’s about .3 inches – a HUGE difference when the whole thing is only about 3″ in length. And even the Mattel 1:24 Lightning McQueen? Well, technically you are getting more than you were told as Lightning is about 1:21 in scale and not necessarily 1:24 in scale (going by the 1:55 version as accurate). So, the good thing is no one seems to be selling you anything SMALLER than what they have announced on the packaging. 
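The ratio arithmetic used throughout the post (model length × n for a 1:n scale, real length ÷ n for the model, and real ÷ model for the implied n) can be sketched quickly; the function names are mine, the numbers are from the post:

```python
def scaled_up(model_inches, scale_n):
    # a 1:n model is 1/n of real size, so multiply back up by n
    return model_inches * scale_n

def scaled_down(real_inches, scale_n):
    # real-world length shrunk to a 1:n model
    return real_inches / scale_n

def implied_scale(real_inches, model_inches):
    # the "n" in 1:n implied by a model of a car with a known real length
    return real_inches / model_inches

# McQueen: 3.25" at 1:55 works out to roughly 179" in the real world,
# and a 218" Plymouth Superbird comes to roughly 3.96" at 1:55.
```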
In all cases, you are getting a slightly larger scale than they “promised.” Here are some other versions – a couple vending machine ones, and a 1:110 version from Tomy: Also, a couple other real world comparisons don’t hold up either; Sally, Sheriff & Fillmore are all a little shorter than they could be in 1:55 scale but it’s understandable … though it would be interesting to see what the life size replica’s actual length is at the theme park … so now you know next time you’re in Disneyland … the 4th thing on your list – measure Lightning … Get one of these measuring wheels – disguise it as a stroller wheel – once you’re inside the park – reassemble and voilà …
20 Comments
• I bought the “1/24th” scale Doc Hudson model a month or so ago. It looks nice but when I got down a Franklin Mint Hudson Hornet racer that I bought many years ago for comparison, I found there is no comparison! The Doc Hudson model is nowhere near 1/24th. It is much bigger. I haven’t done any exact measuring, but it would not surprise me to find that it is almost 1/20th scale. Actually, it doesn’t really matter, but I cannot display the two together.
• neat! but where’s the mcdonald’s version?
• Love the censorship on this board… sad….you make Red cry.
□ There are children who read this board. Your comment was distasteful and vulgar and warranted deletion.
☆ Got that right, PSTS.
• Don’t even talk about how much hate the UPS guy would have for you tryin’ to get all those cars into your mailbox!!!!
□ John in Missouri would have to get a much bigger mailbox to handle the 1:18 scale deliveries!
• Boy, what if Mattel had opted to go with the 1:18 scale for the entire CARS line? Talk about having to add a room onto the house!
• If the King only measures 4 inches, can he really be called the King? (MET: In the land of Chuki and Midget Fred, he is indeed the King).
• This is Mattel after all…scale means nothing to them.
Next time you’re in the toy aisle start comparing Hot Wheels…go-karts are as big as VW’s and motorcycles are as big as school buses!
• Why does anyone really care about the scale of the cars i don’t
• I’d like to hear from the female readers of this site whether size does matter…
□ I don’t know if there is a pun in there somewhere, as meaning something else than what you are referring to, but when it comes to buying display cases, it’s helpful to know what scale size you need. As far as the explanation of the scale size above – you have totally lost me. I have asked questions about this dilemma before, here on this site, and have had them answered.
□ ahem…lol
• I guess we are “stretched” for content lately MET?
• i’m suing for false advertising
• It’s not so much the scale to real life cars, but more the scale of each car compared to another (for example Fred compared to McQueen), that I am interested in. (MET: There is a Fred scale post to McQueen post).
□ A while back I did some of this myself with the “real” cars like Sally and Sheriff, and also looked at width and height. What I discovered is that Pixar Cars tend to run about 84% of their real-world length when compared to width and height. Which makes them stubbier and thus “cuter.” A lot of that lost length is in the cabin, which results in a more vertical windshield, all the better for eye contact. Within their own Mattel universe, they are fairly consistent in their disproportions and scale. Sally is a bit hippy and Doc Hudson is the most ratio-accurate car. So there you go, more scalology than you really wanted.
• I’m glad I have a Masters Degree in Scalology, as this was pretty easy to follow along with.
• waiting for the 1:1 range
{"url":"http://www.takefiveaday.com/2010/02/08/mattel-disney-pixar-diecast-cars-the-scale-of-it/","timestamp":"2014-04-19T10:21:30Z","content_type":null,"content_length":"84899","record_id":"<urn:uuid:208fd5f5-3f40-451d-aad8-bd0a58d120b9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
{- |
   Module      : Data.Graph.Analysis
   Description : A Graph-Theoretic Analysis Library.
   Copyright   : (c) Ivan Lazar Miljenovic 2009
   License     : 2-Clause BSD
   Maintainer  : Ivan.Miljenovic@gmail.com

   This is the root module of the /Graphalyze/ library, which aims to
   provide a way of analysing the relationships inherent in discrete
   data as a graph.

   The original version of this library was written as part of my
   mathematics honours thesis,
   /Graph-Theoretic Analysis of the Relationships in Discrete Data/.
 -}
module Data.Graph.Analysis
    ( version,
      -- * Re-exporting other modules
      module Data.Graph.Analysis.Types,
      module Data.Graph.Analysis.Utils,
      module Data.Graph.Analysis.Algorithms,
      module Data.Graph.Analysis.Visualisation,
      module Data.Graph.Analysis.Reporting,
      module Data.Graph.Inductive.Graph,
      -- * Importing data
      -- * Result analysis
      -- $analfuncts
    ) where

import Data.Graph.Analysis.Internal
import Data.Graph.Analysis.Utils
import Data.Graph.Analysis.Types
import Data.Graph.Analysis.Algorithms
import Data.Graph.Analysis.Visualisation
import Data.Graph.Analysis.Reporting
import Data.Graph.Inductive.Graph

import Data.List(find)
import Data.Maybe(mapMaybe)
import qualified Data.Map as M
import Data.Map(Map)
import qualified Data.Set as S
import Data.Set(Set)
import Control.Arrow(first)
import Data.Version(showVersion)
import qualified Paths_Graphalyze as Paths(version)

-- -----------------------------------------------------------------------------

-- | The library version.
version :: String
version = showVersion Paths.version

{- | This represents the information that's being passed in that we
   want to analyse.  If the graph is undirected, it is better to list
   each edge once rather than both directions.
 -}
data ImportParams n e = ImpParams { -- | The discrete points.
                                    dataPoints :: [n]
                                    -- | The relationships between the points.
                                  , relationships :: [Rel n e]
                                    -- | The expected roots of the graph.
                                    --   If @'directed' = 'False'@, then this is ignored.
                                  , roots :: [n]
                                    -- | 'False' if relationships are symmetric
                                    --   (i.e. an undirected graph).
                                  , directed :: Bool
                                  }

{- | Import data into a format suitable for analysis.  This function
   is /edge-safe/: if any datums are listed in the edges of
   'ImportParams' that aren't listed in the data points, then those
   edges are ignored.  Thus, no sanitation of the 'relationships' in
   @ImportParams@ is necessary.  The unused relations are stored in
   'unusedRelationships'.  Note that it is assumed that all datums in
   'roots' are also contained within 'dataPoints'.
 -}
importData :: (Ord n, Ord e) => ImportParams n e -> GraphData n e
importData params = GraphData { graph = dGraph
                              , wantedRootNodes = rootNodes
                              , directedData = isDir
                              , unusedRelationships = unRs
                              }
    where
      isDir = directed params
      -- Adding Node values to each of the data points.
      lNodes = zip [1..] (dataPoints params)
      -- The valid edges in the graph along with the unused relationships.
      (unRs, graphEdges) = relsToEs isDir lNodes (relationships params)
      -- Creating a lookup map from the label to the @Node@ value.
      nodeMap = mkNodeMap lNodes
      -- Validate a node
      validNode l = M.lookup l nodeMap
      -- Construct the root nodes
      rootNodes = if isDir
                  then mapMaybe validNode (roots params)
                  else []
      -- Construct the graph.
      dGraph = mkGraph lNodes graphEdges

-- -----------------------------------------------------------------------------

{- $analfuncts
   Extra functions for data analysis.
 -}

-- | Returns the mean and standard deviations of the lengths of the sublists,
--   as well all those lists more than one standard deviation longer than
--   the mean.
lengthAnalysis :: [[a]] -> (Int,Int,[(Int,[a])])
lengthAnalysis as = (av,stdDev,as'')
    where
      as' = addLengths as
      ls = map fst as'
      (av,stdDev) = statistics' ls
      as'' = filter (\(l,_) -> l > (av+stdDev)) as'

{- | Compare the actual roots in the graph with those that are expected
   (i.e. those in 'wantedRootNodes').  Returns (in order):

   * Those roots that are expected (i.e. elements of 'wantedRootNodes'
     that are roots).
   * Those roots that are expected but not present (i.e. elements of
     'wantedRootNodes' that /aren't/ roots).

   * Unexpected roots (i.e. those roots that aren't present in
     'wantedRootNodes').
 -}
classifyRoots :: GraphData n e -> (Set Node, Set Node, Set Node)
classifyRoots gd = (areWanted, notRoots, notWanted)
    where
      wntd = S.fromList $ wantedRootNodes gd
      rts = S.fromList $ applyAlg rootsOf' gd
      areWanted = S.intersection wntd rts
      notRoots = S.difference wntd rts
      notWanted = S.difference rts wntd

-- | Find the nodes that are not reachable from the expected roots
--   (i.e. those in 'wantedRootNodes').
inaccessibleNodes :: GraphData n e -> Set Node
inaccessibleNodes gd = allNs `S.difference` reachableNs
    where
      -- We can't use accessibleOnlyFrom' on notWanted from
      -- classifyRoots, as there might be nodes that are roots but not
      -- detectable (e.g. a loop).
      allNs = S.fromList $ applyAlg nodes gd
      rs = S.fromList $ wantedRootNodes gd
      reachableNs = applyAlg accessibleFrom' gd rs

-- | Only return those chains (see 'chainsIn') where the non-initial
--   nodes are /not/ expected roots.
interiorChains :: (Eq n, Eq e) => GraphData n e -> [LNGroup n]
interiorChains gd = filter (not . interiorRoot) chains
    where
      chains = applyAlg chainsIn gd
      rts = wantedRoots gd
      interiorRoot = any (`elem` rts) . tail

-- | As with 'collapseAndReplace', but also update the
--   'wantedRootNodes' to contain the possibly compressed nodes.
--   Since the datums they refer to may no longer exist (as they are
--   compressed), 'unusedRelationships' is set to @[]@.
collapseAndUpdate :: (Ord n) => [AGr n e -> [(NGroup, n)]]
                     -> GraphData n e -> GraphData n e
collapseAndUpdate fs = fst . collapseAndUpdate' fs

-- | As with 'collapseAndUpdate', but also includes a lookup 'Map'
--   from the old label to the new.
collapseAndUpdate' :: (Ord n) => [AGr n e -> [(NGroup, n)]]
                      -> GraphData n e -> (GraphData n e, Map n n)
collapseAndUpdate' fs gd = (gd', repLookup)
    where
      gr = graph gd
      (gr', reps) = collapseAndReplace' fs gr
      lns' = mkNodeMap $ labNodes gr'
      reps' = map (first S.fromList) reps
      rs = S.fromList $ wantedRootNodes gd
      replace r = maybe r ((M.!) lns' . snd)
                  $ find (S.member r . fst) reps'
      gd' = gd { graph = gr'
               , wantedRootNodes = S.toList $ S.map replace rs
               , unusedRelationships = []
               }
      nlLookup = M.fromList $ labNodes gr
      getLs = mapMaybe (flip M.lookup nlLookup)
      repLookup = M.fromList . spreadOut $ map (first getLs) reps

-- | As with 'levelGraph', but use the expected roots rather than the
--   actual roots.
levelGraphFromRoot :: (Ord n) => GraphData n e -> GraphData (GenCluster n) e
levelGraphFromRoot gd = updateGraph (levelGraphFrom (wantedRootNodes gd)) gd
{"url":"http://hackage.haskell.org/package/Graphalyze-0.10.0.1/docs/src/Data-Graph-Analysis.html","timestamp":"2014-04-16T14:03:29Z","content_type":null,"content_length":"39329","record_id":"<urn:uuid:bc5477e2-b09d-46c6-9976-8c84df5d1c72>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
The Purplemath Forums
In this diagram, the points with a green dot are known and the red ones unknown. Could someone explain what formula I need to determine the red dotted points?
Re: Equations for finding points on a circle circumference
Hi
Ahh, so the technique is to use Pythagoras – I should have thought of that! I have extended the formula to deal with the fact that the origin (a,b) of my circles is not (0,0), so to get the y coord for my point C I am using: y = b ± √(r² − (L − K)²) (where points on my circle diagram a...
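The same Pythagorean relation can be sketched in code: for a circle centred at (a, b) with radius r, the y-coordinates at a known horizontal position x are b ± √(r² − (x − a)²). The function name and parameter names are mine, not the poster's:

```python
import math

def circle_y(a, b, r, x):
    """y-coordinates of the circle centred at (a, b) with radius r
    at horizontal position x; returns () if x lies outside the circle."""
    d2 = r * r - (x - a) ** 2
    if d2 < 0:
        return ()          # the vertical line x = const misses the circle
    h = math.sqrt(d2)
    return (b + h, b - h)  # upper and lower intersection points
```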
{"url":"http://www.purplemath.com/learning/search.php?author_id=33479&sr=posts","timestamp":"2014-04-20T18:23:33Z","content_type":null,"content_length":"14922","record_id":"<urn:uuid:8b130aac-48dd-4b06-bbea-5902f4fdcb18>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
A flower vase, in the form of a hexagonal prism, is to be filled with 512 cubic inches of water. Find the height of the water if the wet portion of the flower vase and its volume are numerically equal.

Is the wet portion a ratio? or does it simply mean wet portion = volume?

wet portion = volume

Are there any more numbers? The volume of a hexagonal prism is \(V=\frac{3\sqrt 3}{2}r^2 H\) where r is the side of the hexagon.

that's all..

Hmm....strange question this one. Well, it might mean that H=r.

the answer was 34.88 inches but i want to know how to get it..

Either I misunderstood the question or it lacks something. Sorry, Idk why it is 34.88. Though I can show how to calc the volume of the prism. [drawing: hexagon divided into six equilateral triangles] As such the volume is the area of six triangles x height. \(V=(6)(\frac{1}{2} r^2 \sin 60^\circ)(h)\) \(V=(\frac{3 \sqrt 3}{2} r^2)(h)\)

ok.. thank you.. i'll try answering it again

You're welcome :) sorry I can't be of more help.

that's fine.. thanks for the help :))
V = SA = 512
V = (3√3/2) a² · H
512 = 2.6 a² · H → H = 197.1 / a²
SA = 2.6 a² + 6aH
512 = 2.6 a² + 6a(197.1 / a²) = 2.6 a² + 1,182.6 / a
512a = 2.6a³ + 1,182.6 (plug into your calculator to solve it :/ )

So the area = volume? wow.
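The final poster's equations can be checked numerically. A sketch using the exact hexagon-area constant 3√3/2 (the posts round it to 2.6); the bisection bracket [1, 5] and the function names are my choices:

```python
from math import sqrt

K = 3 * sqrt(3) / 2  # area of a regular hexagon of side a is K * a**2

def wet_area_minus_volume(a, v=512.0):
    # height forced by the volume, then wet area (bottom + six walls)
    # minus the target value; the root of this function solves the puzzle
    h = v / (K * a * a)
    return K * a * a + 6 * a * h - v

def solve(lo=1.0, hi=5.0, v=512.0):
    # plain bisection: the function is positive at lo and negative at hi
    for _ in range(60):
        mid = (lo + hi) / 2
        if wet_area_minus_volume(lo, v) * wet_area_minus_volume(mid, v) <= 0:
            hi = mid
        else:
            lo = mid
    a = (lo + hi) / 2
    return v / (K * a * a)  # the height of the water
```

This lands within rounding of the quoted answer of 34.88 inches (the small discrepancy comes from the 2.6 rounding used in the thread).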
{"url":"http://openstudy.com/updates/50a4ae01e4b0f1696c13a1ca","timestamp":"2014-04-20T08:15:18Z","content_type":null,"content_length":"76136","record_id":"<urn:uuid:aadeea73-83b3-4f38-aa1f-fe481795f6be>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
STATISTICS
A bag contains 60 marbles. The number of blue marbles, rounded to the nearest 10, is 40, and the number of green marbles in the bag, rounded to the nearest 10, is 20. How many blue marbles are in the bag? (List all answers that satisfy the conditions of the problem.)
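A brute-force check of the problem, assuming the bag holds only blue and green marbles (the problem names no other colors) and that "rounded to the nearest 10" rounds halves up:

```python
def nearest_ten(n):
    # round half up to the nearest multiple of 10 (35 -> 40, 44 -> 40, 45 -> 50)
    return (n + 5) // 10 * 10

# enumerate every split of 60 marbles into blue + green
blues = [b for b in range(61)
         if nearest_ten(b) == 40 and nearest_ten(60 - b) == 20]
```

Under these assumptions the candidates are 36 through 44: blue must round to 40 (35–44), and green = 60 − blue must round to 20 (15–24), which rules out blue = 35.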
{"url":"http://www.chegg.com/homework-help/statistics-bag-contains-60-marbles-number-blue-marbles-round-chapter-1-problem-47x4-solution-9780073384177-exc","timestamp":"2014-04-20T14:30:28Z","content_type":null,"content_length":"93700","record_id":"<urn:uuid:56549abd-57d2-493e-b06e-74695f2b7175>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
completing the square
Can someone give me a brief run thru of completing the square. I need a memory jog. Thanks.
Here. Follow post #4 first two lines only. Avoid everything else. That is the most general case.
this post might be of some help, there are others. i'll try to look them up
http://www.mathhelpforum.com/math-he...equations.html
As promised:
http://www.mathhelpforum.com/math-he...er-radius.html
http://www.mathhelpforum.com/math-he...te-square.html
http://www.mathhelpforum.com/math-he...ative-qus.html
http://www.mathhelpforum.com/math-he...ng-sqaure.html
it's not much, but it should get you started
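For the memory jog itself, here is the standard identity (my summary, not a quote from any of the linked posts):

```latex
ax^2 + bx + c
  = a\left(x^2 + \frac{b}{a}x\right) + c
  = a\left(x + \frac{b}{2a}\right)^2 + c - \frac{b^2}{4a}
```

Take half the x-coefficient, square it, add and subtract it. For example, x² + 6x + 2 = (x + 3)² − 9 + 2 = (x + 3)² − 7.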
{"url":"http://mathhelpforum.com/pre-calculus/14692-completing-square-print.html","timestamp":"2014-04-20T17:57:29Z","content_type":null,"content_length":"5496","record_id":"<urn:uuid:daee7ec2-9629-4674-a42b-b47a9fd9bc22>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert kiloyard to foot - Conversion of Measurement Units
›› Convert kiloyard to foot
Did you mean to convert kiloyard to foot foot [Egypt] foot [France] foot [iraq] foot [Netherlands] foot [pre-1963 Canada] foot [Rome] foot [survey]
›› More information from the unit converter
How many kiloyard in 1 foot? The answer is 0.000333333333333. We assume you are converting between kiloyard and foot. You can view more details on each measurement unit: kiloyard or foot The SI base unit for length is the metre. 1 metre is equal to 0.00109361329834 kiloyard, or 3.28083989501 foot. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between kiloyards and feet. Type in your own numbers in the form to convert the units!
›› Definition: Kiloyard
The SI prefix "kilo" represents a factor of 10^3, or in exponential notation, 1E3. So 1 kiloyard = 10^3 yards.
›› Definition: Foot
A foot (plural: feet) is a non-SI unit of distance or length, measuring around a third of a metre. There are twelve inches in one foot and three feet in one yard.
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
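The conversion factors above (3 feet per yard, 1000 yards per kiloyard) reduce to a one-line calculation in each direction; a minimal sketch:

```python
FEET_PER_YARD = 3
YARDS_PER_KILOYARD = 1000  # the "kilo" prefix is a factor of 10**3

def feet_to_kiloyards(ft):
    return ft / (FEET_PER_YARD * YARDS_PER_KILOYARD)

def kiloyards_to_feet(kyd):
    return kyd * FEET_PER_YARD * YARDS_PER_KILOYARD
```

One foot is 1/3000 kiloyard, which matches the 0.000333333333333 figure on the page.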
{"url":"http://www.convertunits.com/from/kiloyard/to/foot","timestamp":"2014-04-21T12:13:53Z","content_type":null,"content_length":"21967","record_id":"<urn:uuid:10a98548-86ec-4b75-95f1-74622ac75cd8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Error: Subscript indices must either be real positive integers or logicals.
Replies: 1   Last Post: Jun 25, 2013 9:47 AM
Error: Subscript indices must either be real positive integers or logicals.
Posted: Jun 25, 2013 9:33 AM
for i = 1:sizex;
for j = 1:sizey;
cor = Xc(i,j,1);
blank( ([i j 1]*A) ) = cor;
Subscript indices must either be real positive integers or logicals.
Error in testando (line 35)
blank( ([i j 1]*A) ) = cor;
Anyone? ):
Date Subject Author
6/25/13 Error: Subscript indices must either be real positive integers or logicals. Laryssa Seabra
6/25/13 Re: Error: Subscript indices must either be real positive integers or logicals. Torsten
{"url":"http://mathforum.org/kb/message.jspa?messageID=9145287","timestamp":"2014-04-17T21:37:04Z","content_type":null,"content_length":"17386","record_id":"<urn:uuid:e35da9f5-c00d-4b60-b6ad-6f694362cb87>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Simulation of Power Law and Yield Stress Fluid Flow in a Double Concentric Cylinder Rheometer with a Slotted Rotor (DCCR/SR) and Vane Rheometer

Three-dimensional steady-state flows in a DCCR/SR and a vane rheometer have been numerically simulated. We analyzed and compared the systematic errors in rheological measurements for different test fluids.

Figure 1: Geometries of (a) the vane rheometer and (b) the DCCR/SR.

Figure (1) illustrates the geometries. There are three factors that are responsible for the deviation of the measured apparent viscosity from its actual value: 1) wall slip effects, 2) end effects, and 3) secondary flow effects. We analyze these separately. We define the coefficient δ_slip due to wall slip effects as the ratio of the fluid torque in a rheometer with slip surfaces of the rotor to the torque in a rheometer in the absence of slip:

δ_slip = T_slip / T_no-slip

Inaccurate estimation of the correction length leads to end-effects-related errors. We define the coefficient δ_end due to end effects in terms of the rotor length H and the correction length used (denoted with a superscript '*'). We define the coefficient δ_2nd due to secondary flow effects as the ratio between the measured and true values of the fluid apparent viscosity, in the absence of slip and with δ_end equal to one:

δ_2nd = η*_app / η_app

Here the superscript '*' refers to the measured value of the apparent viscosity. The total error due to these effects combines the three coefficients.

The constitutive model used in the numerical study of yield stress fluids is a modified Bingham model, in which τ is the extra-stress tensor, γ̇ is the second invariant of the rate-of-strain tensor, τ₀ is the yield stress, and m, t₁ and η₁ are constants. Commercial CFD software ANSYS Fluent 12.0 (ANSYS, Inc., Canonsburg, PA) was used. We compare the two rheometers under the extreme conditions of free slip and no wall slip.
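The modified-Bingham equation itself did not survive extraction, so the sketch below is an assumption: it uses a Papanastasiou-style exponential regularization, one common "modified Bingham" form that is consistent with the parameters named in the text (τ₀, t₁, η₁, m), not the paper's verified equation:

```python
from math import exp

def apparent_viscosity(gamma_dot, tau0, eta1, m, t1):
    """Regularized yield-stress viscosity at shear rate gamma_dot > 0.

    A guess at the paper's modified Bingham form: a power-law viscous
    part plus an exponentially regularized yield-stress term.
    """
    return (eta1 * gamma_dot ** (m - 1.0)
            + tau0 * (1.0 - exp(-t1 * gamma_dot)) / gamma_dot)

def stress(gamma_dot, tau0, eta1, m, t1):
    # scalar shear stress in simple shear at rate gamma_dot
    return apparent_viscosity(gamma_dot, tau0, eta1, m, t1) * gamma_dot
```

At high shear rates with m = 1 this reduces to the classical Bingham behaviour τ ≈ τ₀ + η₁·γ̇, while remaining smooth as γ̇ → 0.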
Figure 2: Accuracy coefficients due to (a) wall slip effects, (b) end effects and (c) secondary flow effects of a vane rheometer (solid bars) and DCCR/SR (open bars) for power law fluids with different n indices.

Figure (2) compares the three accuracy coefficients between the two designs for power law fluids. In Figure (2a), with free slip, δ_slip is less than unity for both designs, indicating an underprediction of the rotor torque. Even for n = 0.01, the torque measured with a vane rheometer is only 87.2 % of the corresponding value under no-slip conditions. δ_slip decreases further for a vane rheometer as n increases, reaching a minimum of 68.8 % for a Newtonian fluid (n = 1). With a 60 % slot area ratio, the δ_slip of the DCCR/SR is smaller than that of a vane rheometer but less dependent on n (~ 60 %). In Figure (2b), a vane rheometer has a much larger error due to end effects (δ_end − 1) than the DCCR/SR. This result can be explained by the small thickness of the slotted rotor, which leads to a small area of the end surfaces in the DCCR/SR. For a vane rheometer, the accuracy coefficient due to end effects varies significantly with n. For Newtonian fluids, the error due to end effects is -10 % for measurements with a vane rheometer. This error can be eliminated only by calibrating the correction length. For the DCCR/SR design, δ_end ≈ 1 and is independent of n. Thus, there is no need for recalibration of the device. In Figure (2c), the apparent viscosity measured with a vane rheometer or DCCR/SR will deviate from the true value due to secondary flow effects even if all end effects are removed. For a Newtonian fluid, the experimental data, as well as our numerical results, show that a vane rheometer predicts only 60 % of the true apparent viscosity value. However, the DCCR/SR consistently shows smaller secondary flow effects than a vane rheometer.
For Newtonian fluids, it is possible to reach about 87.3 % of the true value with the DCCR/SR, while a vane rheometer gives about 60.3 %.

Figure 3: Comparison of the total systematic error in apparent viscosity measurement between a vane rheometer (squares) and a DCCR/SR (circles and triangles) for (a) power-law fluids and (b) a yield stress fluid. Negative values indicate underprediction of the apparent viscosity.

The total error with the two designs is plotted in Figure (3) under no-slip (closed symbols) and free-slip conditions. For power-law fluids (Figure 3a), the error generated by a vane rheometer is more dependent on the power-law index n. This variation is much smaller for a DCCR/SR. When there are no wall slip effects, the DCCR/SR will have higher accuracy than a vane rheometer for any power-law fluid. Under free-slip conditions, the DCCR/SR still has higher accuracy than a vane rheometer when the power-law index n > 0.5. For a yield stress fluid under no-slip conditions, a vane rheometer will have a ~ 46 % underprediction error in the low shear stress region; the DCCR/SR with 60 % slot area ratio has only 10 % underprediction. In the case of free slip, the DCCR/SR design with 90 % slot area ratio will be more accurate than a vane rheometer over the whole range of shear rates. Our results indicate that: (1) a DCCR/SR is able to accurately measure rheological properties of a wider spectrum of test fluids than a vane rheometer due to significant reductions of end and secondary flow effects; (2) the rheometer design can be optimized by analyzing the accuracy coefficients separately, which allows us to determine the dominant source of the measurement error and then to provide a solution for its reduction or elimination. Impact statement: In this research we explore a variety of aspects of determining the yield stress of complex fluids via numerical analysis. The project involves a Ph.D. student who is also being trained in CFD (Fluent) and a postdoctoral fellow who recently obtained an industrial position with Schlumberger.
{"url":"https://acswebcontent.acs.org/prfar/2011/Paper11357.html","timestamp":"2014-04-19T04:58:45Z","content_type":null,"content_length":"23753","record_id":"<urn:uuid:916dd97e-b991-4347-8da8-947ad5492711>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Formative Assessment Lessons (beta)

Read more about the purpose of the MAP Classroom Challenges…

Devising a Measure for Correlation

Mathematical goals

This lesson unit is intended to help you assess how well students understand the notion of correlation. In particular this unit aims to identify and help students who have difficulty in:

• Understanding correlation as the degree of fit between two variables.
• Making a mathematical model of a situation.
• Testing and improving the model.
• Communicating their reasoning clearly.
• Evaluating alternative models of the situation.

This lesson unit is structured in the following way:

• Before the lesson, students work individually on an assessment task, Drive-in Movie Theater, that is designed to reveal their current understanding and difficulties. You then review their work and create questions for students to answer in order to improve their methods.
• At the start of the lesson, students work alone answering your questions about the same problem, then work collaboratively in small groups to produce, in the form of a poster, a better solution to the task Drive-in Movie Theater than they did individually.
• In a whole-class discussion students compare and evaluate the different methods they have used.
• Working in the same small groups, students analyze sample responses to the task.
• In a whole-class discussion students explain and compare the alternative methods.
• Finally, students review what they have learnt.

Materials required

• Each student will need a copy of the assessment task, Drive-in Movie Theater, and Scatter Graphs A, B, and C.
• Each small group of students will need a large sheet of paper, a felt-tipped pen, copies of all three Sample Responses to Discuss, and a blank sheet of paper. If possible, use a data projector and computer with spreadsheet software to demonstrate the spreadsheet Correlation Measure Spreadsheet.xls.
• You may also need extra copies of Scatter Graphs A, B, and C, extra sheets of paper for student work, and calculators.

There are some projector resources to help with discussions.

Time needed

20 minutes before the lesson, a 90-minute lesson (or two 45-minute lessons), and 10 minutes in the next lesson (or for homework). Timings are only approximate and depend on the needs of the class.

Mathematical Practices

This lesson involves a range of mathematical practices from the standards, with emphasis on:

Mathematical Content

This lesson asks students to select and apply mathematical content from across the grades, including the content standards:

Lesson (complete)

Projector Resources

A draft Brief Guide for teachers and administrators (PDF) is now available, and is recommended for anybody using the MAP Classroom Challenges for the first time. We have assigned lessons to grades based on the Common Core State Standards for Mathematical Content. During this transition period, you should use your judgement as to where they fit in your current curriculum.

The Beta versions of the MAP Lesson Units may be distributed, unmodified, under the Creative Commons Attribution, Non-commercial, No Derivatives License 3.0. All other rights reserved. Please send any enquiries about commercial use or derived works to map.info@mathshell.org.

Can you help us by sending samples of students' work on the Classroom Challenges?
Go4Expert - Square Root using Newton Iteration

StormcasteR 16Jan2009 20:46

Square Root using Newton Iteration

So, dudes, here's the deal. Yesterday a friend of mine wanted me to help him with a function that calculates the square root of a positive number using Newton's method. So I came up with this:

#include <iostream>
#include <cmath>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    const double tol = 0.000005;
    double value;
    double old_app, new_app;
    cout << "Square root of a number" << endl << endl;
    cout << "Enter a positive number: ";
    cin >> value;
    if (value < 0.0)
        cout << "Cannot find square root of negative number" << endl;
    else if (value == 0.0)
        cout << "square root of " << value << " is 0.00" << endl;
    else
    {
        old_app = value;
        new_app = (old_app + value/old_app)/2;
        while (fabs((new_app-old_app)/new_app) > tol)
        {
            old_app = new_app;
            new_app = (old_app + value/old_app)/2;
        }
        cout << "square root of " << value << " is " << new_app << endl;
        system("pause");
    }
}

But I'm really new in C++ and I'm not sure if this is the right way to represent this algorithm?

xpi0t0s 17Jan2009 00:47

Re: Square Root using Newton Iteration

Does it work? Have you entered some test values, and did the correct results come out? If not, what results did you get?
StormcasteR 18Jan2009 23:01

Re: Square Root using Newton Iteration

Originally Posted by xpi0t0s (Post 41578)
Does it work? Have you entered some test values, and did the correct results come out? If not, what results did you get?

Yes. I entered 78.32, for example, and the program outputs 8.84986 for the square root. Another program I wrote uses Heron's formula and the output is the same (8.84986), while the normal cmath sqrt() function outputs 8.83176. Anyway, the program works properly, but I'm not sure if this is the right way to do Newton's method and if the formula I'm using is correct...

asadullah.ansari 19Jan2009 12:24

Re: Square Root using Newton Iteration

Check the algorithm for the Newton-Raphson formula. Actually, what you wrote is iterative evaluation of a square root; check here also...
Unshuffling A Square Is NP-Complete

Written by Mike James
Saturday, 08 December 2012

New NP-complete problems are always interesting because they broaden our conception of what is difficult to compute. Now we have a new result that unshuffling square strings is NP-complete.

The idea of a shuffle is very simple. Take two strings u and v; then w is a shuffle of u and v if there is a set of strings x[i] and y[i] such that

u = x1 x2 · · · xk
v = y1 y2 · · · yk
w = x1 y1 x2 y2 · · · xk yk

In other words, w is a sequential interleaving of substrings of u and v. A string w is called a square if it is the shuffle of a string u with itself. That is, w can be created by shuffling two copies of u.

Creating squares is easy; just take a string and shuffle it with itself. Also notice that one string can be shuffled in many different ways depending on how you split it up into substrings.

It might be easy to create a square, but it is much more difficult to solve the inverse problem of determining if a string is a square. The problem of working out if a given string w can be expressed as a shuffle of a given u and v can be solved in polynomial time. You can even solve the more general problem of whether w is the shuffle of k different strings in polynomial time - but only if k is fixed. If k is unspecified then the problem becomes NP-complete. Later this proof was extended to the case where the k strings are identical, but until recently there was no clear answer for square strings, i.e. for the case k = 2. That is, can you find a polynomial algorithm for deciding if a string w can be constructed from the shuffle of an unspecified string u with itself?

The square problem seems to be easier than the general problem with k allowed to vary, but in fact a recent paper presents the result that it too is NP-complete, even if the alphabet used for the strings is finite but not too small.
In fact the paper proves that the task is NP-complete for alphabets as small as 7 characters. It suggests that shuffles with as few as 3 symbols might be NP-complete, but we still need a proof of this.
Setting up double integral

January 27th 2009, 08:33 PM #1
Junior Member
Jan 2009

Setting up double integral

I was just wondering if the set up for this problem is right:

Integrate f(s,t) = e^s ln t over the region in the first quadrant of the st-plane that lies above the curve s = ln t from t = 1 to t = 2.

integral(t=1 to t=2) integral(s=ln 1 to s=ln 2) of e^s ln t

If that's not the right set up, what am I doing wrong?

Last edited by Krizalid; January 29th 2009 at 03:23 PM.

January 28th 2009, 10:05 AM #2
Super Member
Dec 2009

Quote: I was just wondering if the set up for this problem is right: integrate f(s,t) = e^s ln t over the region in the first quadrant of the st-plane that lies above the curve s = ln t from t = 1 to t = 2. integral(t=1 to t=2) integral(s=ln 1 to s=ln 2) of e^s ln t. If that's not the right set up, what am I doing wrong?

I think it should be $\int_0^1 \int_0^{\ln t} e^s \ln t \, ds\,dt$
Verify the expression.

December 19th 2012, 06:13 AM #1
Junior Member
Oct 2012

$T:V \rightarrow W$ is a linear transformation and $S \in L^k (W).$ Is it true that $T^*(S^{\delta})= (T^* (S))^{\delta}, \delta \in S_k$?

Re: Verify the expression.

Hey vercammen. This might be a dumb question, but is W just a matrix? Also, what does raising W to the delta do with regards to the map?

Re: Verify the expression.

As far as I understood, V and W are vector spaces, and S is a k-tensor on W (it can be written using a basis); S = a sum over all possible k-tuples.

Re: Verify the expression.

So is delta an Einstein summation?

Re: Verify the expression.

Just a permutation, I guess.

Re: Verify the expression.

The reason I ask is that if it is just a summation, then it should hold (the identity, that is). The reason has to do with distributivity of matrix multiplication.

Re: Verify the expression.

I asked a professor; he told me it's just a matter of using the definitions...

Re: Verify the expression.

Here is what I did, but unfortunately it was graded as incorrect.

On the k-tensor powers the induced map is $T:V^{\otimes k}\to W^{\otimes k}$, which on pure tensors is

$T(u_1\otimes u_2\otimes \ldots \otimes u_k)=T(u_1)\otimes T(u_2)\otimes \ldots \otimes T(u_k)$

If $\sigma$ is a permutation, then

$(u_1\otimes \ldots \otimes u_k)^\sigma=u_{\sigma^{-1}(1)}\otimes \ldots \otimes u_{\sigma^{-1}(k)}$

and thus we can immediately verify the identity

$T(u_1\otimes \ldots \otimes u_k)^\sigma=T((u_1\otimes \ldots \otimes u_k)^\sigma)$

because both sides will equal $T(u_{\sigma^{-1}(1)})\otimes \ldots \otimes T(u_{\sigma^{-1}(k)})$.

Because pure tensors span the k-tensor power space, we conclude that

$T(v^\sigma)=T(v)^\sigma$ (1)

Now let's get back to the problem.
By definition, $T^*(S^\sigma)(x)=S^\sigma(T(x))$, which is $S(T(x)^\sigma)$.

Now by (1), $S(T(x)^\sigma)=S(T(x^\sigma))$, and that is, by definition, $(T^*S)(x^\sigma)$ - which again by definition equals $(T^*S)^\sigma(x)$.
Math Help

March 12th 2010, 02:31 AM #1
Junior Member
Jan 2008

A number is increased by 25% and the resulting number is then decreased by 20%. The final number is what percent of the original number?

I know the answer is 100 but how do I show it?

Last edited by mr fantastic; March 23rd 2010 at 02:22 AM. Reason: Restored deleted question.

March 12th 2010, 02:54 AM #2
Super Member
Dec 2009

Dear donnagirl,

Suppose your number is x. When it is increased by 25%, the resulting number is $x+\frac{25x}{100}$. Then the result is decreased by 20%, that is,

$\left(x+\frac{25x}{100}\right)-\frac{20}{100}\left(x+\frac{25x}{100}\right)=\frac{5x}{4}-\frac{1}{5}\left(\frac{5x}{4}\right)=x$

So the number is 100% of the original number.

Hope this will help you.
Acceleration calculation - what am I doing wrong?

2012-Mar-24, 06:33 PM #1

Acceleration calculation - what am I doing wrong?

I am trying to figure out how to use the formula for calculating acceleration.

First try:
Initial speed - 0 metres/s^2
Final speed - 100 metres/s^2
Time - 10 seconds.
Answer - 10 meters/s^2

Follow up question: How many g's is 10 meters/s^2? Answer is 10 meters/s^2 / 9.8 meters/s^2 = 1.0204 g's.

Second try:
Initial speed - 0 metres/s^2
Final speed - 1,000,000 metres/s^2 (1,000 KM)
Time - 1000 seconds.
Answer - 1000 meters/s^2

Follow up question: How many g's is 1000 meters/s^2? Answer is 1000 meters/s^2 / 9.8 meters/s^2 = 102.04 g's.

Am I doing this correctly? Something "feels" wrong about this and I have no idea what it could be.

"Triangles are my favorite shape
"Three points where two lines meet"
Tessellate, Alt-J

Appears to be correct, Solfe. Remember to always start with the formula and simply plug in the knowns and solve for the unknown.

V(f) - V(i) = at
100 m/sec - 0 = (a)(10 sec)

Now solve for a.

"Triangles are my favorite shape
"Three points where two lines meet"
Tessellate, Alt-J

Quote (original post): I am trying to figure out how to use the formula for calculating acceleration. [...] Am I doing this correctly? Something "feels" wrong about this and I have no idea what it could be.

Hello Solfe, there's a wrong unit in your initial variables:

#1 Speed (i.e. velocity) is expressed in distance per time, e.g. miles per hour or metres per second (m/s). Acceleration, i.e. change of velocity over time, is therefore expressed as distance per time squared (m/s^2).

#2 As acceleration is change of velocity over time, one can write: a = v/t. Solving for v this yields: v = a*t. So simply multiply your acceleration with time and you'll get your final speed.

A more general equation is a = Δv/Δt: acceleration equals change of velocity over a certain time period. Therefore: a = (v2-v1)/(t2-t1)

Or in distance terms, v^2 = u^2 + 2.a.s, where a is acceleration and s is distance in consistent units; u is the starting speed. The link from distance to time is s = u.t + 1/2.a.t^2

FYI, you are dealing with constant accelerations. Whilst motion under gravity tends to have constant acceleration, mechanical devices like cars tend not to have constant accelerations. So a sports car that does 0-60 mph in 5 seconds has an average acceleration of about 12 mph per second (a rather mixed unit, but it helps understand the principle); in practice its acceleration will not be constant, and the distance it travels will not be precisely half-a-t-squared.
{"url":"http://cosmoquest.org/forum/showthread.php?129830-Acceleration-calculation-what-am-I-doing-wrong&p=2002050","timestamp":"2014-04-20T18:59:07Z","content_type":null,"content_length":"76918","record_id":"<urn:uuid:2fc13222-62c3-45b0-b58a-35a18cc05f25>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Frege's error

William Tait williamtait at mac.com
Sat Jul 30 10:30:47 EDT 2005

On Jul 14, 2005, at 11:27 AM, Neil Tennant wrote:

> But Frege would have resisted the use of closed lambda terms
> in any explanation of what he might have meant by transforming the
> function x>y into the function x>x. For Frege, functions were
> inherently unsaturated.

Perhaps *that* was Frege's error---that he thought that this notion of incomplete or unsaturated object was either needed or even could serve as a foundation for analysis.

The fact that bound variables are in principle always eliminable shows that it is not necessary (e.g. to understand the semantics of propositions compositionally---which ultimately seems to be the grounds Frege gives for their necessity or "priority", as he puts it).

The fact that the notion of saturating an incomplete object is entirely parasitic on that of substituting a closed expression into an open expression would seem to limit incomplete objects to those expressible by open expressions---and so to be countable in number. (Of course, in logic we consider languages with uncountably many constants. But we draw on the fact that we can take the constants to be real numbers or transfinite ordinals, or whatever. And this is not available to Frege, since it is a foundation for the theory of real numbers and other infinite systems that he wants to establish.) So I would conclude that it is not possible to found analysis on the notion of incomplete object in Frege's sense.

I agree that I am changing the subject, Neil. On the one you addressed, you are entirely right. But it is hot and humid in Chicago today and I feel grumpy.

(I first sent this message on July 17. It was returned because some of it was not plain text. The weather has recently improved and I am not at all feeling grumpy. But maybe my message is still of some interest.)

Bill Tait

More information about the FOM mailing list
Posts from May 2009 on The Unapologetic Mathematician

I want to mention a topic I thought I’d hit back when we talked about adjoint functors. We know that every poset is a category, with the elements as objects and a single arrow from $a$ to $b$ if $a\leq b$. Functors between such categories are monotone functions, preserving the order. Contravariant functors are so-called “antitone” functions, which reverse the order, but the same abstract nonsense as usual tells us this is just a monotone function to the “opposite” poset with the order reversed.

So let’s consider an adjoint pair $F\dashv G$ of such functors. This means there is a natural isomorphism between $\hom(F(a),b)$ and $\hom(a,G(b))$. But each of these hom-sets is either empty (if $a\not\leq b$) or a singleton (if $a\leq b$). So the adjunction between $F$ and $G$ means that $F(a)\leq b$ if and only if $a\leq G(b)$. The analogous condition for an antitone adjoint pair is that $b\leq F(a)$ if and only if $a\leq G(b)$.

There are some immediate consequences to having a Galois connection, which are connected to properties of adjoints. First off, we know that $a\leq G(F(a))$ and $F(G(b))\leq b$. This essentially expresses the unit and counit of the adjunction. For the antitone version, let’s show the analogous statement more directly: we know that $F(a)\leq F(a)$, so the adjoint condition says that $a\leq G(F(a))$. Similarly, $b\leq F(G(b))$. This second condition is backwards because we’re reversing the order on one of the posets.

Using the unit and the counit of an adjunction, we found a certain quasi-inverse relation between some natural transformations on functors. For our purposes, we observe that since $a\leq G(F(a))$ we have the special case $G(b)\leq G(F(G(b)))$. But $F(G(b))\leq b$, and $G$ preserves the order. Thus $G(F(G(b)))\leq G(b)$. So $G(b)=G(F(G(b)))$. Similarly, we find that $F(G(F(a)))=F(a)$, which holds for both monotone and antitone Galois connections.
Chasing special cases further, we find that $G(F(G(F(a))))=G(F(a))$, and that $F(G(F(G(b))))=F(G(b))$ for either kind of Galois connection. That is, $F\circ G$ and $G\circ F$ are idempotent functions. In general categories, the composition of two adjoint functors gives a monad, and this idempotence is just the analogue in our particular categories. In particular, these functions behave like closure operators, but for the fact that general posets don’t have joins or bottom elements to preserve in the third and fourth Kuratowski axioms. And so elements left fixed by $G\circ F$ (or $F\circ G$) are called “closed” elements of the poset. The images of $F$ and $G$ consist of such closed elements.

And here’s the post I wrote today:

Today, I want to prove two equations that hold in any orthocomplemented lattice. They are the famous DeMorgan’s laws:

$\displaystyle\neg(x\vee y)=\neg x\wedge\neg y$

$\displaystyle\neg(x\wedge y)=\neg x\vee\neg y$

First, we note that $x\leq x\vee y$ by definition. Since our complementation reverses order, we find $\neg(x\vee y)\leq\neg x$. Similarly, $\neg(x\vee y)\leq\neg y$. And thus we conclude that $\neg(x\vee y)\leq\neg x\wedge\neg y$.

On the other hand, $\neg x\wedge\neg y\leq\neg x$ by definition. Then we find $x=\neg\neg x\leq\neg(\neg x\wedge\neg y)$ by invoking the involutive property of our complement. Similarly, $y\leq\neg(\neg x\wedge\neg y)$, and so $x\vee y\leq\neg(\neg x\wedge\neg y)$. And thus we conclude $\neg x\wedge\neg y\leq\neg(x\vee y)$.

Putting this together with the other inequality, we get the first of DeMorgan’s laws. To get the other, just invoke the first law on the objects $\neg x$ and $\neg y$. We find

$\displaystyle\begin{aligned}\neg x\vee\neg y&=\neg\neg(\neg x\vee\neg y)\\&=\neg(\neg\neg x\wedge\neg\neg y)\\&=\neg(x\wedge y)\end{aligned}$

Similarly, the first of DeMorgan’s laws follows from the second.

Interestingly, DeMorgan’s laws aren’t just a consequence of order-reversal. It turns out that they’re equivalent to order-reversal. Now if $x\leq y$ then $x=x\wedge y$.
So $\neg x=\neg(x\wedge y)=\neg x\vee\neg y$. And thus $\neg y\leq\neg x$.

I just noticed in my drafts this post which I’d written last Friday never went up.

Let’s say we have a real or complex vector space $V$ of finite dimension $d$ with an inner product, and let $T:V\rightarrow V$ be a linear map from $V$ to itself. Further, let $\left\{v_i\right\}_{i=1}^d$ be a basis with respect to which the matrix of $T$ is upper-triangular. It turns out that we can find an orthonormal basis which also gives us an upper-triangular matrix. And of course, we’ll use Gram-Schmidt to do it.

What it rests on is that an upper-triangular matrix means we have a nested sequence of invariant subspaces. If we define $U_k$ to be the span of $\left\{v_i\right\}_{i=1}^k$ then clearly we have a chain

$\displaystyle U_1\subseteq\dots\subseteq U_{d-1}\subseteq U_d=V$

Further, the fact that the matrix of $T$ is upper-triangular means that $T(v_i)\in U_i$. And so the whole subspace is invariant: $T(U_i)\subseteq U_i$.
So what does this look like in terms of the lattice? First off, remember that the “meet” of two subspaces is their intersection, which is again a subspace. On the other hand their “join” is their sum as subspaces. But now we have a new operation called the “complement”. In general lattice-theory terms, a complement of an element $x$ in a bounded lattice $L$ (one that has a top element ${1}$ and a bottom element ${0}$) is an element $eg x\in L$ so that $x\veeeg x=1$ and $x\wedgeeg x=0$. In particular, since the top subspace is $V$ itself, and the bottom subspace is $\mathbf{0}$ we can see that the orthogonal complement $U^\perp$ satisfies these properties. The intersection $U\cap U^ \perp$ is trivial, since the inner product is positive-definite as a bilinear form, and the sum $U+U^\perp$ is all of $V$, as we’ve seen. Even more is true. The orthogonal complement is involutive (when $V$ is finite-dimensional), and order-reversing, which makes it an “orthocomplement”. In lattice-theory terms, this means that $egeg x =x$, and that if $x\leq y$ then $eg y\leqeg x$. First, let’s say we’ve got two subspaces $U\subseteq W$ of $V$. I say that $W^\perp\subseteq U^\perp$. Indeed, if $p$ is a vector in $W^\perp$ then it $\langle w,p\rangle=0$ for all $w\in W$. But since any $u\in U$ is also a vector in $W$, we can see that $\langle u,p\rangle=0$, and so $p\in U^\perp$ as well. Thus orthogonal complementation is Now let’s take a single subspace $U$ of $V$, and let $u$ be a vector in $U$. If $v$ is any vector in $U^\perp$, then $\langle v,u\rangle=\overline{\langle u,v\rangle}=0$ by the (conjugate) symmetry of the inner product and the definition of $U^\perp$. Thus $u$ is a vector in $\left(U^\perp\right)^\perp$, and so $U\subseteq U^{\perp\perp}$. Note that this much holds whether $V$ is finite-dimensional or not. 
On the other hand, if $V$ is finite-dimensional we can take an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ of $U$ and expand it into an orthonormal basis $\left\{e_i\right\}_{i=1}^d$ of all of $V$ . Then the new vectors $\left\{e_i\right\}_{i=n+1}^d$ form a basis of $U^\perp$, so that $V=U\oplus U^\perp$. A vector in $V$ is orthogonal to every vector in $U^\perp$ exactly when it can be written using only the first $n$ basis vectors, and thus lies in $U$. That is, $U^{\perp\perp}=U$ when $V$ is finite-dimensional. So far we’ve been considering the category $\mathbf{Vect}$ of vector spaces (over either $\mathbb{R}$ or $\mathbb{C}$) and adding the structure of an inner product to some selected spaces. But of course there should be a category $\mathbf{Inn}$ of inner product spaces. Clearly the objects should be inner product spaces, and the morphisms should be linear maps, but what sorts of linear maps? Let’s just follow our noses and say “those that preserve the inner product”. That is, a linear map $T:V\rightarrow W$ is a morphism of inner product spaces if and only if for any two vectors $v_1,v_2\in V$ we have $\displaystyle\langle T(v_1),T(v_2)\rangle_W=\langle v_1,v_2\rangle_V$ where the subscripts denote which inner product we’re using at each point. Of course, given any inner product space we can “forget” the inner product and get the underlying vector space. This is a forgetful functor, and the usual abstract nonsense can be used to show that it creates limits. And from there it’s straightforward to check that the category of inner product spaces inherits some nice properties from the category of vector spaces. Most of the structures we get this way are pretty straightforward — just do the same constructions on the underlying vector spaces. But one in particular that we should take a close look at is the biproduct. What is the direct sum $V\oplus W$ of two inner product spaces? 
The underlying vector space will be the direct sum of the underlying vector spaces of $V$ and $W$, but what inner product should we use? Well, if $v_1$ and $v_2$ are vectors in $V$, then they get included into $V\oplus W$. But the inclusions have to preserve the inner product between these two vectors, and so we must have

$\displaystyle\langle\iota_V(v_1),\iota_V(v_2)\rangle_{V\oplus W}=\langle v_1,v_2\rangle_V$

and similarly for any two vectors $w_1$ and $w_2$ in $W$ we must have

$\displaystyle\langle\iota_W(w_1),\iota_W(w_2)\rangle_{V\oplus W}=\langle w_1,w_2\rangle_W$

So the only remaining question is what do we do with one vector from each space? Now we use a projection from the biproduct, which must again preserve the inner product. It lets us calculate

$\displaystyle\langle\iota_V(v),\iota_W(w)\rangle_{V\oplus W}=\langle\pi_V(\iota_V(v)),\pi_V(\iota_W(w))\rangle_V=\langle v,0\rangle_V=0$

Thus the inner product between vectors from different subspaces must be zero. That is, distinct subspaces in a direct sum must be orthogonal. Incidentally, this shows that the direct sum between a subspace $U\subseteq V$ and its orthogonal complement $U^\perp$ is also a direct sum of inner product spaces.

An important fact about the category of vector spaces is that all exact sequences split. That is, if we have a short exact sequence

$\displaystyle\mathbf{0}\rightarrow U\rightarrow V\rightarrow W\rightarrow\mathbf{0}$

we can find a linear map from $W$ to $V$ which lets us view it as a subspace of $V$, and we can write $V\cong U\oplus W$. When we have an inner product around and $V$ is finite-dimensional, we can do this canonically.

What we’ll do is define the orthogonal complement of $U\subseteq V$ to be the vector space

$\displaystyle U^\perp=\left\{v\in V\vert\forall u\in U,\langle u,v\rangle=0\right\}$

That is, $U^\perp$ consists of all vectors in $V$ perpendicular to every vector in $U$. First, we should check that this is indeed a subspace. If we have vectors $v,w\in U^\perp$, scalars $a,b$, and a vector $u\in U$, then we can check
If we have vectors $v,w\in U^\perp$, scalars $a,b$, and a vector $u\in U$, then we can check $\displaystyle\langle u,av+bw\rangle=a\langle u,v\rangle+b\langle u,w\rangle=0$ and thus the linear combination $av+bw$ is also in $U^\perp$.

Now to see that $U\oplus U^\perp\cong V$, take an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ for $U\subseteq V$. Then we can expand it to an orthonormal basis $\left\{e_i\right\}_{i=1}^d$ of $V$. But now I say that $\left\{e_i\right\}_{i=n+1}^d$ is a basis for $U^\perp$. Clearly they're linearly independent, so we just have to verify that their span is exactly $U^\perp$.

First, we can check that $e_k\in U^\perp$ for any $k$ between $n+1$ and $d$, and so their span is contained in $U^\perp$. Indeed, if $u=u^ie_i$ is a vector in $U$, then we can calculate the inner product $\displaystyle\langle u^ie_i,e_k\rangle=\bar{u^i}\langle e_i,e_k\rangle=\bar{u^i}\delta_{ik}=0$ since $i\leq n$ and $k\geq n+1$. Of course, we omit the conjugation when working over $\mathbb{R}$.

Now, let's say we have a vector $v\in U^\perp\subseteq V$. We can write it in terms of the full basis $\left\{e_k\right\}_{k=1}^d$ as $v^ke_k$. Then we can calculate its inner product with each of the basis vectors of $U$ as $\displaystyle\langle e_i,v^ke_k\rangle=v^k\langle e_i,e_k\rangle=v^k\delta_{ik}=v^i$ Since this must be zero, we find that the coefficient $v^i$ of $e_i$ must be zero for all $i$ between ${1}$ and $n$. That is, $U^\perp$ is contained within the span of $\left\{e_i\right\}_{i=n+1}^d$.

So between a basis for $U$ and a basis for $U^\perp$ we have a basis for $V$ with no overlap: we can write any vector $v\in V$ uniquely as the sum of one vector from $U$ and one from $U^\perp$, and so we have a direct sum decomposition as desired.
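None of what follows is in the original post, but the finite-dimensional decomposition is easy to check numerically. Here is a small sketch of my own in Python/NumPy: build an orthonormal basis for a random subspace $U$ of $\mathbb{R}^4$, project a vector onto it, and verify that the leftover piece lands in $U^\perp$ and that the two pieces sum back to the original vector.

```python
import numpy as np

rng = np.random.default_rng(0)
# U is the span of two random vectors in R^4 (purely illustrative)
A = rng.standard_normal((4, 2))
Q, _ = np.linalg.qr(A)            # columns of Q: orthonormal basis of U
P = Q @ Q.T                       # orthogonal projection onto U
v = rng.standard_normal(4)
u, w = P @ v, v - P @ v           # components in U and U^perp
print(np.allclose(Q.T @ w, 0))    # w is orthogonal to every basis vector of U
print(np.allclose(u + w, v))      # v decomposes uniquely as u + w
```

Running this prints `True` twice: the residual $w$ is orthogonal to $U$, and $v = u + w$ recovers the direct sum decomposition.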
On the oscillation of the summatory totient about its average

Let $$R(x)=\sum_{n\leq x}\phi(n)-\frac{3x^2}{\pi^2}.$$ Montgomery has shown that $R(x)=\Omega_{\pm}(x\sqrt{\log\log x})$, which is the best known lower bound. It seems interesting therefore that $$\int_0^{\infty}\frac{R(x)\,dx}{x^2}=0,$$ because it tells us that the oscillations (which continue indefinitely) are particularly regular. I cannot find any references for this integral, so I am wondering if it is known. I would particularly like to find other work of this nature as I cannot prove anything about the rate of convergence of the improper integral (other than $o(1)$ as $X\rightarrow\infty$, where $X$ is the upper limit of integration).

If I may ask, how do you prove this? – quid Apr 8 '13 at 20:10

It is quite lengthy, but the essence is that the integral over a finite interval may be written in terms of a uniformly convergent (for $X>1$) sum over the zeros of $\zeta(s)$. The necessary estimates to justify the limit of the contour are available. The uniform convergence and zero-free region enable you to arrive at a contradiction supposing the limit as $X\rightarrow\infty$ is not $0$. More can probably be said; it appears that the Mellin transform converges on the line $\sigma=1$. – Kevin Smith Apr 8 '13 at 20:34

The Mellin transform of $R(x)$, that is. – Kevin Smith Apr 8 '13 at 20:39

Thank you for the explanation! – quid Apr 8 '13 at 23:28

I am not sure if this result is explicitly mentioned in the literature, but it certainly is classical. Let $$R(x) = \sum_{n \leq x}{\varphi(n)} - \frac{3x^2}{\pi^2}, \qquad H(x) = \sum_{n \leq x}{\frac{\varphi(n)}{n}} - \frac{6x}{\pi^2}.$$ Then by partial summation, $$\int^{x}_{0}{\frac{R(t)}{t^2} \: dt} = H(x) - \frac{R(x)}{x}.$$ A classical result of Chowla states that $$H(x) - \frac{R(x)}{x} = O\left((\log x)^{-4}\right).$$ See Lemma 13 of S.
Chowla, "Contributions to the analytic theory of numbers", Mathematische Zeitschrift 35:1 (1932), 279-299. (If you have access to Springer Link then it is available here.)

From a cursory glance at Chowla's proof, the negative powers of a logarithm stem from the prime number theorem applied to the summatory function of the Möbius function, so it is likely that this bound could be improved with more modern estimates for this. For what it's worth, I answered a closely related question here.

Do you mean $R(t)$ rather than $E(t)$ in the integral? – Barry Cipra Apr 8 '13 at 21:03

Yep, thanks. All fixed now. – Peter Humphries Apr 8 '13 at 21:09

Marvellous. I knew the partial summation but not the estimate. I think it is sufficient for my purposes. Thank you. – Kevin Smith Apr 8 '13 at 21:12
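As a quick sanity check on the partial-summation identity (my own sketch, not part of the answer above), one can verify $\int_0^x R(t)/t^2\,dt = H(x) - R(x)/x$ exactly for integer $x$: since $\Phi(t)=\sum_{n\leq t}\varphi(n)$ is constant on each $[n,n+1)$, the integral collapses to a finite sum. A Python version using a simple totient sieve:

```python
import numpy as np

def totients(n):
    """Euler's phi for 1..n via a sieve: divide out each prime once."""
    phi = np.arange(n + 1)
    for p in range(2, n + 1):
        if phi[p] == p:                      # p untouched so far => prime
            phi[p::p] -= phi[p::p] // p
    return phi[1:]

x = 5000
phi = totients(x)
Phi = np.cumsum(phi)                          # summatory totient
R = Phi[-1] - 3 * x**2 / np.pi**2
H = np.sum(phi / np.arange(1, x + 1)) - 6 * x / np.pi**2
# Phi(t) is a step function, so the integral is a finite sum minus 3x/pi^2
n = np.arange(1, x)
integral = np.sum(Phi[:-1] * (1 / n - 1 / (n + 1))) - 3 * x / np.pi**2
print(abs(integral - (H - R / x)))            # essentially zero
```

The difference printed is at the level of floating-point error, confirming the identity holds exactly, not just asymptotically.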
Quantum Forest

I was working with a small experiment which includes families from two Eucalyptus species and thought it would be nice to code a first analysis using alternative approaches. The experiment is a randomized complete block design, with species as a fixed effect and family and block as random effects, while the response variable is growth strain (in \( \mu \epsilon \)). When looking at the trees one can see that the residual variances will be very different. In addition, the trees were growing in plastic bags laid out in rows (the blocks) and columns. Given that the trees were growing in bags sitting on flat terrain, most likely the row effects are zero. Below is the code for a first go in R (using both MCMCglmm and ASReml-R) and SAS. I had stopped using SAS for several years, mostly because I was running a mac for which there is no version. However, a few weeks ago I started accessing it via their OnDemand for Academics program via a web browser.

The R code using REML looks like:

# Options
options(stringsAsFactors = FALSE)

# Packages

# Reading data, renaming, factors, etc
gs = read.csv('eucalyptus-growth-stress.csv')

# Both argophloia and bosistoana
gsboth = subset(gs, !is.na(strain))
gsboth = within(gsboth, {
    species = factor(species)
    row = factor(row)
    column = factor(column)
    fam = factor(fam)
})

ma = asreml(strain ~ species,
            random = ~ fam + row,
            rcov = ~ at(species):units,
            data = gsboth)

#                                    gamma  component std.error   z.ratio constraint
# fam!fam.var                    27809.414  27809.414 10502.036 2.6480022   Positive
# row!row.var                     2337.164   2337.164  3116.357 0.7499666   Positive
# species_E.argopholia!variance 111940.458 111940.458 26609.673 4.2067580   Positive
# species_E.bosistoana!variance  63035.256  63035.256  7226.768 8.7224681   Positive

While using MCMC we get estimates in the ballpark by using:

# Priors
bothpr = list(R = list(V = diag(c(50000, 50000)), nu = 3),
              G = list(G1 = list(V = 20000, nu = 0.002),
                       G2 = list(V = 20000, nu = 0.002),
                       G3 = list(V = 20000, nu = 0.002)))

# MCMC
m2 =
MCMCglmm(strain ~ species,
         random = ~ fam + row + column,
         rcov = ~ idh(species):units,
         data = gsboth,
         prior = bothpr,
         pr = TRUE,
         family = 'gaussian',
         burnin = 10000,
         nitt = 40000,
         thin = 20,
         saveX = TRUE,
         saveZ = TRUE,
         verbose = FALSE)

# Iterations = 10001:39981
# Thinning interval = 20
# Sample size = 1500
#
# DIC: 3332.578
#
# G-structure: ~fam
#     post.mean l-95% CI u-95% CI eff.samp
# fam     30315    12211    55136     1500
#
# ~row
#     post.mean l-95% CI u-95% CI eff.samp
# row      1449    5.928     6274    589.5
#
# R-structure: ~idh(species):units
#                    post.mean l-95% CI u-95% CI eff.samp
# E.argopholia.units    112017    71152   168080     1500
# E.bosistoana.units     65006    52676    80049     1500
#
# Location effects: strain ~ species
#                     post.mean l-95% CI u-95% CI eff.samp  pMCMC
# (Intercept)            502.21   319.45   690.68     1500 <7e-04 ***
# speciesE.bosistoana   -235.95  -449.07   -37.19     1361  0.036 *

The SAS code is not that dissimilar, except for the clear demarcation between data processing (the data step, for reading files, data transformations, etc) and specific procs (procedures), in this case to summarize data, produce a boxplot and fit a mixed model.

* termstr=CRLF accounts for the windows-like line endings of the data set;
data gs;
  infile "/home/luis/Projects/euc-strain/growthstresses.csv"
         dsd termstr=CRLF firstobs=2;
  input row column species $ family $ strain;
  if strain ^= .;
run;

proc summary data = gs print;
  class species;
  var strain;
run;

proc boxplot data = gs;
  plot strain*species;
run;

proc mixed data = gs;
  class row species family;
  model strain = species;
  random row family;
  repeated species / group=species;
run;

Covariance Parameter Estimates

Cov Parm    Group               Estimate
row                              2336.80
family                             27808
species     species E.argoph      111844
species     species E.bosist       63036

I like working with multiple languages and I realized that, in fact, I missed SAS a bit. It was like meeting an old friend; at the beginning it felt strange but we were quickly chatting away after a few

INLA: Bayes goes to Norway
INLA is the answer to ‘Why do I have enough time to cook a three-course meal while running MCMC analyses?”. Integrated Nested Laplace Approximations (INLA) is based on direct numerical integration (rather than simulation as in MCMC) which, according to people ‘in the know’, allows: • the estimation of marginal posteriors for all parameters, • marginal posteriors for each random effect and • estimation of the posterior for linear combinations of random effects. Rather than going to the usual univariate randomized complete block or split-plot designs that I have analyzed before (here using REML and here using MCMC), I’ll go for some analyses that motivated me to look for INLA. I was having a look at some reproductive output for Drosophila data here at the university, and wanted to fit a logistic model using MCMCglmm. Unfortunately, I was running into the millions (~3M) of iterations to get a good idea of the posterior and, therefore, leaving the computer running overnight. Almost by accident I came across INLA and started playing with it. The idea is that Sol—a Ph.D. student—had a cool experiment with a bunch of flies using different mating strategies over several generations, to check the effect on breeding success. Therefore we have to keep track of the pedigree too. # Set working directory containing data and code # Packages needed for analysis # This code requires the latest (and updated) version of INLA require(INLA) # This loads INLA require(pedigreemm) # pedigree(), relfactor(), Tinv, D, ... 
####### Pedigree and assessment files # Reads pedigree file ped = read.csv('recodedped.csv', header=FALSE) names(ped) = c('id', 'mum', 'dad') # Reads data file dat = read.csv('ggdatall.csv', header=TRUE) dat$cross = factor(dat$cross) # Pedigree object for pedigreemm functions pedPMM = with(ped, pedigreemm::pedigree(sire=dad, dam=mum, label=id)) # Pedigree precision matrix (A inverse) # T^{-1} in A^{-1} = (T^{-1})' D^{-1} T^{-1} Tinv = as(pedPMM, "sparseMatrix") D = Diagonal(x=Dmat(pedPMM)) # D in A = TDT' Dinv = solve(D) Ainv = t(Tinv) %*% Dinv %*% Tinv Up to this point we have read the response data, the pedigree and constructed the inverse of the pedigree matrix. We also needed to build a contrast matrix to compare the mean response between the different mating strategies. I was struggling there and contacted Gregor Gorjanc, who kindly emailed me the proper way to do it. # Define contrasts to compare cross types. Thanks to Gregor Gorjanc # for coding contrasts k = nlevels(dat$cross) tmp = matrix(nrow=(k-1)*k/2, ncol=k) ## 1 2 3 4 5 6 tmp[ 1, ] = c( 1, -1, NA, NA, NA, NA) ## c1-c2 tmp[ 2, ] = c( 1, NA, -1, NA, NA, NA) ## -c3 tmp[ 3, ] = c( 1, NA, NA, -1, NA, NA) ## -c4 tmp[ 4, ] = c( 1, NA, NA, NA, -1, NA) ## -c5 tmp[ 5, ] = c( 1, NA, NA, NA, NA, -1) ## -c6 tmp[ 6, ] = c( NA, 1, -1, NA, NA, NA) ## c2-c3 tmp[ 7, ] = c( NA, 1, NA, -1, NA, NA) ## -c4 tmp[ 8, ] = c( NA, 1, NA, NA, -1, NA) ## -c5 tmp[ 9, ] = c( NA, 1, NA, NA, NA, -1) ## -c6 tmp[10, ] = c( NA, NA, 1, -1, NA, NA) ## c3-c4 tmp[11, ] = c( NA, NA, 1, NA, -1, NA) ## -c5 tmp[12, ] = c( NA, NA, 1, NA, NA, -1) ## -c6 tmp[13, ] = c( NA, NA, NA, 1, -1, NA) ## c4-c5 tmp[14, ] = c( NA, NA, NA, 1, NA, -1) ## -c6 tmp[15, ] = c( NA, NA, NA, NA, 1, -1) ## c5-c6 # Make Linear Combinations LC = inla.make.lincombs(cross=tmp) # Assign names to combinations t = 0 for(i in 1:(k-1)) { for(j in (i+1):k) { t = t + 1 names(LC)[t] = paste("c", i, "-", "c", j, sep="") There is another related package (Animal INLA) that takes care 
of i- giving details about the priors and ii- “easily” fitting models that include a term with a pedigree (an animal model in quantitative genetics speak). However, I wanted the assumptions to be clear so read the source of Animal INLA and shamelessly copied the useful bits (read the source, Luke!). ###### Analysis for for binomial traits ####### Plain-vanilla INLA Version # Feeling more comfortable with *explicit* statement of assumptions # (rather than hidden behind animal.inla()) # Function to backconvert logits to probabilities back.conv = function(values){ # Function to get posterior of the odds # Thanks to Helena Moltchanova inla.marginal.summary = function(x){ m1 = inla.emarginal(function(z) exp(z), marginal=x) odds = inla.marginal.transform(function(x) exp(x), x) q = inla.qmarginal(p=c(0.025, 0.975), marginal=odds) c("0.025quant"=q[1], "0.5quant"=m1, "0.975quant"=q[2]) # Model for pupae/eggs # Drops a few observations with no reproductive output (trips INLA) no0eggs = subset(dat, eggs>0 & pupae <= eggs) # Actual model mpueg = pupae ~ f(cross, model='iid', constr=TRUE, hyper=list(theta=list(initial=-10, fixed=TRUE))) + f(id, model='generic0', constr=TRUE, Cmatrix=Ainv, hyper=list(theta=list(param=c(0.5,0.5), fixed=FALSE))) # INLA call fpueg = inla(formula=mpueg, family='binomial', data=no0eggs, # Results # Call: # c("inla(formula = mpueg, family = \"binomial\", data = no0eggs, # Ntrials = eggs, ", " lincomb = LC, control.compute = list(dic = FALSE))") # Time used: # Pre-processing Running inla Post-processing Total # 0.2712612 1.1172159 2.0439510 3.4324281 # Fixed effects: # mean sd 0.025quant 0.5quant 0.975quant kld # (Intercept) 1.772438 0.1830827 1.417413 1.770863 2.136389 0.5833235 # Linear combinations (derived): # ID mean sd 0.025quant 0.5quant 0.975quant kld # c1-c2 0 -0.26653572 0.7066540 -1.6558225 -0.26573011 1.11859967 0 # c1-c3 1 0.04150999 0.7554753 -1.4401435 0.04104020 1.52622856 0 # c1-c4 2 -0.08777325 0.6450669 -1.3557501 -0.08713005 
1.17693349 0 # c1-c5 3 -1.36702960 0.6583121 -2.6615604 -1.36618274 -0.07690788 0 # c1-c6 4 -1.82037714 0.8193280 -3.4338294 -1.81848244 -0.21714431 0 # c2-c3 5 0.30804735 0.7826815 -1.2248185 0.30677279 1.84852340 0 # c2-c4 6 0.17876229 0.5321948 -0.8654273 0.17859036 1.22421409 0 # c2-c5 7 -1.10049385 0.7466979 -2.5663142 -1.10046590 0.36558211 0 # c2-c6 8 -1.55383673 0.8188321 -3.1640965 -1.55276603 0.05084282 0 # c3-c4 9 -0.12928419 0.7475196 -1.5996080 -0.12817855 1.33522000 0 # c3-c5 10 -1.40854298 0.6016539 -2.5930656 -1.40723901 -0.23103707 0 # c3-c6 11 -1.86189314 0.8595760 -3.5555571 -1.85954031 -0.18100418 0 # c4-c5 12 -1.27925604 0.6998640 -2.6536362 -1.27905616 0.09438701 0 # c4-c6 13 -1.73259977 0.7764105 -3.2600936 -1.73134961 -0.21171790 0 # c5-c6 14 -0.45334267 0.8179794 -2.0618730 -0.45229981 1.14976690 0 # Random effects: # Name Model Max KLD # cross IID model # id Generic0 model # Model hyperparameters: # mean sd 0.025quant 0.5quant 0.975quant # Precision for id 0.08308 0.01076 0.06381 0.08244 0.10604 # Expected number of effective parameters(std dev): 223.95(0.7513) # Number of equivalent replicates : 1.121 # Marginal Likelihood: -1427.59 # ID mean sd 0.025quant 0.5quant 0.975quant kld # 1 1 -0.5843466 0.4536668 -1.47561024 -0.5840804 0.3056632 0.0178780930 # 2 2 -0.3178102 0.4595676 -1.21808638 -0.3184925 0.5865565 0.0009666916 # 3 3 -0.6258600 0.4978254 -1.60536281 -0.6250077 0.3491075 0.0247426578 # 4 4 -0.4965763 0.4071715 -1.29571071 -0.4966277 0.3030747 0.0008791629 # 5 5 0.7826817 0.4389003 -0.07756805 0.7821937 1.6459253 0.0077476806 # 6 6 1.2360387 0.5768462 0.10897529 1.2340813 2.3744368 0.0451357379 # Backtransforms point estimates and credible intervals for odds -> prob for(name in names(fpueg$marginals.lincomb.derived)){ summa = inla.marginal.summary(eval(parse(text=paste("fpueg$marginals.lincomb.derived$\'", name, "\'", sep='')))) cat(name, summa, '\n') # c1-c2 0.1894451 0.9831839 3.019878 # c1-c3 0.2338952 1.387551 4.534581 # 
c1-c4 0.256858 1.127751 3.204961 # c1-c5 0.0695406 0.3164847 0.9145132 # c1-c6 0.03157478 0.2264027 0.792517 # c2-c3 0.289088 1.850719 6.255175 # c2-c4 0.4213069 1.377848 3.366947 # c2-c5 0.0759222 0.4398384 1.420934 # c2-c6 0.04135211 0.2955985 1.035951 # c3-c4 0.1996085 1.16168 3.747526 # c3-c5 0.0746894 0.2929174 0.7847903 # c3-c6 0.02774805 0.2245797 0.821099 # c4-c5 0.06988459 0.355529 1.084414 # c4-c6 0.03780307 0.2389529 0.7974092 # c5-c6 0.1245211 0.8878682 3.108852 A quick look at the time taken by INLA shows that it is in the order of seconds (versus overnight using MCMC). I have tried a few examples and the MCMCglmm and INLA results tend to be very close; however, figuring out how to code models has been very tricky for me. INLA follows the glorious tradition of not having a ‘proper’ manual, but a number of examples with code. In fact, they reimplement BUGS‘s examples. Personally, I struggle with that approach towards documentation, but you may be the right type of person for that. Note for letter to Santa: real documentation for INLA. I was talking with a student about using Norwegian software and he mentioned Norwegian Black Metal. That got me thinking about how the developers of the package would look like; would they look like Gaahl of Gorgoroth (see interview here)? Talk about disappointment! In fact Håvard Rue, INLA mastermind, looks like a nice, clean, non-black-metal statistician. To be fair, it would be quite hard to code in any language wearing those R, Julia and genome wide selection — “You are a pussy” emailed my friend. — “Sensu cat?” I replied. — “No. Sensu chicken” blurbed my now ex-friend. What was this about? He read my post on R, Julia and the shiny new thing, which prompted him to assume that I was the proverbial old dog unwilling (or was it unable?) to learn new tricks. (Incidentally, with friends like this who needs enemies? Hi, Gus.) 
I decided to tackle a small—but hopefully useful—piece of code: fitting/training a Genome Wide Selection model, using the Bayes A approach put forward by Meuwissen, Hayes and Goddard in 2001. In that approach the breeding values of the individuals (response) are expressed as a function of a very large number of random predictors (2000, our molecular markers). The dataset (csv file) is a simulation of 2000 bi-allelic markers (aa = 0, Aa = 1, AA = 2) for 250 individuals, followed by the phenotypes (column 2001) and breeding values (column 2002). These models are frequently adjusted using MCMC. In 2010 I attended this course in Ames, Iowa where Rohan Fernando passed us the following R code (pretty much a transliteration from C code; notice the trailing semicolons, for example). P.D. 2012-04-26 Please note that this is teaching code not production code: nmarkers = 2000; # number of markers startMarker = 1981; # set to 1 to use all numiter = 2000; # number of iterations vara = 1.0/20.0; # input data data = matrix(scan("trainData.out0"),ncol=nmarkers+2,byrow=TRUE); nrecords = dim(data)[1]; beg = Sys.time() # x has the mean followed by the markers x = cbind(1,data[,startMarker:nmarkers]); y = data[,nmarkers+1]; a = data[,nmarkers+2]; # inital values nmarkers = nmarkers - startMarker + 1; mean2pq = 0.5; # just an approximation scalea = 0.5*vara/(nmarkers*mean2pq); # 0.5 = (v-2)/v for v=4 size = dim(x)[2]; b = array(0.0,size); meanb = b; b[1] = mean(y); var = array(0.0,size); # adjust y ycorr = y - x%*%b; # MCMC sampling for (iter in 1:numiter){ # sample vare vare = ( t(ycorr)%*%ycorr )/rchisq(1,nrecords + 3); # sample intercept ycorr = ycorr + x[,1]*b[1]; rhs = sum(ycorr)/vare; invLhs = 1.0/(nrecords/vare); mean = rhs*invLhs; b[1] = rnorm(1,mean,sqrt(invLhs)); ycorr = ycorr - x[,1]*b[1]; meanb[1] = meanb[1] + b[1]; # sample variance for each locus for (locus in 2:size){ var[locus] = (scalea*4+b[locus]*b[locus])/rchisq(1,4.0+1) # sample effect for each locus for (locus in 
2:size){ # unadjust y for this locus ycorr = ycorr + x[,locus]*b[locus]; rhs = t(x[,locus])%*%ycorr/vare; lhs = t(x[,locus])%*%x[,locus]/vare + 1.0/var[locus]; invLhs = 1.0/lhs; mean = invLhs*rhs; b[locus]= rnorm(1,mean,sqrt(invLhs)); #adjust y for the new value of this locus ycorr = ycorr - x[,locus]*b[locus]; meanb[locus] = meanb[locus] + b[locus]; Sys.time() - beg meanb = meanb/numiter; aHat = x %*% meanb; Thus, we just need defining a few variables, reading the data (marker genotypes, breeding values and phenotypic data) into a matrix, creating loops, matrix and vector multiplication and generating random numbers (using a Gaussian and Chi squared distributions). Not much if you think about it, but I didn’t have much time to explore Julia’s features as to go for something more complex. nmarkers = 2000 # Number of markers startmarker = 1981 # Set to 1 to use all numiter = 2000 # Number of iterations data = dlmread("markers.csv", ',') (nrecords, ncols) = size(data) #this is the mean and markers matrix X = hcat(ones(Float64, nrecords), data[:, startmarker:nmarkers]) y = data[:, nmarkers + 1] a = data[:, nmarkers + 2] nmarkers = nmarkers - startmarker + 1 vara = 1.0/nmarkers mean2pq = 0.5 scalea = 0.5*vara/(nmarkers*mean2pq) # 0.5 = (v-2)/v for v=4 ndesign = size(X, 2) b = zeros(Float64, ndesign) meanb = zeros(Float64, ndesign) b[1] = mean(y) varian = zeros(Float64, ndesign) # adjust y ycorr = y - X * b # MCMC sampling for i = 1:numiter # sample vare vare = dot(ycorr, ycorr )/randchi2(nrecords + 3) # sample intercept ycorr = ycorr + X[:, 1] * b[1]; rhs = sum(ycorr)/vare; invlhs = 1.0/(nrecords/vare); mn = rhs*invlhs; b[1] = randn() * sqrt(invlhs) + mn; ycorr = ycorr - X[:, 1] * b[1]; meanb[1] = meanb[1] + b[1]; # sample variance for each locus for locus = 2:ndesign varian[locus] = (scalea*4 + b[locus]*b[locus])/randchi2(4.0 + 1); # sample effect for each locus for locus = 2:ndesign # unadjust y for this locus ycorr = ycorr + X[:, locus] * b[locus]; rhs = dot(X[:, 
locus], ycorr)/vare; lhs = dot(X[:, locus], X[:, locus])/vare + 1.0/varian[locus]; invlhs = 1.0/lhs; mn = invlhs * rhs; b[locus] = randn() * sqrt(invlhs) + mn; #adjust y for the new value of this locus ycorr = ycorr - X[:, locus] * b[locus]; meanb[locus] = meanb[locus] + b[locus]; meanb = meanb/numiter; aHat = X * meanb; The code looks remarkably similar and there are four main sources of differences: 1. The first trivial one is that the original code read a binary dataset and I didn’t know how to do it in Julia, so I’ve read a csv file instead (this is why I start timing after reading the file 2. The second trivial one is to avoid name conflicts between variables and functions; for example, in R the user is allowed to have a variable called var that will not interfere with the variance function. Julia is picky about that, so I needed renaming some variables. 3. Julia pases variables by reference, while R does so by value when assigning matrices, which tripped me because in the original R code there was something like: b = array(0.0,size); meanb = b;. This works fine in R, but in Julia changes to the b vector also changed meanb. 4. The definition of scalar vs array created some problems in Julia. For example y' * y (t(y) %*% y in R) is numerically equivalent to dot(y, y). However, the first version returns an array, while the second one a scalar. I got an error message when trying to store the ‘scalar like an array’ in to an array. I find that confusing. One interesting point in this comparison is using rough code, not really optimized for speed; in fact, the only thing that I can say of the Julia code is that ‘it runs’ and it probably is not very idiomatic. Testing runs with different numbers of markers we get that R needs roughly 2.8x the time used by Julia. The Julia website claims better results in benchmarks, but in real life we work with, well, real problems. 
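For readers following along in yet another language, here is how the two per-locus steps (the scaled inverse chi-squared variance draw and the conditional normal effect draw) would look in NumPy. This is my own transliteration, purely illustrative; the function name and the tiny fake data below are not part of the course code.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_locus(x, ycorr, b_j, var_j, vare, scalea):
    """One Gibbs update for a single locus, mirroring the R/Julia loops."""
    # sample the locus variance from a scaled inverse chi-squared
    var_j = (scalea * 4 + b_j * b_j) / rng.chisquare(4 + 1)
    # unadjust the residual for the current effect of this locus
    ycorr = ycorr + x * b_j
    rhs = x @ ycorr / vare
    lhs = x @ x / vare + 1.0 / var_j
    b_j = rng.normal(rhs / lhs, np.sqrt(1.0 / lhs))
    # re-adjust the residual with the new effect
    ycorr = ycorr - x * b_j
    return ycorr, b_j, var_j

# tiny fake marker data (genotypes coded 0/1/2), purely illustrative
x = rng.integers(0, 3, size=50).astype(float)
ycorr = rng.standard_normal(50)
ycorr, b, v = update_locus(x, ycorr, 0.0, 0.05, 1.0, 0.01)
print(np.isfinite(b), v > 0)   # both True
```

The structure is identical to the R and Julia versions: sample the variance, back out the locus from the residual, draw the effect from its full conditional, and put the locus back in.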
In 1996-7 I switched from SAS to ASReml for genetic analyses because it was 1-2 orders of magnitude faster and opened a world of new models. Today a change from R to Julia would deliver (in this particular case) a much more modest speed-up (~3x), which is OK but not worth changing languages (yet). Together with the embryonic graphical capabilities and the still-to-develop ecosystem of packages, this means that I'm still using R. Nevertheless, the Julia team has achieved very impressive performance in very little time, so it is worth keeping an eye on their progress.

P.S.1 Readers are welcome to suggest ways of improving the code.
P.S.2 WordPress does not let me upload the binary version of the simulated data.
P.S.3 Hey WordPress guys; it would be handy if the sourcecode plugin supported Julia!
P.S.4 2012-04-26 Following AL's recommendation in the comments, one can replace in R:

rhs = t(x[,locus])%*%ycorr/vare;
lhs = t(x[,locus])%*%x[,locus]/vare + 1.0/var[locus]

with:

rhs = crossprod(x[,locus],ycorr)/vare
lhs = crossprod(x[,locus],x[,locus])/vare + 1.0/var[locus]

reducing execution time by roughly 20%, making the difference between Julia and R even smaller.

Mid-January flotsam: teaching edition

I was thinking about new material that I will use for teaching this coming semester (starting the third week of February) and suddenly compiled the following list of links:

• William Briggs writes It is time to stop teaching Frequentism to non-statisticians in a paper submitted to The American Statistician. Clearly he doesn't want to be controversial with an abstract that reads We should cease teaching frequentist statistics to undergraduates and switch to Bayes. Doing so will reduce the amount of confusion and over-certainty rife among users of statistics.
• Making the most of Google searches or a simple graphical explanation of search options for students doing online research. HT: Rafael Maia.
I have to pass this link to our students doing Research • Reading newspapers or other sources of news is often a frustrating endeavor for the scientifically minded person. Tom Scott makes the experience more bearable with his handy design for journalism warning labels. This would be the perfect complement to Stats Chat’s “Stat of the week” competition. • Some honesty in statistics: footnote in a statistics textbook. HT: Vince Buffalo. • R-bloggers, the aggregator of bloggers writing about R, has reached 300 bloggers. Quantum Forest is part of that ever-growing R orgy. • I am currently sending new-to-R colleagues to Quick-R as a starting point. It is particularly useful if they already know how to run stats in another software (and then I’m not a slave on R-support duty). Thanks Robert for putting it together! Incidentally, I’m ordering a couple of copies of his book R in Action for our department. • If you have to sell R to your colleagues, David Smith of Revolutions fame has good news: the popularity of R as a language has increased, overtaking both SAS and Matlab. Here are the TIOBE • Currently re-reading: Experimental Design and Data Analysis for Biologists by Gerry Quinn and Mick Keough. It would be nice to have R code for the whole book; please let me know in the comments if you have seen it somewhere in the internets. • In a double-blind study violinists can’t tell the difference between Stradivarius violins and new ones. HT: Tim Harford. • P.S. Douglas Andrews reminds us about The big mistake: teaching stat as though it were math. HT: @AmstatNews. This commentary does link with Briggs’s rant, but it also smells of professional Enough procrastination. Let’s keep on filling out PBRF forms; it is the right time for that hexennial activity. Doing Bayesian Data Analysis now in JAGS Around Christmas time I presented my first impressions of Kruschke’s Doing Bayesian Data Analysis. 
This is a very nice book but one of its drawbacks was that part of the code used BUGS, which left mac users like me stuck. Kruschke has now made JAGS code available so I am happy clappy and looking forward to testing this New Year present. In addition, there are other updates available for the programs included in the book.

First impressions of Doing Bayesian Data Analysis

About a month ago I was discussing the approach that I would like to see in introductory Bayesian statistics books. In that post I mentioned a PDF copy of Doing Bayesian Data Analysis by John K. Kruschke and that I have ordered the book. Well, recently a parcel was waiting in my office with a spanking new, real paper copy of the book. A few days are not enough to provide a 'proper' review of the book but I would like to discuss my first impressions about the book, as they could be helpful for someone out there.

If I were looking for a single word to define the book it would be meaty, not in the "having the flavor or smell of meat" sense of the word as pointed out by Newton, but on the conceptual side. Kruschke has clearly put a lot of thought into how to draw a generic student with little background on the topic to start thinking of statistical concepts. In addition Kruschke clearly loves language and has an interesting, sometimes odd, sense of humor; anyway, who am I to comment on someone else's strange sense of humor?

One difference between the dodgy PDF copy and the actual book is the use of color, three shades of blue, to highlight section headers and graphical content. In general I am not a big fan of lots of colors and contentless pictures as used in modern calculus and physics undergraduate books. In this case, the effect is pleasant and makes browsing and reading the book more accessible.
Most graphics really drive a point and support the written material, although there are exceptions, in my opinion, like some faux 3D graphs (Figures 17.2 and 17.3 under multiple linear regression) that I find somewhat confusing. The book's website contains PDF versions of the table of contents and chapter 1, which is a good way to whet your appetite. The book covers enough material to be the sole text for an introductory Bayesian statistics course, either starting from scratch or as a transition from a previous course with a frequentist approach. There are plenty of exercises, a solutions manual and plenty of R code.

The mere existence of this book prompts the question: Can we afford not to introduce students to a Bayesian approach to statistics? In turn this sparks the question How do we convince departments to de-emphasize the old way? (this quote is extremely relevant)

Verdict: if you are looking for a really introductory text, this is hands down the best choice. The material goes from the 'OMG do I need to learn stats?' level to multiple linear regression, ANOVA, hierarchical models and GLMs.

P.S. I'm still using a combination of books, including Kruschke's, and Marin and Robert's, for my own learning process.
P.S.2 There is a lot to be said about a book that includes puppies on its cover and references to A Prairie Home Companion on its first page (the show is sometimes re-broadcast down under by Radio New Zealand).

Tall big data, wide big data

After attending two one-day workshops last week I spent most days paying attention to (well, at least listening to) presentations in this biostatistics conference. Most presenters were R users (although Genstat, Matlab and SAS fans were also present) and not once did I hear "I can't deal with the current size of my data sets". However, there were some complaints about the speed of R, particularly when dealing with simulations or some genomic analyses.
Some people worried about the size of coming datasets; nevertheless that worry was shared across statistical packages or, more precisely, it went beyond statistical software. How will we be able to even store the data from something like the Square Kilometer Array, let alone analyze it? In a previous post I was asking if we needed to actually deal with ‘big data’ in R, and my answer was probably not or, better, at least not directly. I still think that it is a valid, although incomplete, opinion. In many statistical analyses we can think of n (the number of observations) and p (the number of variables per observation). In most cases, particularly when people refer to big data, n >> p. Thus, we may have 100 million people but only 10 potential predictors: tall data. In contrast, we may have only 1,000 individuals but with 50,000 points each coming from near infrared spectrometry or information from 250,000 SNPs (a type of molecular marker): wide data. Both types of data will keep on growing but are challenging in different ways. In a totally generalizing, unfair and simplistic way I will state that dealing with wide data is more difficult (and potentially interesting) than dealing with tall data, at least from a modeling perspective. As the t-shirt says: sampling is not a crime, and it should work quite well with simpler models and large datasets. In contrast, sampling to fit wide data may not work at all. Algorithms. Clever algorithms are what we need in a first stage. For example, we can fit linear mixed models to a tall dataset with ten million records or a multivariate mixed model with 60 responses using ASReml-R. Wide datasets are often approached using Bayesian inference, but MCMC gets slooow when dealing with thousands of predictors, so we need other fast approximations to the posterior. This post may not be totally coherent, but it keeps the conversation going. My excuse? I was watching Be Kind Rewind while writing it.
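To put some numbers behind the tall-versus-wide contrast, here is a small sketch in Python/NumPy rather than R (the sizes, seed and data are invented for illustration). With tall data a 1% subsample recovers essentially the same least-squares coefficients as the full fit, which is why sampling "is not a crime"; with wide data (p >> n) least squares is underdetermined, reproduces the training records exactly, and on its own says nothing about new observations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tall data: n >> p. A 1% subsample recovers the coefficients well.
n, p = 100_000, 5
beta = np.arange(1.0, p + 1)                  # true coefficients 1..5
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

full = np.linalg.lstsq(X, y, rcond=None)[0]
idx = rng.choice(n, size=n // 100, replace=False)
sub = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
# full and sub are both close to beta, despite sub using 1% of the records

# Wide data: p >> n. Least squares interpolates the sample exactly
# (zero residuals), so the fit alone tells us nothing about prediction.
n2, p2 = 100, 5_000
X2 = rng.normal(size=(n2, p2))
y2 = rng.normal(size=n2)
wide = np.linalg.lstsq(X2, y2, rcond=None)[0]
resid = y2 - X2 @ wide                        # essentially zero
```

The second fit is not "wrong"; it is simply uninformative by itself, which is the sense in which wide data needs cleverer modeling than tall data.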
If you are writing a book on Bayesian statistics

This post is somewhat marginal to R in that there are several statistical systems that could be used to tackle the problem. Bayesian statistics is one of those topics that I would like to understand better, much better, in fact. Unfortunately, I struggle to get the time to attend courses on the topic between running my own lectures, research and travel; there are always books, of course. After we had some strong earthquakes in Christchurch we have had limited access to most of our physical library (we still had full access to all our electronic collection†). Last week I had a quick visit to the library and picked up three introductory books: Albert’s Bayesian computation with R, Marin and Robert’s Bayesian core: a practical approach to computational Bayesian statistics and Bolstad’s Understanding computational Bayesian statistics (all links to Amazon). My intention was to see if I could use one (or several) of them to start on the topic. What follows are my (probably unfair) comments after reading the first couple of chapters of each book. In my (highly individual and dubious) opinion Albert’s book is the easiest to read. I was waiting to see the doctor while reading—and actually understanding—some of the concepts. The book is certainly geared towards R users and gradually develops the code necessary to run simple analyses, from estimating a proportion to fitting (simple) hierarchical linear models. I’m still reading, which is a compliment. Marin and Robert’s book is quite different in that it uses R as a vehicle (like this blog) but the focus is more on the conceptual side and covers more types of models than Albert’s book. I do not have the probability background for this course (or maybe I did, but it was ages ago); however, the book makes me want to learn/refresh that background. An annoying comment on the book is that it is “self-contained”; well, anything is self-contained if one asks for enough prerequisites!
I’m still reading (jumping between Albert’s and this book), and the book has managed to capture my interest. Finally, Bolstad’s book. How to put this? “It is not you, it is me”. It is much more technical and I do not have the time, nor the patience, to wait until chapter 8 to do something useful (logistic regression). This is going back to the library until an indeterminate future. If you are now writing a book on the topic I would like you to think of the following user case:

• the reader has little or no exposure to Bayesian statistics, but has been working for a while with ‘classical’ methods,
• the reader is self-motivated, but he doesn’t want to spend ages to be able to fit even a simple linear regression,
• the reader has little background on probability theory, but he is willing to learn some in between learning the tools and running some analyses,
• using a statistical system that allows for both classical and Bayesian approaches is a plus.

It is hard for me to be more selfish in this description; you are potentially writing a book for me.

† After the first quake our main library looked like this. Now it is mostly normal.

P.S. After publishing this post I remembered that I came across a PDF copy of Doing Bayesian Data Analysis: A Tutorial with R and BUGS by Kruschke. Setting aside the dodginess of the copy, the book looked well-written, started from first principles and had puppies on the cover (!), so I ordered it from Amazon.

P.D. 2011-12-03 23:45 AEST Christian Robert sent me a nice email and wrote a few words on my post. Yes, I’m still plodding along with the book although I’m taking a ten day break while traveling.

P.D. 2011-11-25 12:25 NZST Here is a list of links to Amazon for the books suggested in the comments:

Surviving a binomial mixed model

A few years ago we had this really cool idea: we had to establish a trial to understand wood quality in context.
Sort of following the saying “we don’t know who discovered water, but we are sure that it wasn’t a fish” (attributed to Marshall McLuhan). By now you are thinking WTF is this guy talking about? But the idea was simple: let’s put in a trial that had the species we wanted to study (Pinus radiata, a gymnosperm) and an angiosperm (Eucalyptus nitens if you wish to know) to provide the contrast, as they are supposed to have vastly different types of wood. From space the trial looked like this: [aerial photo of the trial omitted]. The reason you can clearly see the pines but not the eucalypts is because the latter were dying like crazy over a summer drought (45% mortality in one month). And here we get to the analytical part: we will have a look only at the eucalypts, where the response variable can’t get any clearer; trees were either totally dead or alive. The experiment followed a randomized complete block design, with 50 open-pollinated families in 48 blocks. The original idea was to harvest 12 blocks each year but—for obvious reasons—we canned this part of the experiment after the first year. The following code shows the analysis in asreml-R, lme4 and MCMCglmm:

sasreml = asreml(surv ~ 1, random = ~ Fami + Block,
                 data = euc,
                 family = asreml.binomial(link = 'logit'))

#                     gamma component  std.error  z.ratio
#Fami!Fami.var   0.5704205 0.5704205 0.14348068 3.975591
#Block!Block.var 0.1298339 0.1298339 0.04893254 2.653324
#R!variance      1.0000000 1.0000000         NA       NA

#                constraint
#Fami!Fami.var     Positive
#Block!Block.var   Positive
#R!variance           Fixed

# Quick look at heritability
varFami = summary(sasreml)$varcomp[1, 2]
varRep = summary(sasreml)$varcomp[2, 2]
h2 = 4*varFami/(varFami + varRep + 3.29)
#[1] 0.5718137

slme4 = lmer(surv ~ 1 + (1|Fami) + (1|Block),
             data = euc,
             family = binomial(link = 'logit'))

#Generalized linear mixed model fit by the Laplace approximation
#Formula: surv ~ 1 + (1 | Fami) + (1 | Block)
#   Data: euc
#  AIC  BIC logLik deviance
# 2725 2742  -1360     2719
#Random effects:
# Groups Name        Variance Std.Dev.
# Fami   (Intercept) 0.60941  0.78065
# Block  (Intercept) 0.13796  0.37143
#Number of obs: 2090, groups: Fami, 51; Block, 48

#Fixed effects:
#            Estimate Std. Error z value Pr(>|z|)
#(Intercept)   0.2970     0.1315   2.259   0.0239 *

# Quick look at heritability
varFami = VarCorr(slme4)$Fami[1]
varRep = VarCorr(slme4)$Block[1]
h2 = 4*varFami/(varFami + varRep + 3.29)
#[1] 0.6037697

# And let's play to be Bayesians!
pr = list(R = list(V = 1, n = 0, fix = 1),
          G = list(G1 = list(V = 1, n = 0.002),
                   G2 = list(V = 1, n = 0.002)))

sb <- MCMCglmm(surv ~ 1, random = ~ Fami + Block,
               family = 'categorical',
               data = euc, prior = pr,
               verbose = FALSE, pr = TRUE,
               burnin = 10000, nitt = 100000, thin = 10)

You may be wondering where does the 3.29 in the heritability formula come from? Well, that’s the variance of the link function, which in the case of the logit link is pi*pi/3. In the case of MCMCglmm we can estimate the degree of genetic control quite easily, remembering that we have half-siblings (open-pollinated plants):

# Heritability
h2 = 4*sb$VCV[, 'Fami']/(sb$VCV[, 'Fami'] + sb$VCV[, 'Block'] + 3.29 + 1)
#     var1
#         lower     upper
#var1 0.4056492 0.9698148
#[1] 0.95

By the way, it is good to remember that we need to back-transform the estimated effects to probabilities, with very simple code:

# Getting mode and credible interval for solutions
inv.logit(HPDinterval(sb$Sol, 0.95))

Even if one of your trials is trashed there is a silver lining: it is possible to have a look at survival.

Coming out of the (Bayesian) closet: multivariate version

This week I’m facing my—and many other lecturers’—least favorite part of teaching: grading exams. In a supreme act of procrastination I will continue the previous post, and the antepenultimate one, showing the code for a bivariate analysis of a randomized complete block design.
Just to recap, the results from the REML multivariate analysis (that used ASReml-R) were the following:

m4 = asreml(cbind(bden, veloc) ~ trait,
            random = ~ us(trait):Block + us(trait):Family,
            data = a, rcov = ~ units:us(trait))

#                                      gamma    component    std.error
#trait:Block!trait.bden:bden    1.628812e+02 1.628812e+02 7.854123e+01
#trait:Block!trait.veloc:bden   1.960789e-01 1.960789e-01 2.273473e-01
#trait:Block!trait.veloc:veloc  2.185595e-03 2.185595e-03 1.205128e-03
#trait:Family!trait.bden:bden   8.248391e+01 8.248391e+01 2.932427e+01
#trait:Family!trait.veloc:bden  1.594152e-01 1.594152e-01 1.138992e-01
#trait:Family!trait.veloc:veloc 2.264225e-03 2.264225e-03 8.188618e-04
#R!variance                     1.000000e+00 1.000000e+00           NA
#R!trait.bden:bden              5.460010e+02 5.460010e+02 3.712833e+01
#R!trait.veloc:bden             6.028132e-01 6.028132e-01 1.387624e-01
#R!trait.veloc:veloc            1.710482e-02 1.710482e-02 9.820673e-04

#                                  z.ratio constraint
#trait:Block!trait.bden:bden     2.0738303   Positive
#trait:Block!trait.veloc:bden    0.8624639   Positive
#trait:Block!trait.veloc:veloc   1.8135789   Positive
#trait:Family!trait.bden:bden    2.8128203   Positive
#trait:Family!trait.veloc:bden   1.3996166   Positive
#trait:Family!trait.veloc:veloc  2.7650886   Positive
#R!variance                             NA      Fixed
#R!trait.bden:bden              14.7057812   Positive
#R!trait.veloc:bden              4.3442117   Positive
#R!trait.veloc:veloc            17.4171524   Positive

The corresponding MCMCglmm code is not that different from ASReml-R, after which it is modeled anyway. Following the recommendations of the MCMCglmm Course Notes (included with the package), the priors have been expanded to diagonal matrices with degree of belief equal to the number of traits. The general intercept is dropped (-1) so the trait keyword represents trait means. We are fitting unstructured (us(trait)) covariance matrices for both Block and Family, as well as an unstructured covariance matrix for the residuals.
Finally, both traits follow a gaussian distribution:

bp = list(R = list(V = diag(c(0.007, 260)), n = 2),
          G = list(G1 = list(V = diag(c(0.007, 260)), n = 2),
                   G2 = list(V = diag(c(0.007, 260)), n = 2)))

bmod = MCMCglmm(cbind(veloc, bden) ~ trait - 1,
                random = ~ us(trait):Block + us(trait):Family,
                rcov = ~ us(trait):units,
                family = c('gaussian', 'gaussian'),
                data = a, prior = bp,
                verbose = FALSE, pr = TRUE,
                burnin = 10000, nitt = 20000, thin = 10)

Further manipulation of the posterior distributions requires having an idea of the names used to store the results. Following that, we can build an estimate of the genetic correlation between the traits (Family covariance between traits divided by the square root of the product of the Family variances). Incidentally, it wouldn’t be a bad idea to run a much longer chain for this model, so the plot of the posterior for the correlation looks better, but I’m short of time:

rg = bmod$VCV[, 'veloc:bden.Family']/sqrt(bmod$VCV[, 'veloc:veloc.Family'] *
                                          bmod$VCV[, 'bden:bden.Family'])

HPDinterval(rg, prob = 0.95)
#         lower     upper
#var1 -0.132996 0.5764006
#[1] 0.95

And that’s it! Time to congratulate Jarrod Hadfield for developing this package.
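Two quick arithmetic checks of the numbers quoted in these two posts, written as a Python sketch (the constants are just the variance components printed above, so this only verifies the arithmetic, not the models). First, the heritability on the underlying logistic scale from the binomial survival post, where the "3.29" is the logit link variance pi^2/3; second, the family-level (genetic) correlation implied by the REML point estimates for velocity and density, which indeed falls inside the reported credible interval (-0.13, 0.58).

```python
import math

# Variance of the logistic link: the "3.29" in the heritability formula
link_var = math.pi ** 2 / 3                    # ~3.2899

# lme4 components from the binomial survival model
var_fami, var_block = 0.60941, 0.13796
h2 = 4 * var_fami / (var_fami + var_block + link_var)

# ASReml-R family-level components from the bivariate model
cov_vb = 1.594152e-01     # Family covariance, veloc:bden
var_bden = 8.248391e+01   # Family variance, density
var_veloc = 2.264225e-03  # Family variance, velocity
rg = cov_vb / math.sqrt(var_bden * var_veloc)
```

The heritability comes out at roughly 0.60, matching the lme4 figure in the earlier post, and the REML-based point estimate of the genetic correlation sits near 0.37, comfortably inside the posterior interval printed above.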
Solving ordinary differential equations with OpenCL in C++

In this article we show how odeint can be adapted to work with VexCL. odeint is a library for solving ordinary differential equations (ODEs) numerically with C++. ODEs are important in many scientific areas and hence numerous applications for odeint can be found. VexCL is a high-level C++ library for OpenCL. Its main feature is expression templates, which significantly simplify the way one writes code for numerical problems. By using OpenCL the resulting code can run on a GPU or can be parallelized on multiple cores. Both libraries have been introduced here on the CodeProject:

Note: This article does not give an introduction to odeint and ordinary differential equations. If you are unfamiliar with those read the odeint article first!

Note: VexCL needs C++11 features! So you have to compile with C++11 support enabled.

odeint provides a mechanism which lets the user change the way the elementary numerical computations (addition, multiplication, ...) are performed. This mechanism consists of a combination of state_type, algebra and operations. The state_type represents the state of the ODE and is usually a vector type like std::vector or std::array. An example is:

std::array< double , 3 > x1 , x2;
// initialize x1, x2
odeint::range_algebra algebra;
double dt = 0.1;
algebra.for_each2( x1 , x2 , default_operations::scale_sum1( dt ) );
// computes x1 = dt * x2 for all elements of x1 and x2

The algebra is responsible for iterating over all elements of the state whereas the operations are responsible for the elementary operation. In the above example a for_each2 is used, which means that two state types are iterated. The operation is a scale_sum1 which simply calculates x1[i] = dt * x2[i].
odeint provides a set of predefined algebras:

• range_algebra: the default algebra, which works on Boost.Ranges
• array_algebra: a specialized algebra for boost::array
• fusion_algebra: an algebra for compile-time sequences like boost::fusion::vector, std::tuple, ..., see Boost.Fusion
• vector_space_algebra: an algebra for vector space types which redirects all computations directly to the operations.
• thrust_algebra: an algebra for Thrust

Many libraries for vector and matrix types provide expression templates for the elementary operations. Examples are Boost.Ublas, MTL4, and VexCL. Such libraries do not need their own algebra but can be used with the vector_space_algebra and the default_operations, which simply call the operations directly on the matrix or vector type. All you have to do in this case is to adapt odeint's resizing mechanism. How this works for VexCL is described in this article. The adaption of Boost.Ublas and MTL4 is then very similar to the adaption of VexCL.

Adapting VexCL

VexCL introduces several vector types which live on OpenCL devices. The main vector type is vex::vector, which is the classical analog to std::vector. vex::vector can also be split over multiple devices. One of the major points of VexCL is that it supports expression templates. For example, it is possible to write code like

vex::vector< double > x , y , z;
// initialize x, y, z
z = 0.125 * ( x * x + 2.0 * x * y + y * y );

The second line in this example lazily creates an expression template which is evaluated when the assignment operator is invoked. The advantage is surely that temporaries are avoided and you do not lose any performance. Since VexCL already supports expression templates it can directly be used with the vector_space_algebra of odeint. There is no need to introduce an additional algebra or new operations.
The example from the previous section can be written as

vex::vector< double > x , y;
vector_space_algebra algebra;
double dt = 0.1;
algebra.for_each2( x , y , default_operations::scale_sum1( dt ) );

In order to use vex::vector with odeint only the resizing of VexCL needs to be adapted for odeint. Resizing in odeint is necessary since many solvers need temporary state types. These state types need to be constructed and initialized, which is done by the resizing mechanism of odeint. The resizing mechanism consists of three class templates: is_resizeable<>, which is simply a meta function telling odeint if the type is really resizable; same_size_impl<>, which has a static method same_size taking two state_types as arguments and returning whether both types have the same size; and resize_impl<>, which performs the actual resizing. These classes have a default implementation and can be specialized for any type. For VexCL the specialization is:

template< typename T >
struct is_resizeable< vex::vector< T > > : boost::true_type { };

template< typename T >
struct resize_impl< vex::vector< T > , vex::vector< T > >
{
    static void resize( vex::vector< T > &x1 , const vex::vector< T > &x2 )
    {
        x1.resize( x2.queue_list() , x2.size() );
    }
};

template< typename T >
struct same_size_impl< vex::vector< T > , vex::vector< T > >
{
    static bool same_size( const vex::vector< T > &x1 , const vex::vector< T > &x2 )
    {
        return x1.size() == x2.size();
    }
};

That is all. Having the specializations one can use VexCL and odeint. Of course, these specializations are already defined in odeint. You only need to include:

#include <boost/numeric/odeint/external/vexcl/vexcl_resizing.hpp>

VexCL also has a multi-vector, which packs several instances of vex::vector and allows one to operate synchronously on all of them. The resizing specializations for the multi-vector are very similar to those for vex::vector and are also included in the header above.
The power of GPUs is only used if one tries to solve large problems, such that many sub-problems can be solved in parallel. For ODEs one needs about 10000 coupled ODEs to gain performance from the GPU compared to the CPU. In this section two typical examples of large ODEs are introduced. In the first example we will use the Lorenz system and study its dependence on one of the parameters. The Lorenz system is a system of three coupled ODEs which shows chaotic behavior for a large range of parameters. The ODE reads

dx / dt = -sigma * ( x - y )
dy / dt = R * x - y - x * z
dz / dt = - b * z + x * y

We will study the dependence on the parameter R. Therefore, we create a large set of these systems (each with a different parameter R), pack them all into one system and solve them simultaneously on the GPU. The Lorenz system is a system of three coupled ordinary differential equations. If we want to solve N of these systems the overall state has 3*N entries. We can pack each component separately into one of VexCL's vectors, but a multi-vector consisting of three sub-vectors fits this problem much better. The typedefs are:

typedef vex::vector< double > vector_type;
typedef vex::multivector< double, 3 > state_type;

The vector_type here is needed to store the parameters R. The sub-vectors of the state_type can be accessed via:

state_type X;
// initialize X
auto &x = X(0);
auto &y = X(1);
auto &z = X(2);

So, all x-components of the N Lorenz systems are in X(0), all y-components are in X(1), and all z-components are in X(2). Now, we implement the system function. This function represents the ODE and is used by odeint to solve the ODE. The system needs to be a function object (a functor or a plain C function) with three parameters. Its signature is void( const state_type& , state_type& , time_type ). The first parameter is an input parameter and represents the current state of the ODE, the second one is an output parameter and is used to store the RHS of the ODE.
The third parameter is simply the time. As said above, VexCL supports expression templates for numerical computation. By using expression templates the system function becomes very simple:

const double sigma = 10.0;
const double b = 8.0 / 3.0;

struct sys_func
{
    const vector_type &R;

    sys_func( const vector_type &_R ) : R( _R ) { }

    void operator()( const state_type &x , state_type &dxdt , double t ) const
    {
        dxdt(0) = -sigma * ( x(0) - x(1) );
        dxdt(1) = R * x(0) - x(1) - x(0) * x(2);
        dxdt(2) = - b * x(2) + x(0) * x(1);
    }
};

Note that the system function holds a vector for all parameters R. Each line in the system function computes the expression for the whole set of all N elements of the vector. This is in principle all. We can now instantiate one of odeint's solvers and solve the ODE. A complete main program might look like this:

// setup the opencl context
vex::Context ctx( vex::Filter::Type(CL_DEVICE_TYPE_GPU) );
std::cout << ctx << std::endl;

// set up number of systems, time step and integration time
const size_t n = 1024 * 1024;
const double dt = 0.01;
const double t_max = 100.0;

// initialize R
double Rmin = 0.1 , Rmax = 50.0 , dR = ( Rmax - Rmin ) / double( n - 1 );
std::vector<double> r( n );
for( size_t i=0 ; i<n ; ++i ) r[i] = Rmin + dR * double( i );
vector_type R( ctx.queue() , r );

// initialize the state of the lorenz ensemble
state_type X(ctx.queue(), n);
X(0) = 10.0;
X(1) = 10.0;
X(2) = 10.0;

// instantiate a stepper
odeint::runge_kutta4<
    state_type , double , state_type , double ,
    odeint::vector_space_algebra , odeint::default_operations
> stepper;

// solve the system
integrate_const( stepper , sys_func( R ) , X , 0.0 , t_max , dt );

As you can see, odeint's vector_space_algebra and default operation set are used here. As a second example we choose a chain of coupled phase oscillators. Phase oscillators are a very simplified version of the usual oscillator, where the state is described by a 2π-periodic variable.
If a single phase oscillator is uncoupled its phase φ is described by a linear growth dφ / dt = ω, where ω is the phase velocity. Therefore, interesting behavior can only be observed if two or more oscillators are coupled. In fact, a system of coupled phase oscillators is a prominent example of an emergent system, where the coupled system shows a more complex behavior than its constituents. The concrete example we analyze here is:

dφ(i) / dt = ω(i) + sin( φ(i+1) - φ(i) ) + sin( φ(i) - φ(i-1) )

Note that φ(i) is a function of the time; the argument i denotes the i-th phase in the chain. To implement such equations efficiently on the GPU, Denis did a great job of introducing some kind of generalized stencils. The stencil for our problem is generated by

extern const char oscillator_body[] =
    "return sin(X[-1] - X[0]) + sin(X[0] - X[1]);";

vex::StencilOperator< double, 3, 1, oscillator_body > S( queue_list() );

The first line simply generates an OpenCL string of the elementary operation done in each kernel. The second line instantiates the stencil operator. It can be applied to a vector x by imposing S(x). The complete system function of the chain of phase oscillators is then

extern const char oscillator_body[] =
    "return sin(X[-1] - X[0]) + sin(X[0] - X[1]);";

struct sys_func
{
    const state_type &omega;
    vex::StencilOperator< double, 3, 1, oscillator_body > S;

    sys_func( const state_type &_omega )
        : omega( _omega ) , S( _omega.queue_list() ) { }

    void operator()( const state_type &x , state_type &dxdt , value_type t ) const
    {
        dxdt = omega + S( x );
    }
};

Note again how compact the code is. An equivalent version of the above system with Thrust is much larger. In fact the Lorenz system implementation introduced here has 78 lines of code, whereas the Thrust version has 145. Furthermore, Thrust also needed a separate algebra as well as a separate operations type.

Performance comparison

In this section we compare the performance of VexCL against Thrust.
Thrust is a high-level library for CUDA which provides an STL-like interface to vectors on CUDA devices. Furthermore, it can easily be used to put the computations on one or more cores of your CPU by using OpenMP. The Lorenz example for Thrust is described in the tutorial of odeint. Thrust does not provide expression templates, but it has an advanced iterator system which lets you program the numerical expression in an easy manner. Nevertheless it is more complicated than VexCL. The image below shows the performance of the Lorenz system example for several configurations of Thrust and VexCL. In detail it shows the performance of VexCL on one GPU, two GPUs, three GPUs, and on one CPU core. Furthermore, the performance for Thrust on the GPU (only single-GPU computations are supported) and on one CPU core is shown. It is clearly visible that Thrust outperforms VexCL on one GPU. But if more than one GPU is used (and installed on your computer) VexCL becomes faster. The same holds for the CPU version. Furthermore, one can clearly see that for small system sizes, where the computation time is relatively small, VexCL has a constant run-time. This is due to the fact that OpenCL compiles the kernels at run-time for the GPU. (The right panel shows the performance of VexCL relative to Thrust on the GPU. So if the curves are above 1 then VexCL is faster, otherwise it is slower.) The performance results for the phase oscillator chain are shown in the next figure. For the Thrust version of the chain the system function has been implemented with Thrust's iterator system. Interestingly, VexCL here outperforms Thrust. Of course, for small system sizes the constant overhead is present and here Thrust performs better. But for large systems VexCL becomes faster. This might be due to the fact that the iterators in Thrust have their price, but it is not exactly clear why one version is faster than the other.
We have shown how VexCL can be adapted to odeint and how it can be used to increase the performance when solving large ordinary differential equations. Large here means that the system size (number of coupled ODEs) should be of the order of 10000-100000 to see a reasonable performance gain compared to usual CPUs. For large systems this gain can be about 20 times. The performance of VexCL has also been compared against Thrust. In one example a large ensemble of ODEs has been solved. It turns out that Thrust is about 10% faster in this case compared to VexCL. In the second example a system of coupled phase oscillators has been studied. Here VexCL is faster by a factor of 1.2. Nevertheless, the expression templates of VexCL are a big plus for this library. It lets the user solve complicated problems within minutes where the development with native CUDA or OpenCL code or even with Thrust requires much more time.

• 26.7.2012 - Initial version
Thomas Callister Hales is an American mathematician working on the Langlands program. He is known in the area for having worked on the fundamental lemma, and proving a special case of it over the group Sp(4). Many of his ideas were incorporated into the final proof, due to Ngô Bao Châu. He is also known for his 1998 computer-aided proof of the Kepler conjecture, a centuries-old problem in discrete geometry which states that the most space-efficient way to pack spheres is in a pyramid shape. Hales also proved the honeycomb conjecture.
Le Monde puzzle [#814]

The #814 Le Monde math puzzle was to find 100 digits (between 1 and 10) such that their sum is equal to their product. Given the ten possible values of those digits, this is equivalent to finding integers a[1], …, a[10] such that

a[1] + a[2] + … + a[10] = 100 and a[1] + 2 a[2] + … + 10 a[10] = 1^a[1] 2^a[2] … 10^a[10]

which reduces the number of unknowns from 100 to 10 (or even 9). Furthermore, the fact that the (first) sum of the a[i]'s is equal to 100 implies that the (second) sum of the i a[i]'s is at most 1000, hence each i^a[i] is at most 1000. This reduces the number of possible ten-uplets enough to allow for an enumeration, hence the following R code:

for (i2 in 0:bounds[2])
 for (i3 in 0:bounds[3])
  for (i4 in 0:bounds[4])
   for (i5 in 0:bounds[5])
    for (i6 in 0:bounds[6])
     for (i7 in 0:bounds[7])
      for (i8 in 0:bounds[8])
       for (i9 in 0:bounds[9])
        for (i10 in 0:bounds[10]){
         if (sum(A)<101){
          if (sum((1:10)*A)==prod((1:10)^A))

that produces two answers

[1] 97 0 0 2 0 0 1 0 0 0
[1] 95 2 3 0 0 0 0 0 0 0

i.e. either 97 1′s, 2 4′s and 1 7, or 95 1′s, 2 2′s and 3 3′s. I would actually love to see a coding solution that does not involve this pedestrian pile of “for”. And a mathematical solution based on Diophantine equations. Rather than the equally pedestrian solution given by Le Monde this weekend.

2 Responses to “Le Monde puzzle [#814]”

1. You can use recursion to get rid of the big pile of for loops – each call to the recursive function steps through one level of the loop. It is much slower, however.

□ Thanks, Martyn. I was wondering whether a list could help, instead…
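Taking up the post's request for something other than the pile of "for" loops (and the recursion idea from the comments), here is one possible sketch, in Python rather than R; the function name and structure are mine, not from the original post. It does a depth-first search over the counts a[2], …, a[10] of each digit (the count of 1s is then forced), pruning on the running product, which can never exceed 1000 because it must equal the digit sum.

```python
def solutions():
    # a[d] = number of digits equal to d; a[1] = 100 - (the rest).
    # Each d^a[d] is at most 1000, which bounds every count.
    bounds = {d: max(k for k in range(11) if d ** k <= 1000)
              for d in range(2, 11)}
    sols = []

    def search(d, counts, p):
        if p > 1000:                     # product must equal the sum <= 1000
            return
        if d == 11:
            a1 = 100 - sum(counts)
            digit_sum = a1 + sum(i * c for i, c in zip(range(2, 11), counts))
            if a1 >= 0 and digit_sum == p:
                sols.append([a1] + counts)
            return
        for c in range(bounds[d] + 1):
            search(d + 1, counts + [c], p * d ** c)

    search(2, [], 1)
    return sols
```

Running it returns the same two answers quoted above: 97 ones, two 4s and one 7 (sum = product = 112), and 95 ones, two 2s and three 3s (sum = product = 108).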
Accept and Reject Null Hypothesis

The given hypothesis is tested with the help of the sample data. A simple random sample has the full freedom of giving any value to its statistic. The sample is not aware of our plans. We decide about our hypothesis on the basis of the sample statistic. If the sample does not support the null hypothesis, we reject it on a probability basis and accept the alternative hypothesis. If the sample does not oppose the hypothesis, the hypothesis is accepted. But here ‘accept’ does not mean the acceptance of the null hypothesis but only means that the sample has not strongly opposed it. “Not opposed” does not mean that the sample has strongly supported the hypothesis. The support of the sample in favor of the hypothesis cannot be established. When the hypothesis is rejected, it is rejected with a high probability. Thus rejection of the null hypothesis is a strong, reliable decision.

There is a modern approach in which the terms rejection and acceptance are not used. This modern approach is beyond the level of this book. But it remains true in its place that acceptance of a null hypothesis is a weak decision whereas rejection is strong evidence of the sample against the null hypothesis. When the null hypothesis is rejected, it means the sample has done some statistical work, but when the null hypothesis is accepted, it means the sample is almost silent. This behavior of the sample should not be used in favor of the null hypothesis.
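The "acceptance is weak" point can be made concrete with a small numerical illustration in Python (the data here are invented). With 6 heads in 10 coin tosses, an exact binomial test fails to reject H0: p = 0.5, but it also fails to reject H0: p = 0.7; the sample has not supported either value of p, it has merely stayed almost silent about both.

```python
from math import comb

def binom_two_sided_p(k, n, p0):
    # Exact two-sided p-value: total probability, under H0: p = p0,
    # of all outcomes no more likely than the observed count k.
    probs = [comb(n, i) * p0 ** i * (1 - p0) ** (n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] * (1 + 1e-9))

k, n = 6, 10                               # 6 heads in 10 tosses
p_half = binom_two_sided_p(k, n, 0.5)      # ~0.754: do not reject p = 0.5
p_seven = binom_two_sided_p(k, n, 0.7)     # ~0.500: do not reject p = 0.7
```

Both p-values are far above any usual significance level, so the sample "accepts" two contradictory hypotheses at once, which is exactly why acceptance cannot be read as support.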
Wallpaper Symmetry Exploration From EscherMath Objective: Learn to recognize the lattice in a wallpaper pattern. Learn to use the flow chart to identify wallpaper symmetry groups. • Printed copy of the Wallpaper Symmetry Exploration. 1. Mark the lattice of translations on this geometric pattern: 2. For the following geometric wallpaper patterns • Mark all centers of rotation symmetry. Clearly indicate the order of rotation. • Identify and mark mirror lines and glide reflections. A. B. C. D. E. F. Symmetry Groups 3. Identify the symmetry group of each figure in the previous part. 4. For each of these sketches from Escher's regular division drawings, use the flow chart to identify the symmetry group of each pattern (ignore colors). Handin: This page with the geometric patterns marked, and all symmetry groups identified. Instructor: Wallpaper Symmetry Solutions (Instructors only)
{"url":"http://euler.slu.edu/escher/index.php?title=Wallpaper_Symmetry_Exploration&direction=prev&oldid=9281","timestamp":"2014-04-20T03:37:22Z","content_type":null,"content_length":"21997","record_id":"<urn:uuid:3ce7dc95-435c-41bc-99ac-7fec9d55cee4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Christian Goldbach
From Exampleproblems
Christian Goldbach (March 18, 1690 - November 20, 1764) was a Prussian mathematician, born in Królewiec (Königsberg), in Prussia, as the son of a pastor. Goldbach studied law and mathematics. He traveled widely throughout Europe and met with many famous mathematicians, such as Leibniz, Leonhard Euler, and Nicholas I Bernoulli. Goldbach went to work at the newly opened St Petersburg Academy of Sciences and became tutor to the later Tsar Peter II. Goldbach did important work in mathematics. He is remembered today for Goldbach's conjecture.
External link
• O'Connor, John J., and Edmund F. Robertson. "Christian Goldbach". MacTutor History of Mathematics archive.
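The conjecture mentioned above — that every even integer greater than 2 is the sum of two primes — can be checked for small numbers with a short sketch (the limit 1,000 is an arbitrary choice):

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a prime pair (p, q) with p + q = n for even n > 2, else None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# The conjecture is unproven in general, but holds for every case tried here.
assert all(goldbach_pair(n) for n in range(4, 1000, 2))
print(goldbach_pair(28))   # (5, 23)
```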
{"url":"http://www.exampleproblems.com/wiki/index.php/Christian_Goldbach","timestamp":"2014-04-19T22:29:46Z","content_type":null,"content_length":"20232","record_id":"<urn:uuid:2cec2798-c534-44ae-b00b-29ff12756ba2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Laplace Transform of sin^2t
Topic review (newest first)
2013-05-13 08:58:08
Helsaint wrote: Sin^2(t) = 1/2 - cos2t
That is not correct.
2013-05-13 08:41:54
Shouldn't this be
Sin^2(t) = 1/2 - cos2t
L{sin^2(t)} = 1/2s - s/(s^2+4)
2012-10-26 15:38:46
Welcome to the forum. That is not correct.
2012-10-26 09:18:26
To be clear, I am talking about (sin(t))^2.
2012-10-26 09:17:22
I get that the Laplace transform of sin^2t = -(sin^2(t)e^(-st))/s + 2/(s^3+4s), evaluated from 0...infinity. When I evaluate the limit from 0..infinity I get that the transform equals 0. Did I evaluate that right?
2011-08-10 22:52:21
Wolfram is going to put it in terms of the Dirac delta function, which I think is a step function. There are different definitions for a Fourier transform; that page will partly explain that.
2011-08-10 22:50:01
I just tried the Fourier transform of f(x) = 1 and got ... is that correct? I'll check on Wolfram.
2011-08-10 22:43:50
There are FFT and DFT's. Wikipedia can be a horror story at times. To me that is exactly what that is saying. I have never seen their notation. They are using small f with a cap (borrowed from statistics).
2011-08-10 22:43:40
Also what is the notation for a Fourier transform? For Laplace it's a fancy L, is it a fancy F for Fourier transforms?
2011-08-10 22:38:37
Sorry if it's a bother but do you know how to compute Fourier transforms? I'm trying to learn how. I've seen the Wikipedia article and saw this: for every real number ξ. Does this mean that if I put in some function of x, such as sin(x), I'll get f(ξ) where ξ is a real number? Not sure, I'll post my working in a second. Sorry if I sound stupid...
2011-08-10 22:21:32
Zeroing the LHS will leave you with just the Laplace term. That should be your answer. I was just asking to see what you thought about it.
Since t approaches infinity it will drown out s no matter how small, as long as s > 0. That is nice, spotting the Laplace Transform there. In addition, zetafunc, welcome to the forum! Why not consider becoming a member here?
2011-08-10 22:20:29
I wasn't given an interval for s, sorry. I am just waiting for my GCSE results (I turn 16 in August) and I'm just trying to extend my knowledge of calculus. I want to learn about Fourier transforms too hopefully but I need some practice with that.
2011-08-10 22:18:46
What I meant is that I get Then evaluate RHS at 0 and subtract that from the evaluation at infinity. I got 0... so then we have Therefore assuming s > 0. I also tried the Laplace transformation for sin^a(t) and got .
2011-08-10 22:11:21
I am glad to help but we are not done yet. The LHS has to be evaluated at infinity and then you subtract the evaluation of it at 0. The RHS is untouched. How are you getting 0 for the LHS? If s is very small then the LHS is not zero. Were you given some interval for s?
2011-08-10 22:03:07
Thanks for the response again and confirming that my IBP was correct -- I think I get it now - subtract the rightmost term from both sides to get y(s) - 2/(s^3 + 4s), evaluate the RHS at 0 and infinity to get 0 (0 - 0 = 0), then add 2/(s^3 + 4s) to both sides to get the completed Laplace transform? Is that correct? Phew, thanks for your help.
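To confirm the result the thread converges on — L{sin^2(t)} = 2/(s^3 + 4s) = 1/(2s) - s/(2(s^2 + 4)) — here is a quick numerical check; the truncation limit T and step count are arbitrary sketch choices:

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    """Trapezoidal approximation of the Laplace integral of f over [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 1.5
approx = laplace_numeric(lambda t: math.sin(t) ** 2, s)
exact = 2.0 / (s**3 + 4.0 * s)   # = 1/(2s) - s/(2(s^2 + 4))
assert abs(approx - exact) < 1e-5
```

The boundary term -(sin^2(t)e^(-st))/s from the integration by parts vanishes at both limits (for s > 0), which is why only the 2/(s^3 + 4s) term survives.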
{"url":"http://www.mathisfunforum.com/post.php?tid=16166&qid=237181","timestamp":"2014-04-20T08:42:55Z","content_type":null,"content_length":"25246","record_id":"<urn:uuid:5a93bce6-ef9c-4c48-93f7-2b407d6dacdf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
MBase: Representing knowledge and context for the integration of mathematical software systems
Results 1 - 10 of 32
1. In Proceedings AISC'2000, 2000. Cited by 42 (5 self). In this paper we present an extension OMDoc to the OpenMath standard that allows to represent the semantics and structure of various kinds of mathematical documents, including articles, textbooks, interactive books, courses. It can serve as the content language for agent communication of mathematical services on a mathematical software bus.
2. CADE-17, LNAI 1831, 2000. Cited by 21 (9 self). In this paper we describe the MBase system, a web-based, distributed mathematical knowledge base. This system is a mathematical service in MathWeb that offers ...
3. In Proceedings of the IJCAI Workshop on Knowledge Representation, 2003. Cited by 19 (16 self). The representation of knowledge for a mathematical proof assistant is generally used exclusively for the purpose of proving theorems. Aiming at a broader scope, we examine the use of mathematical knowledge in a mathematical tutoring system with flexible natural language dialog. Based on an analysis of a corpus of dialogs we collected with a simulated tutoring system for teaching proofs in naive set theory, we identify several interesting problems which lead to requirements for mathematical knowledge representation. This includes resolving reference between natural language expressions and mathematical formulas, determining the semantic role of mathematical formulas in context, and determining the contribution of inference steps specified by the user.
4. Proc. of Artificial Intelligence and Symbolic Computation, number 4120 in LNAI, 2006. Cited by 18 (1 self). We present a search engine for mathematical formulae. The MathWebSearch system harvests the web for content representations (currently MathML and OpenMath) of formulae and indexes them with substitution tree indexing, a technique originally developed for accessing intermediate results in automated theorem provers. For querying, we present a generic language extension approach that allows constructing queries by minimally annotating existing representations. First experiments show that this architecture results in a scalable application.
5. Bulletin of the ACM Special Interest Group on Symbolic and Automated Mathematics (SIGSAM), 2000. Cited by 12 (3 self). The OpenMath framework for transmitting mathematical objects over the Internet relies on the concept of Content Dictionaries (CDs) to define the semantics of mathematical objects. This is an essential measure for establishing a meaningful communication amongst mathematical software systems (and humans). Currently, the infrastructure for conceiving, administering, and viewing CDs is limited to a file-based, almost flat repository. In this paper, we propose to use the OMDoc extension of the OpenMath XML encoding as an infrastructure to express and manipulate content dictionary information. OMDoc extends OpenMath by adding support for document markup (making the CDs more readable to the human user) and structured specification (making them more explicit and formal, and allowing the user to reuse and inherit CD information in a flexible but well-defined way).
6. Mathematical Knowledge Management, MKM'03, number 2594 in LNCS, 2003. Cited by 11 (0 self). We propose an infrastructure for collaborative content management and version control for structured mathematical knowledge. This will enable multiple users to work jointly on mathematical theories with minimal interference. We describe the API and the functionality needed to realize a cvs-like version control and distribution model. This architecture extends the cvs architecture in two ways, motivated by the specific needs of distributed management of structured mathematical knowledge on the Internet. On the one hand the one-level client/server model of cvs is generalized to a multi-level graph of client/server relations, and on the other hand the underlying change-detection tools take the math-specific structure of the data into account.
7. In Proc. Transgressive Computing 2006: A conference in honor of Jean Della Dora (TC 2006), 2006. Cited by 10 (2 self). We examine the problem of notation selection in mathematical computing environments. Users of mathematical software may require different notations for the same expression in a variety of settings. How this can be managed in a general way is the subject of this paper. We describe a software tool that can be configured to allow mathematical packages to provide output according to specified notation preferences. We explore how the choice of a set of notations can be used to disambiguate mathematical input and output in a variety of settings, including mathematical handwriting recognition, mathematical knowledge management and computer algebra systems.
8. Proc. of the 3rd Int. Conference on Mathematical Knowledge Management, MKM'04, 2004. Proving is an activity that makes use of mathematical knowledge. ...
9. Journal of Computer Science and Technology, 2007. Cited by 8 (5 self). Natural language interaction between a student and a tutoring or an assistance system for mathematics is a new multi-disciplinary challenge that requires the interaction of (i) advanced natural language processing, (ii) flexible tutorial dialog strategies including hints, and (iii) mathematical domain reasoning. This paper provides an overview of the current research in the multi-disciplinary research project Dialog, whose goal is to build a prototype dialog-enabled system for teaching to do mathematical proofs. We present the crucial sub-systems in our architecture: the input understanding component and the domain reasoner. We present an interpretation method for mixed-language input consisting of informal and imprecise verbalization of mathematical content, and a proof manager that supports assertion-level automated theorem proving that is a crucial part of our domain reasoning module. Finally, we briefly report on an implementation of a demo system.
10. In Proceedings of the Third International Conference on Mathematical Knowledge Management, MKM 2004, Bialowieza, Poland, LNCS 3119, 2004. Cited by 8 (2 self). The paper describes an innovative technique for efficient retrieval of mathematical statements from large repositories, developing and substantially improving the metadata-based approach introduced in [13].
Appendix C
Basic Survey Computations
Field Manual 3-34.331, Topographic Surveying, 16 January 2001
This appendix contains recommended procedures for performing basic survey computations. Until recently, three different forms were used to compute a two-point intersection. Army units have developed a one-sheet format (Figure C-1) to use when computing a two-point intersection. This one-sheet format is broken down into three parts and combines portions of DA Forms 1920, 1938, and 1947. Part I is from DA Form 1920, Part II is from DA Form 1938, and Part III is from DA Form 1947.
Figure C-1. Format for Basic Survey
C-1. Tabulate data (known and field) for a two-point intersection on DA Form 1962 (Figure C-2) or on a blank piece of paper with an identifying heading. Include the following information:
• A properly oriented sketch of the triangle with the known baseline stations, an unknown station, and any other information that may be needed to organize computations. Label the unknown point as number 1 and the known points (clockwise from the unknown point) as number 2 and number 3.
• The position and elevation of known stations.
• The grid azimuth and grid distance of the known baseline.
• The observed horizontal angles, ZDs, and HIs.
NOTE: The grid azimuth (denoted by t) and the grid distance may be computed on DA Form 1934 by using UTM coordinates. If needed, conversions can be computed on DA Forms 1932 and 1933.
C-2. Perform the following steps to complete Part I (Figure C-1):
Step 1. Abstract all pertinent information from DA Form 1962 onto Part I. Include the following information:
• Record the following:
□ Project name.
□ Project location.
□ Organization performing the survey.
□ Date of computation.
• Record the station names (under the station column) opposite their respective numbers: Station 1 (the unknown station) and Stations 2 and 3 (the known stations).
• Record the observed horizontal angles opposite their respective numbers under the observed-angle column.
• Record the distance of the given side (side 2-3) that serves as the baseline under the distance column.
• Record the station names that correspond to each side under the side column.
Step 2. Complete the following items in Part I:
• Compute the unknown angle (number 1) by subtracting the two observed angles from 180°.
• Compute the sine of angle number 1 and record to nine decimal places with the sign (round the answer).
• Compute the side/sine ratio (denoted by D) by dividing the distance of the given side (side 2-3) by the sine of angle number 1 and record to six decimal places (round the answer).
• Compute the sine of angle number 2 and record to nine decimal places with the sign (round the answer).
• Compute side 1-3 by multiplying the sine of angle number 2 by D and record to three decimal places (round the answer).
• Compute the sine of angle number 3 and record to nine decimal places with the sign (round the answer).
• Compute side 1-2 by multiplying the sine of angle number 3 by D and record to three decimal places (round the answer).
C-3. Perform the following steps to complete Part II (Figure C-1):
Step 1. Abstract all necessary information from DA Forms 1962 and Part I onto Part II. Record the following:
• Project name.
• Project location.
• Organization performing the survey.
• Ellipsoid name.
• Zone number.
• Meridian designation.
• t (2 to 3).
• t (3 to 2).
• Angle at Station 2 (∠2).
• Angle at Station 3 (∠3).
• Northing and easting of Station 2 (N2 and E2).
• Northing and easting of Station 3 (N3 and E3).
• Station names opposite their appropriate numbers (for example, 2 ABE, 1 Pole, or 3 CAT).
• Grid distance of side 1-2 (from Part I).
• Grid distance of side 1-3 (from Part I).
Step 2. Complete the following items:
• Compute t (2 to 1) by adding ∠2 to t (2 to 3). If the sum exceeds 360°, subtract 360°.
• Compute the sine of t (2 to 1).
Record to nine decimal places with the sign (round the answer).
• Compute the dE by multiplying the sine of t (2 to 1) by the grid distance of side 2-1. Record to three decimal places with the sign (round the answer).
• Compute E1 by algebraically adding dE and E2. Record to three decimal places.
• Compute the cosine of t (2 to 1). Record to nine decimal places with the sign (round the answer).
• Compute the dN by multiplying the cosine of t (2 to 1) by the grid distance of side 2-1. Record to three decimal places with the sign (round the answer).
• Compute N1 by algebraically adding dN and N2. Record to three decimal places.
• Compute t (3 to 1) by subtracting ∠3 from t (3 to 2). If ∠3 is larger than t (3 to 2), add 360° before subtracting.
• Compute the sine of t (3 to 1). Record to nine decimal places with the sign (round the answer).
• Compute dE by multiplying the sine of t (3 to 1) by the grid distance of side 3-1. Record to three decimal places with the sign (round the answer).
• Compute E1 by algebraically adding dE and E3. Record to three decimal places.
• Compute the cosine of t (3 to 1). Record to nine decimal places with the sign (round the answer).
• Compute dN by multiplying the cosine of t (3 to 1) by the grid distance of side 3-1. Record to three decimal places with the sign (round the answer).
• Compute N1 by algebraically adding dN and N3. Record to three decimal places.
NOTE: Compare the two sets of N1 and E1. They must agree to within 0.001. If they do not, then a math or abstraction error was made, and Part II must be recomputed.
C-4. Perform the following steps to complete Part III (Figure C-1):
Step 1. Abstract all information from DA Forms 1962 and Part II onto Part III. Record the following:
• Project name.
• Project location.
• Organization performing the survey.
• Date of computation.
• Name of the station whose elevation is known (Station 1, occupied).
• Name of the station whose elevation is unknown (Station 2, observed).
• Object sighted (for example, target or obstruction light).
• Mean observed ZD (denoted by ζ).
• Mean latitude (denoted by φ) and the azimuth of a line (denoted by α).
NOTE: The azimuth of a line is recorded to the nearest minute and is obtained from Part II. The mean latitude is obtained by converting the northings and eastings computed on Part II to geographic positions and then taking the mean of the latitudes.
• Weighted mean coefficient of refraction (0.5 - m). When this is not observed, use 0.4290.
• Grid distance (denoted by s) (from Part II).
• Elevation of the occupied station (denoted by h1) (from DA Form 1962).
• HI of the station occupied (from DA Form 1962).
Step 2. Compute the elevation by using the following formulas:
• Compute rho (denoted by r) sine 1". Record to three decimal places (round the answer). r is the mean radius of curvature in the plane of the distance and will be given, where
R = radius of curvature in the plane of the meridian (obtained from NIMA's table-generating software)
N = radius of curvature in the plane of the prime vertical (obtained from NIMA's table-generating software)
• Compute the correction for the earth's curvature (denoted by k) in seconds (denoted by secs). Record to one decimal place (round the answer).
k (in secs) = s(0.5 - m) / (r sin 1")
m = mean coefficient of refraction
• Compute (90° - ζ + k).
• Compute the tangent of (90° - ζ + k).
• Compute h2 - h1. Record to three decimal places with the sign (round the answer).
h2 - h1 = s × tan(90° - ζ + k)
• Compute the corrected elevation by algebraically adding (h2 - h1), h1, and HI.
• Repeat Part III, steps 1 and 2, for observations taken from the other end of the baseline.
• Sign and date the form.
NOTE: Compute the DE between the two computed elevations. Use the following formula to determine the AE: Use the shortest of the two distances to the unknown point. If the DE is larger than the AE, check for math and abstraction errors.
Record seconds to one decimal place (round the answers). Step 4. Compute the AEC by subtracting the fixed (known) ending azimuth from the computed ending azimuth. Compute to one decimal place with the sign. Record in the "Total Angular Closure" block on DA Form 1940. NOTE: The AEC is always equal to the computed values minus the fixed values as shown in the following formula: Step 5. Compute the allowable AEC by using the formula from DMS Special Text (ST) 031. Since this is a third-order, Class I traverse, the formula used for computing the AE is �10" NOTE: The AE is always truncated. Do not round up the AE, because rounding will allow more error. Record to one decimal place. Step 6. Compute the correction per station by dividing the AEC by the number of observed angles, then change the sign of the answer. Record to two decimal places with the sign, and truncate the NOTE: No one angle contains more of the error than another since the angular error is accidental. The error must be distributed evenly among the station angles. Step 7. Compute the correction per observed angle and properly assign corrections to be applied to the observed angles. Record to one decimal place with the sign. After computing the correction per station, if the division does not result evenly to 0.1", produce a group of corrections that are within 0.1" of each other as in the following example. +1.94" +2.0" +1.94" +2.0" 2 @ +2.0" = +4.0" +1.94" or +1.9" or +1.94" +1.9" +1.94" +1.9" 3 @ +1.9" = +5.7" +9.7" +9.7" 9.7" total correction C-7. After computing the correction per angle, assign the proper correction to each angle. For uniformity, apply the larger corrections to the larger angles. Record the correction per station in the "Angular Closure Per Station" block on DA Form 1940 (for example, 2 @ +2.0" and 3 @ +1.9"). Sum the corrections. Record in the appropriate block on DA Form 1940. NOTE: The sum of the corrections must equal the AEC, with the opposite sign. 
For example, if the AEC is negative, the corrections will be positive. If the AEC is positive, the corrections will be C-8. Refer to Figure C-8 for working steps 1 through 4. Step 1. Compute the adjusted angles by algebraically adding the correction per angle to the observed angle. Record to one decimal place. Step 2. Compute the azimuth of each traverse section by adding the first adjusted angle to the starting back azimuth. If the azimuth is over 360�, subtract 360�. This is the azimuth to the forward station. The azimuth of all lines must always be stated in the direction that the traverse is being computed. Starting back azimuth = 63�54'20.3" Adjusted angle at TILDON = 263�24'15.5" Forward azimuth: TILDON to AIR FORCE = 327�18'35.8" Step 3. Convert the forward azimuth of the line to a back azimuth by either adding or subtracting 180� from the forward azimuth. The forward azimuth to the next station is then computed by adding the back azimuth from the previous line to the adjusted angle of the next station. If the new forward azimuth to the station is greater than 360�, subtract 360�. Forward azimuth: TILDON to AIR FORCE = 327�18'35.8" - 180�00'00.0" Back azimuth: TILDON to AIR FORCE = 147�18'35.8" Adjusted angle at AIR FORCE = + 149�47'14.1" Forward azimuth: AIR FORCE to ARMY = 297�05'49.9" Step 4. Repeat this procedure until the final station obtains a perfect check. The computed closing azimuth must agree exactly with the known closing azimuth. If not, a math error has been made and must be corrected. NOTE: It is very important that particular attention be given to the direction of the azimuth. An error of 180� may go undetected, and two errors of 180� will cancel out (providing a final azimuth check). This will result in some sections being reversed in direction. Always refer to the sketch provided with the surveyor's field notes. C-9. Refer to Figure C-9 when working steps 1 through 10. Step 1. Compute the SLC. Record to six decimal places. 
h = the mean elevation R = the mean radius of the earth (If h is in feet, use R = 20,906,000 feet. If h is in meters, use R = 6,372,000 meters.) Step 2. Compute the middle northing (denoted by MID N) and the middle easting (denoted by MID E). To compute the MID N, add the northing of the beginning traverse station to the northing of the ending traverse station. Then divide by two. Record to the nearest 1,000 meters. To compute the MID E, add the easting of the beginning traverse station to the easting of the ending traverse station. Then divide by two. Record to the nearest 1,000 meters. Northing for Tildon = 4,283,839.177 m Easting for Tildon = 314,225.115 m Northing for Abbot = + 4,287,595.893 m Easting for Abbot = + 310,461.502 m m m = 4,285,717.535 m = 312,343.3085 m MID N = 4,286,000 m MID E = 312,000 m NOTE: A scale factor (denoted by K) is required to convert a measured distance to a grid distance. A mean K may be computed for the entire traverse or for each section in the traverse. For this example, a single K will be used since the traverse's total length is 8,000 meters or less. Traverses over 8,000 meters require a K to be computed for each section. Compute the northing and easting of the midpoint for the desired traverse or section to the nearest 1,000 meters. Record the formula in the appropriate block on DA Form 1940. Step 3. Compute K. Record to six decimal places (round the answer). K = K[o][1 + (XVIII)q2 + 0.00003 q4] K[o] = the scale factor at the CM (0.9996) XVIII = the Table 18 value q = a factor used to convert E' to millionths Step 4. Obtain the Table 18 (denoted by XVIII) value. The XVIII value is extracted from the tables in DMS ST 045, using the MID N as the argument. Interpolate to compute the XVIII value to six decimal places (round the answer). An example follows: MID N XVIII Value 1) 4,200,000 1) 0.012321 2) 4,286,000 2) Unknown 3) 4,300,000 3) 0.012318 � 0.012318 Step 5. Compute E' by subtracting 500,000 from the MID E. 
Record to 1,000 meters as an absolute value.

E' = MID E - 500,000 = 312,000 - 500,000 = -188,000 m

E' = absolute value of MID E - 500,000

Step 6. Compute q by multiplying E' by 0.000001. Record to six decimal places (round the answer).

q = E' x 0.000001 = 188,000 x 0.000001 = 0.188000

q = a factor used to convert E' to millionths

Step 7. Compute q^2 and q^4. Record to six decimal places (round the answers).

q^2 = 0.188000^2 = 0.035344
q^4 = 0.188000^4 = 0.001249

Step 8. Compute K. Record to six decimal places (round the answer).

K = Ko[1 + (XVIII)q^2 + 0.00003q^4]
  = 0.9996[1 + 0.012318 x 0.035344 + 0.00003 x 0.001249]
  = 1.000035

Ko = the scale factor at the CM (0.9996)
q = a factor used to convert E' to millionths

Step 9. Compute a scale factor used to reduce the grid distance (denoted by K') by multiplying K by the SLC. Record to six decimal places (round the answer).

K' = K x SLC = 1.000035 x 0.999989 = 1.000024

NOTE: After computing K and K', record the values in the "Scale Factor x SLC" blocks on DA Form 1940 beside the appropriate corrected field distance.

Step 10. Compute grid distances as follows.

• Taped distances (corrected horizontal field distances) are reduced to grid distances by multiplying the taped distance by K'.

G = H x K'

G = grid distance
H = taped distance

• EDME distances (reduced geodetic distances) are corrected by multiplying the geodetic distance by K.

G = S x K

G = grid distance
S = geodetic distance

NOTE: Compute the total length of the traverse. Record to three decimal places in the "Length of Traverse" block on DA Form 1940 (Figure C-10).

C-10. Refer to Figure C-11 when working steps 1 through 3:

Step 1. Compute the cosines and sines of the azimuths. Record to seven decimal places with the sign (round the answer).

Step 2. Compute the dNs and the dEs.

• The dN is computed by multiplying the grid distance by the cosine of the azimuth. Record to three decimal places with the sign (round the answer).
dN = grid distance x cos(t)

• The dE is computed by multiplying the grid distance by the sine of the azimuth. Record to three decimal places with the sign (round the answer).

dE = grid distance x sin(t)

Step 3. Compute errors in the dN and the dE (denoted by En and Ee).

• Compute the En by using the following formula. Record to three decimal places with the sign.

En = computed dN - fixed dN

Algebraically add the column of dNs to get the computed dN. Record to three decimal places with the sign.

Computed dN = +3,756.391

Subtract the fixed starting northing from the fixed ending northing to get the fixed dN. Record to three decimal places with the sign.

Fixed ending northing   = +4,287,595.893
Fixed starting northing = +4,283,839.177
Fixed dN                = +3,756.716

En = computed dN - fixed dN = +3,756.391 - (+3,756.716) = -0.325

C-11. Refer to Figure C-12 when working steps 1 through 5.

Step 1. Compute the LEC. Record to four decimal places in the "Linear Closure Ratio" block on DA Form 1940. Compute the LEC by using the following formula:

Step 2. Compute the RC. Round down to the nearest 100. Record in the "Linear Closure Ratio" block on DA Form 1940. Compute the RC by dividing the length of traverse (in meters) by the LEC. Use the following formula:

Step 3. Compute the AE for position closure. Since this is a third-order, Class I traverse, the AE for position closure is equal to 0.4 times the square root of the distance of the traverse in kilometers. Compute the AE for position closure by using the following formula (found in DMS ST 031) (truncate and record the answer to four decimal places):

k = the distance of the traverse in kilometers

NOTE: The LEC must be compared to the AE. If the LEC is equal to or less than the AE, the traverse has met specifications. If the LEC is greater than the AE, no further computations are necessary.

Step 4.
Compute the correction factors (correction to northing [denoted by KN] and correction to easting [denoted by KE]) to be used in adjusting the traverse.

• KN is computed by dividing the En by the length of traverse in meters then changing the sign of the answer. Record to seven decimal places with the sign (round the answer).

• KE is computed by dividing the Ee by the length of traverse in meters then changing the sign of the answer. Record to seven decimal places with the sign (round the answer).

NOTE: A correction factor will always have the opposite sign of the En and the Ee.

Step 5. Compute corrections to dNs and dEs.

• Corrections to dNs are computed by multiplying KN by the grid distance. This is done for each section of the traverse. Record to three decimal places with the sign (round the answer).

Correction to dN = KN x grid distance = +0.0000522 x 1,613.534 (first distance) = +0.084

• Corrections to dEs are computed by multiplying KE by the grid distance. This is done for each section of the traverse. Record to three decimal places with the sign (round the answer).

Correction to dE = KE x grid distance = -0.0000459 x 1,613.534 (first distance) = -0.074

• After all the corrections are recorded, sum the columns. The sum of the corrections must equal the errors of dN and dE with the opposite sign. If, because of rounding errors, the sum does not exactly equal the error of dN or dE, this difference must be distributed. For uniformity, the largest corrections are changed by one unit (third decimal place) until the correct sum is obtained.

│ Correction to dN │ Correction to dE │ New correction to dE │
│      +0.084      │      -0.074      │       -0.074         │
│      +0.197      │ -0.173 (-0.001)  │       -0.174         │
│      +0.038      │      -0.033      │       -0.033         │
│      +0.006      │      -0.005      │       -0.005         │
│      +0.325      │      -0.285      │       -0.286         │

• The sum of the dN corrections is exactly equal to the error (-0.325) with the opposite sign.

• The sum of the dE corrections is different by 0.001 from the error (+0.286). Therefore, an additional 0.001 is applied to the largest correction (0.173).

C-12.
Refer to Figure C-13 when working steps 1 and 2.

Step 1. Compute the adjusted grid coordinates (northings and eastings).

• To compute the adjusted northing, algebraically add the dN and the correction of dN to the northing of the preceding station. Record to three decimal places.

dN = +1,357.957
Correction to dN = +0.084
Northing for Tildon = +4,283,839.177
Northing for Air Force = +4,285,197.218

• To compute the adjusted easting, algebraically add the dE and the correction of dE to the easting of the preceding station. Record to three decimal places.

dE = -871.461
Correction to dE = -0.074
Easting for Tildon = +314,225.115
Easting for Air Force = +313,353.580

NOTE: Continue in a like manner for each station. As a math check, apply the last dN and the last correction of dN to the northing of the preceding station. The answer must equal the fixed northing of the closing station. The same is true for the easting.

Step 2. Sign and date the form.

C-13. Compute the C-factor. Record on DMS Form 5820-R. Refer to Figure C-14 and Figure C-15 when working steps 1 through 15. The step numbers correspond to the numbered blocks on Figure C-14. Figure C-15 shows a completed DMS Form 5820-R.

Step 1. Complete the heading information (1).

Step 2. Record the stadia constant for the instrument (2).

Step 3. Record the backsight-rod (near-rod) readings (in millimeters) (3a).
• Compute and record stadia intervals (in millimeters) (3b). If the difference is greater than 3, reobserve.
• Compute and record the sum of the intervals (3c).
• Compute and record the mean middle-wire reading (in millimeters) to one decimal place (3d).
• Compute and record the sum of the three-wire readings (in millimeters) (3e).

Step 4. Record the foresight-rod (far-rod) readings (in millimeters) (4a).
• Compute and record the stadia intervals (in millimeters) (4b). If the difference is greater than 3, reobserve.
• Compute and record the sum of the intervals (4c).
• Compute and record the mean middle-wire reading (in millimeters) to one decimal place (4d).
• Compute and record the sum of the three-wire readings (in millimeters) (4e).

Step 5. Record the backsight-rod (near-rod) readings (in millimeters) (5a).
• Compute and record the stadia intervals (in millimeters) (5b). If the difference is greater than 3, reobserve.
• Compute and record the sum of the intervals (5c).
• Compute and record the mean middle-wire reading (in millimeters) to one decimal place (5d).

Step 6. Record the foresight-rod (far-rod) readings (in millimeters) (6a).
• Compute and record the stadia intervals (in millimeters) (6b). If the difference is greater than 3, reobserve.
• Compute and record the sum of the intervals (6c).
• Compute and record the mean middle-wire reading (in millimeters) to one decimal place (6d).

Step 7. Compute and record the cumulative totals as follows:

3e + the sum of the second set of near-rod readings from 5a (7a)
3d + 5d (7b) (perform a page check: 7a ÷ 3)
3c + 5c (7c)
4e + the sum of the second set of far-rod readings from 6a (7d)
4d + 6d (7e) (perform a page check: 7d ÷ 3)
4c + 6c (7f)
7f - 7c (7g)

Step 8. Apply the correction for C&R. Due to the short distance from the instrument to the near rod, no corrections are required to the near-rod readings.
• Use the far-rod distance (4c ÷ 10) as an argument to determine the first correction. Table C-1 shows correction factors for C&R according to the observed distance. Record the correction from Table C-1 in the C&R number 1 block (8b).
• Use the far-rod distance (6c ÷ 10) as an argument to determine the second correction. Record the correction from Table C-1 in the C&R number 2 block (8c).
• Correct the sum of the far-rod mean middle-wire readings for C&R. Algebraically add the sum of 8b and 8c to 7e. Since the correction is always negative, just subtract 8b and 8c from 7e (8d).
• Algebraically add 8d and 7b. Record the sum with the sign (8e). 8d is always negative.

Table C-1.
Correction Factors for C&R

│ Distance (m)  │ Correction to Rod (mm) │
│ 0 to 27.0     │ -0.0 │
│ 27.1 to 46.8  │ -0.1 │
│ 46.9 to 60.4  │ -0.2 │
│ 60.5 to 71.4  │ -0.3 │
│ 71.5 to 81.0  │ -0.4 │
│ 81.1 to 89.5  │ -0.5 │
│ 89.6 to 97.3  │ -0.6 │
│ 97.4 to 104.5 │ -0.7 │

Step 9. Compute the C-value by dividing 8e by 7g. Truncate and record to four decimal places with the sign (9).

NOTE: If the sum of the far-rod mean middle-wire readings (8d) is larger than the sum of the near-rod mean middle-wire readings (7b), the C-value is negative.

Step 10. Compare the C-value with that allowed for the instrument. The allowable C-value in most instruments is ±0.004. If the C-value is within specifications, no further computations are required.

Step 11. Correct the C-value if it is not within the specifications.
• The correction to the middle wire (in millimeters) is computed by multiplying the sum of the rod intervals of the last foresight (shown in 6c) by the C-value (shown in 9). Compute to one decimal place (round the answer) (11a).
• The correction to the middle wire (11a) is added algebraically to the last foresight middle-wire rod reading (shown in 6a) to obtain the corrected rod reading. Compute to three decimal places (divide the correction by 1,000 to convert to meters before applying) (round the answer) (11b).

Step 12. Initial the form (12).

Step 13. Perform field adjustments.

Step 14. Repeat steps 1 through 13 until the C-value is within specifications.

Step 15. Give the recording form to the instrument operator once it has been determined that the instrument is within specifications. The instrument operator will check the form for completeness and the computations for correctness and initial the form (15).

C-14. Compute a level line on DA Form 1942. Refer to Figure C-16 when working steps 1 through 20 (the step numbers correspond to the numbered blocks). Figure C-17 shows a completed DA Form 1942.
Data will be required from the field notes (DA Form 5820-R) shown in Figures C-18 through C-21.

Step 1. Complete the headings (1).

Step 2. Record the name of the:
• Beginning BM (2a).
• BM whose elevation is being computed (2b).
• Ending BM (2c).

Step 3. Record the name of the:
• Beginning BM for each section (3a).
• Ending BM for each section (3b).

Step 4. Record the name of the beginning BM (4).

Step 5. Record the direction of the run (forward [F] or backward [B]) (5).

Step 6. Abstract the length of the forward and backward runs per section from the level field notes. Record to the nearest 0.001 kilometer, in their respective directions (6).

Step 7. Compute the length of the line by adding the shortest distance of each section of the level line (7a). Record the total length of the line (7b).

Step 8. Compute the observed DE of the forward and backward runs per section from the level field notes. Record to four decimal places with the sign (in their respective running directions) (8).

Step 9. Compute the DE between the forward and the backward runs per section. Record to four decimal places as an absolute value (no algebraic signs) (9).

Step 10. Determine the mean DE by computing the absolute mean of the forward and the backward DE. Give the mean DE the algebraic sign of the forward run. Record to four decimal places (round the answer) (10).

Step 11. Record the known elevation of the beginning BM (11).

Step 12. Record the known elevation of the ending BM (12).

Step 13. Compute the observed elevation by algebraically adding the mean difference (shown in 10) and the elevation of the beginning BM (shown in 11). Record to four decimal places (13a). Compute each successive observed elevation by algebraically adding the preceding elevation and the respective section's mean DE. Record to four decimal places (13b).

NOTE: The last entry will be the observed elevation of the ending BM. This entry must be compared to the fixed ending elevation.

Step 14.
Record the known elevation of the ending BM (from step 12) (14).

Step 15. Compute the closure by subtracting the known elevation of the ending BM (shown in 14) from the computed observed elevation of the ending BM (shown in 13b). Record to four decimal places with the sign (15).

Step 16. Compute the AE. Truncate and record to four decimal places (16). For third-order specifications, use the following formula:

Km = length of line in kilometers (from 7b)

Compare the AE (16) to the closure (shown in 15). If the numerical value of the closure is equal to or smaller than the AE, the level line meets third-order specifications. If it does not, there is no need to continue with the computations on DA Form 1942.

Step 17. Compute the correction per kilometer. Divide the closure (shown in 15) by the total length of the line (shown in 7b) and change the sign. Record to six decimal places with the sign (round the answer) (17).

Step 18. Compute the correction for each section. Multiply the length of the line (shown in 7a) of each section by the correction per kilometer (shown in 17). Record to four decimal places with the sign (round the answer) (18).

NOTE: The correction to the final section must be equal to the closure (15), with the opposite sign.

Step 19. Compute the adjusted elevation. Algebraically add the correction (shown in 18) to the observed elevation (shown in 13a) of each station. Record to four decimal places (round the answer).

Step 20. Sign and date the form (20).
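The grid scale factor arithmetic in paragraph C-9 (steps 5 through 10) lends itself to a quick cross-check by script. The sketch below is not part of the form; the function and variable names are mine, and the constants are the worked values from the example (XVIII = 0.012318, MID E = 312,000 m, SLC = 0.999989):

```python
def grid_scale_factor(mid_e, xviii, k0=0.9996):
    """Scale factor K at the traverse midpoint (paragraph C-9, steps 5-8)."""
    e_prime = abs(mid_e - 500_000)   # step 5: E' recorded as an absolute value
    q = e_prime * 0.000001           # step 6: convert E' to millionths
    q2 = round(q ** 2, 6)            # step 7: q^2, six decimal places
    q4 = round(q ** 4, 6)            # step 7: q^4, six decimal places
    # step 8: K = Ko[1 + (XVIII)q^2 + 0.00003q^4], six decimal places
    return round(k0 * (1 + xviii * q2 + 0.00003 * q4), 6)

k = grid_scale_factor(mid_e=312_000, xviii=0.012318)
k_prime = round(k * 0.999989, 6)             # step 9: K' = K x SLC
grid_taped = round(1_613.534 * k_prime, 3)   # step 10: G = H x K' for a taped distance
```

Run against the worked example, this reproduces K = 1.000035 (step 8) and K' = 1.000024 (step 9).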
Fourier series problem
November 25th 2009, 06:09 PM

Please help me find the answer to the following Fourier series question:

Find the steady-state oscillation corresponding to d²y/dt² + c·dy/dt + y = r(t), where c > 0 and

r(t) = (π·t)/4 if -π/2 < t < π/2,
r(t) = π·(π - t)/4 if π/2 < t < 3π/2,
and r(t) = r(t + 2π).

(The same is shown in the attached file.) Please be kind enough to help with this homework question of mine.
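For what it's worth, here is a sketch of the standard approach (this looks like the classic driven-oscillator exercise from Kreyszig): expand r(t) in its Fourier sine series, then superpose the steady-state response of each mode. The coefficient formulas below come from the method of undetermined coefficients, so do verify them against your text.

```latex
% r(t) is an odd triangular wave of period 2\pi with peak r(\pi/2)=\pi^2/8, so
r(t) = \sum_{n=1,3,5,\dots} \frac{(-1)^{(n-1)/2}}{n^{2}}\,\sin nt
     = \sin t - \tfrac{1}{9}\sin 3t + \tfrac{1}{25}\sin 5t - \cdots
% For a single forcing term K_n \sin nt, substituting
% y_n = A_n\cos nt + B_n\sin nt into y'' + c\,y' + y = K_n\sin nt gives
A_n = \frac{-c\,n\,K_n}{D_n}, \qquad
B_n = \frac{(1-n^{2})\,K_n}{D_n}, \qquad
D_n = (1-n^{2})^{2} + c^{2}n^{2}.
% With K_n = (-1)^{(n-1)/2}/n^{2} (n odd), the steady-state oscillation is
y(t) = \sum_{n=1,3,5,\dots} \bigl( A_n\cos nt + B_n\sin nt \bigr).
```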
Classical Technical Patterns
May 21, 2012 By systematicinvestor

In my presentation about Seasonality Analysis and Pattern Matching at the R/Finance conference, I used examples that I have previously covered in my blog:

• Month of the Year Seasonality – I introduced the Seasonality charts in the Historical Seasonality Analysis: What company in DOW 30 is likely to do well in January? post. I also developed the Seasonality Tool, a free, user-friendly, point-and-click application to create Seasonality charts, statistics, and reports. The Seasonality Tool can be downloaded here.

• What seasonally happens in the first 20 days in May – I introduced intra-month Seasonality charts in the Happy Holidays and Best Wishes for 2012 post. The intra-month Seasonality charts are also available in the Seasonality Tool, and you can also see the current month intra-month Seasonality chart for SPY at the top right corner of this blog.

• Find historical Matches similar to the last 90 days of price history – I introduced Time Series Matching charts in an earlier post.

The only subject in my presentation that I have not discussed previously was about Classical Technical Patterns. For example, the Head and Shoulders pattern, most often seen in up-trends and generally regarded as a reversal pattern. Below I implemented the algorithm and pattern definitions presented in the Foundations of Technical Analysis by A. Lo, H. Mamaysky, J. Wang (2000) paper.

To identify a price pattern I will follow the steps as described in the Foundations of Technical Analysis paper:

• First, fit a smoothing estimator, a kernel regression estimator, to approximate the time series.
• Next, determine local extrema, tops and bottoms, using the first derivative of the kernel regression estimator.
• Define classical technical patterns in terms of tops and bottoms.
• Search for classical technical patterns throughout the tops and bottoms of the kernel regression estimator.
Let’s begin by loading historical prices for SPY (the source(con)/close(con) and load.packages() lines were lost in extraction; they follow the usual SIT boilerplate):

# Load Systematic Investor Toolbox (SIT)
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

# Load historical data
load.packages('quantmod')
ticker = 'SPY'

data = getSymbols(ticker, src = 'yahoo', from = '1970-01-01', auto.assign = F)
data = adjustOHLC(data, use.Adjusted=T)

# Find Classical Technical Patterns, based on
# Pattern Matching. Based on Foundations of Technical Analysis
# by A.W. LO, H. MAMAYSKY, J. WANG
plot.patterns(data, 190, ticker)

The first step is to fit a smoothing estimator, a kernel regression estimator, to approximate the time series. I used the sm package to fit the kernel regression:

load.packages('sm')
y = as.vector( last( Cl(data), 190) )
t = 1:len(y)

# fit kernel regression with cross-validation
h = h.select(t, y, method = 'cv')
temp = sm.regression(t, y, h=h, display = 'none')

# find estimated fit
mhat = approx(temp$eval.points, temp$estimate, t, method='linear')$y

The second step is to find local extrema, tops and bottoms, using the first derivative of the kernel regression estimator (more details in the paper on page 15):

temp = diff(sign(diff(mhat)))

# loc - location of extrema, loc.dir - direction of extrema
loc = which( temp != 0 ) + 1
loc.dir = -sign(temp[(loc - 1)])

I put the logic for the first and second steps into the find.extrema() function. The next step is to define classical technical patterns in terms of tops and bottoms. The pattern.db() function implements the 10 patterns described in the paper on page 12. For example, let’s have a look at the Head and Shoulders pattern.
The Head and Shoulders pattern:

• is defined by 5 extrema points (E1, E2, E3, E4, E5)
• starts with a maximum (E1)
• E3 is greater than both E1 and E5
• E1 and E5 are within 1.5 percent of their average
• E2 and E4 are within 1.5 percent of their average

The R code below that corresponds to the Head and Shoulders pattern is a direct translation from the pattern description in the paper on page 12, and is very readable:

# Head-and-shoulders (HS)
pattern = list()
pattern$len = 5
pattern$start = 'max'
pattern$formula = expression({
	avg.top = (E1 + E5) / 2
	avg.bot = (E2 + E4) / 2

	# E3 > E1, E3 > E5
	E3 > E1 & E3 > E5 &

	# E1 and E5 are within 1.5 percent of their average
	abs(E1 - avg.top) < 1.5/100 * avg.top &
	abs(E5 - avg.top) < 1.5/100 * avg.top &

	# E2 and E4 are within 1.5 percent of their average
	abs(E2 - avg.bot) < 1.5/100 * avg.bot &
	abs(E4 - avg.bot) < 1.5/100 * avg.bot
})
patterns$HS = pattern

The last step is a function that searches for all defined patterns in the kernel regression representation of the original time series. I put the logic for this step into the find.patterns() function.
Below is a simplified version:

find.patterns <- function
(
	obj,	# extrema points
	patterns = pattern.db()
)
{
	data = obj$data
	extrema.dir = obj$extrema.dir
	data.extrema.loc = obj$data.extrema.loc
	n.index = len(data.extrema.loc)

	# search for patterns
	for(i in 1:n.index) {
		for(pattern in patterns) {

			# check same sign
			if( pattern$start * extrema.dir[i] > 0 ) {

				# check that there is a sufficient number of extrema to complete the pattern
				if( i + pattern$len - 1 <= n.index ) {

					# create environment to check pattern: E1,E2,...,En; t1,t2,...,tn
					envir.data = c(data[data.extrema.loc][i:(i + pattern$len - 1)],
						data.extrema.loc[i:(i + pattern$len - 1)])
					names(envir.data) = c(paste('E', 1:pattern$len, sep=''),
						paste('t', 1:pattern$len, sep=''))
					envir.data = as.list(envir.data)

					# check if pattern was found
					if( eval(pattern$formula, envir = envir.data) ) {
						cat('Found', pattern$name, 'at', i, '\n')
					}
				}
			}
		}
	}
}

I put the logic for the entire process into the plot.patterns() helper function. The plot.patterns() function first calls the find.extrema() function to determine extrema points, and next it calls the find.patterns() function to find and plot patterns. Let’s find classical technical patterns in the last 150 days of SPY history:

# Load historical data
ticker = 'SPY'
data = getSymbols(ticker, src = 'yahoo', from = '1970-01-01', auto.assign = F)
data = adjustOHLC(data, use.Adjusted=T)

# Find Classical Technical Patterns, based on
# Pattern Matching. Based on Foundations of Technical Analysis
# by A.W. LO, H. MAMAYSKY, J. WANG
plot.patterns(data, 150, ticker)

It is very easy to define your own custom patterns and I encourage everyone to give it a try. To view the complete source code for this example, please have a look at the pattern.test() function in rfinance2012.r at github.
XXIX International Symposium on Lattice Field Theory

Plenary Sessions
- Determination of alpha_s from lattice QCD [PoS(Lattice 2011)001, pdf]
- Topological insulators, superconductors and their connections to lattice gauge theory [PoS(Lattice 2011)002]
- Large-N volume independence, conformality and confinement [PoS(Lattice 2011)003]
- Holography and colliding gravitational shock waves [PoS(Lattice 2011)004]
- Asymptotic Safety and Lattice Quantum Gravity [PoS(Lattice 2011)005, pdf]
- Flavor blindness and patterns of flavor sym. breaking in 3-flavor simulations [PoS(Lattice 2011)006]
- Large extra dimensions and lattices [PoS(Lattice 2011)007]
- Signals in the TeV era and the Lattice [PoS(Lattice 2011)008, pdf]
- Exploring Models for New Physics on the Lattice [PoS(Lattice 2011)009, pdf]
- SU(3) gauge theory with sextet fermions [PoS(Lattice 2011)010, pdf]
- QCD at Nonzero Temperature and Density [PoS(Lattice 2011)011, pdf]
- Connecting the Lattice Points: What Lattice QCD Calculations can tell us about the Quark Gluon Plasma [PoS(Lattice 2011)012]
- Fluctuations, correlations: from lattice QCD to heavy ion collisions [PoS(Lattice 2011)013]
- Looking for U(1)_A in Hot QCD Matter with Domain-Wall Fermions [PoS(Lattice 2011)014, pdf]
- GPUs for the Lattice [PoS(Lattice 2011)015]
- Hadron interactions [PoS(Lattice 2011)016, pdf]
- Listening to Noise [PoS(Lattice 2011)017, pdf]
- Flavor Physics in the LHC era: the role of the lattice [PoS(Lattice 2011)018, pdf]
- Standard Model Flavor physics on the Lattice [PoS(Lattice 2011)019, pdf]
- Lattice QCD with Classical and Quantum Electrodynamics [PoS(Lattice 2011)020, pdf]
- Progress on Excited Hadrons in Lattice QCD [PoS(Lattice 2011)021, pdf]
- Nonperturbative QCD corrections to electroweak observables [PoS(Lattice 2011)022, pdf]
- Reweighting in the quark mass [PoS(Lattice 2011)023]
- Direct and Indirect Kaon Physics Directly Below KT-22: A Lattice 2011 Review [PoS(Lattice 2011)024]
- Low Energy Particle Physics and Chiral Extrapolations [PoS(Lattice 2011)025, pdf]

Algorithms and Machines
- Application of Quadrature Methods for Re-Weighting in Lattice QCD [PoS(Lattice 2011)026, pdf]
- A CG method for multiple right hand sides and multiple shifts in lattice QCD calculations [PoS(Lattice 2011)027, pdf]
- A method for resummation of perturbative expansions based on the stochastic solution of Schwinger-Dyson equations [PoS(Lattice 2011)028, pdf]
- Multigrid Algorithms for Domain-Wall Fermions [PoS(Lattice 2011)030, pdf]
- Status of the AuroraScience Project [PoS(Lattice 2011)031, pdf]
- Accurate error bounds and estimates for the sign function [PoS(Lattice 2011)032, pdf]
- Progress on the QUDA code suite [PoS(Lattice 2011)033, pdf]
- Interface tension of 3d 4-states Potts model using the Wang-Landau algorithm [PoS(Lattice 2011)034, pdf]
- Adaptive Algebraic Multigrid in QCD computations [PoS(Lattice 2011)036]
- Finite Volume Effects in B_K with improved staggered fermions [PoS(Lattice 2011)037, pdf]
- Fermions as global correction in lattice QCD [PoS(Lattice 2011)038, pdf]
- QCD calculations with optical lattices? [PoS(Lattice 2011)040, pdf]
- Determinant reweighting for O(a) improved Wilson fermions [PoS(Lattice 2011)041, pdf]
- Modified Block BiCGSTAB for Lattice QCD [PoS(Lattice 2011)042, pdf]
- Automated LQCD code generation for future architectures [PoS(Lattice 2011)043, pdf]
- LatticeQCD using OpenCL [PoS(Lattice 2011)044, pdf]
- APEnet+ project status [PoS(Lattice 2011)045, pdf]
- Aggregation-based Multilevel Methods for Lattice QCD [PoS(Lattice 2011)046, pdf]
- The scaling of the Hybrid Monte Carlo algorithm [PoS(Lattice 2011)047]
- Data analysis using the Gnu R system for statistical computation [PoS(Lattice 2011)048, pdf]
- Symmetric Partitioned Runge-Kutta Methods for Differential Equations on Lie-Groups [PoS(Lattice 2011)049, pdf]
- Accelerating QDP++ using GPUs [PoS(Lattice 2011)050, pdf]
- Improving DWF Simulations: Force Gradient Integrator and the Mobius Accelerated DWF Solver [PoS(Lattice 2011)051, pdf]

Applications beyond the Standard Model
- KMI project on many flavor QCD with N_f=12 and 16 [PoS(Lattice 2011)053, pdf]
- Chiral symmetry restoration in monolayer graphene induced by Kekule distortion [PoS(Lattice 2011)054, pdf]
- Supersymmetric Yang-Mills theory: a first step towards the continuum [PoS(Lattice 2011)055, pdf]
- Hybrid Monte Carlo simulation on the graphene hexagonal lattice [PoS(Lattice 2011)056, pdf]
- The generalized fermion-bag approach [PoS(Lattice 2011)058, pdf]
- Exploring the Phase Diagram for Lattice Quantum Gravity [PoS(Lattice 2011)059]
- Gauge theories with fermions in the two-index symmetric representation [PoS(Lattice 2011)060, pdf]
- RG flows in 3D scalar field theory [PoS(Lattice 2011)061, pdf]
- The unitary Fermi gas at finite temperature: momentum distribution and contact [PoS(Lattice 2011)062, pdf]
- Infrared conformality and lattice simulations [PoS(Lattice 2011)063]
- Numerical results regarding the sign problem in 2 dimensional Supersymmetric Yang-Mills theories with 4 and 16 supercharges [PoS(Lattice 2011)064, pdf]
- MCRG study of 12 fundamental flavors with mixed fundamental-adjoint gauge action [PoS(Lattice 2011)065, pdf]
- Lattice QCD with 12 Degenerate Quark Flavors [PoS(Lattice 2011)066, pdf]
- Exploring the conformal window: SU(2) gauge theory on the lattice [PoS(Lattice 2011)067, pdf]
- Systematic Errors of the MCRG Method [PoS(Lattice 2011)068, pdf]
- Lattice study of 4d ${\cal N}=1$ super Yang-Mills theory with dynamical overlap gluino [PoS(Lattice 2011)069, pdf]
- Dimensional reduction from five-dimensional gauge theories [PoS(Lattice 2011)070, pdf]
- Preliminary study of two-dimensional SU(N) Yang-Mills theory with adjoint matter with Hybrid Monte Carlo approach [PoS(Lattice 2011)071, pdf]
- Large-N reduction in QCD with two adjoint Dirac fermions [PoS(Lattice 2011)072, pdf]
- Twelve fundamental and two sextet fermion flavors [PoS(Lattice 2011)073, pdf]
- Strongly coupled Graphene on the Lattice [PoS(Lattice 2011)074, pdf]
- Study of the Higgs-Yukawa Theory at the Strong-Yukawa Regime [PoS(Lattice 2011)075, pdf]
- Lattice simulations of SU(3) gauge theory with ten flavors of Dirac fermions [PoS(Lattice 2011)076]
- The Higgs mass, bound states, and gauge invariance [PoS(Lattice 2011)077, pdf]
- Sign problem for supersymmetric Yang-Mills theories on the lattice [PoS(Lattice 2011)078, pdf]
- Anomalous scaling in the random-force-driven Burgers equation: A Monte Carlo study [PoS(Lattice 2011)079, pdf]
- KMI (Nagoya) project; Many flavor QCD as exploration of the walking behavior with approximate IR fixed point [PoS(Lattice 2011)080, pdf]
- The Infrared behavior of SU(3) Nf=12 gauge theory - about the existence of conformal fixed point [PoS(Lattice 2011)081, pdf]
- Study of the infrared behavior in SU(2) gauge theory with eight flavors [PoS(Lattice 2011)082]
- On the spectrum of many-flavor QCD [PoS(Lattice 2011)083, pdf]
- Finite volume effects in SU(2) with two adjoint fermions [PoS(Lattice 2011)084, pdf]
- Scalar mass corrections from compact extra dimensions on the lattice [PoS(Lattice 2011)086, pdf]
- S parameter and parity doubling below the conformal window [PoS(Lattice 2011)087, pdf]
- Running coupling from gluon exchange in the Schrodinger functional [PoS(Lattice 2011)089, pdf]
- The chiral phase transition for QCD with sextet quarks [PoS(Lattice 2011)090, pdf]
- Perturbative lattice artefacts in the SF coupling for technicolor-inspired models [PoS(Lattice 2011)091, pdf]
- Fermion RG blocking transformations and conformal windows [PoS(Lattice 2011)092, pdf]
- Lattice Study of the Extent of the Conformal Window in Two-Color Yang-Mills Theory [PoS(Lattice 2011)093, pdf]
- The Lattice Mean-Field Approximation of Gauge-Higgs Unification on the Orbifold [PoS(Lattice 2011)094, pdf]
- Exploring the Phase Diagram for Lattice Quantum Gravity [PoS(Lattice 2011)334, pdf]
- Thermodynamic Study for Conformal Phase in Large Nf Gauge Theory [PoS(Lattice 2011)207, pdf]
- QCD Phase Diagram in Strong Coupling Lattice QCD with Polyakov Loops [PoS(Lattice 2011)318, pdf]

Chiral Symmetry
- Determination of the Wilson ChPT low energy constant c2 [PoS(Lattice 2011)095, pdf]
- Chiral Properties of the Pseudoscalar Meson in Two Flavors QCD [PoS(Lattice 2011)096, pdf]
- Hard pion chiral perturbation theory [PoS(Lattice 2011)097, pdf]
- Gell Mann Oakes Renner relation for multiple chiral symmetries [PoS(Lattice 2011)098, pdf]
- Topological susceptibility with Wilson fermions [PoS(Lattice 2011)099, pdf]
- Spectral Flow and Index Theorem for Staggered Fermions [PoS(Lattice 2011)100, pdf]
- Chiral interpolation in a finite volume [PoS(Lattice 2011)101, pdf]
- Topological susceptibility and chiral condensate with $N_f=2+1+1$ dynamical flavors of maximally twisted mass fermions [PoS(Lattice 2011)102, pdf]
- Low-lying Dirac operator eigenvalues, lattice effects and random matrix theory [PoS(Lattice 2011)103, pdf]
- Topological fluctuations in Two flavors Lattice QCD [PoS(Lattice 2011)104, pdf]
- Non-Goldstone pion masses with NLO in Staggered Chiral Perturbation Theory [PoS(Lattice 2011)105, pdf]
- Index Theorem and Overlap Formalism with Naive and Minimally Doubled Fermions [PoS(Lattice 2011)106, pdf]
- Staggered chiral perturbation theory fits to light pseudoscalar masses and decay constants from HISQ ensembles [PoS(Lattice 2011)107, pdf]
- Aoki Phases in Staggered-Wilson Fermions [PoS(Lattice 2011)108, pdf]
- Light quarks correlators in a mixed action setup [PoS(Lattice 2011)109, pdf]
- Chiral random matrix theory for staggered fermions [PoS(Lattice 2011)110, pdf]
- Effects of the low lying Dirac modes on the spectrum of ground state mesons [PoS(Lattice 2011)111, pdf]
- Symmetries and vacuum structure inside the Aoki phase [PoS(Lattice 2011)112, pdf]
- Progress on the Microscopic Spectrum of the Dirac Operator for QCD with Wilson Fermions [PoS(Lattice 2011)113, pdf]
- Evidence for chiral logarithms in the baryon spectrum [PoS(Lattice 2011)114, pdf]

Hadron Spectroscopy
- Cutoff effects of heavy quark vacuum polarization at one-loop order [PoS(Lattice 2011)115, pdf]
- Charm baryon spectroscopy [PoS(Lattice 2011)116, pdf]
- Lattice study on glueballs in J/psi radiative decays [PoS(Lattice 2011)117, pdf]
- B and bottomonium physics from lattice QCD including c quarks in the sea [PoS(Lattice 2011)118, pdf]
- Excited meson spectroscopy with two chirally improved quarks [PoS(Lattice 2011)119, pdf]
- Group-theoretical construction of finite-momentum and multi-particle operators for lattice hadron spectroscopy [PoS(Lattice 2011)120, pdf]
- The eta' meson with staggered fermions [PoS(Lattice 2011)121, pdf]
- Radiative improvement of the lattice NRQCD action using the background field method and application to the hyperfine splitting of quarkonium states [PoS(Lattice 2011)122, pdf]
- Potentials between pairs of static-light mesons [PoS(Lattice 2011)123, pdf]
- Bound H-dibaryon from Full QCD Simulation on the Lattice [PoS(Lattice 2011)124, pdf]
- Rho meson decay width from 2+1 flavor lattice QCD [PoS(Lattice 2011)125, pdf]
- Interquark potential for the charmonium system with almost physical quark masses [PoS(Lattice 2011)126, pdf]
- Nucleon Mass Spectrum in Full QCD [PoS(Lattice 2011)127, pdf]
- The 1405MeV Lambda Resonance in Full-QCD [PoS(Lattice 2011)129, pdf]
- Charmed meson spectroscopy on the lattice [PoS(Lattice 2011)130, pdf]
- Excited-state hadron masses using the stochastic LapH method [PoS(Lattice 2011)131, pdf]
- Charm quark system on the physical point in 2+1 flavor lattice QCD [PoS(Lattice 2011)132, pdf]
- Rho Resonance on the Lattice [PoS(Lattice 2011)134, pdf]
- Spectra of heavy-light and heavy-heavy mesons containing charm quarks, including higher spin states for N_f = 2+1 configurations [PoS(Lattice 2011)135, pdf]
- Systematic errors in partially-quenched QCD plus QED lattice simulations [PoS(Lattice 2011)136, pdf]
- Decay of the rho and a1 mesons on the lattice using distillation [PoS(Lattice 2011)137, pdf]
- Charmonium spectroscopy from an anisotropic lattice study [PoS(Lattice 2011)140, pdf]
- SU(2) chiral perturbation theory low-energy constants from staggered 2+1 flavor simulations [PoS(Lattice
2011)142 pdf Excited light isoscalar mesons from lattice QCD PoS(Lattice 2011)143 pdf 1+1+1 flavor QCD+QED simulation at the physical point PoS(Lattice 2011)144 pdf Scale setting via the Omega baryon mass PoS(Lattice 2011)145 pdf Excited state baryon spectroscopy from lattice QCD PoS(Lattice 2011)146 pdf Bound state of two-nucleon systems in quenched lattice QCD PoS(Lattice 2011)147 pdf Masses of eta, eta' Mesons from 2+1+1 Twisted Mass Lattice QCD PoS(Lattice 2011)336 pdf Hadron Structure An improved method for extracting matrix elements from lattice three-point functions PoS(Lattice 2011)148 pdf Medium effects in parton distributions PoS(Lattice 2011)149 pdf Excited state Effects in Nucleon Matrix Element Calculations PoS(Lattice 2011)150 pdf Three-Nucleon Forces explored by Lattice QCD Simulations PoS(Lattice 2011)151 pdf Nucleon scalar matrix elements with N_f=2+1+1 twisted mass fermions PoS(Lattice 2011)152 pdf Exploration of the electric spin polarizability of the neutron in lattice QCD PoS(Lattice 2011)153 pdf Two-photon decays of neutral pion from 2+1 flavor lattice QCD PoS(Lattice 2011)154 pdf Rising total cross sections and soft high-energy scattering on the lattice PoS(Lattice 2011)155 pdf Strange and charm quark contents of nucleon from chiral fermions PoS(Lattice 2011)156 pdf Excited state contamination in nucleon structure calculations PoS(Lattice 2011)157 pdf Nucleon sigma terms for 2+1 quark flavours PoS(Lattice 2011)158 pdf S-wave meson-baryon potentials with strangeness from Lattice QCD PoS(Lattice 2011)159 pdf Time-dependent effective Schroedinger equation for lattice nuclear potentials PoS(Lattice 2011)160 pdf Lattice Determination of the Anomalous Magnetic Moment of the Muon PoS(Lattice 2011)161 pdf DWF calculation of the leading order hadronic vacuum polarisation to g-2 of the muon. 
PoS(Lattice 2011)162 pdf Radiative transitions in charmonium from $N_f=2$ twisted mass lattice QCD PoS(Lattice 2011)163 pdf Quark and glue momenta and angular momentum in the nucleon PoS(Lattice 2011)164 pdf Electric polarizability with overlap fermions PoS(Lattice 2011)165 pdf Axial couplings of heavy hadrons from domain-wall lattice QCD PoS(Lattice 2011)166 pdf Baryon-Baryon Interaction of Strangeness S=-1 Sector PoS(Lattice 2011)167 pdf Nucleon structure from 2+1f dynamical DWF lattice QCD at nearly physical pion mass PoS(Lattice 2011)168 pdf Magnetic Properties of the Nucleon PoS(Lattice 2011)170 pdf Octet baryon sigma terms PoS(Lattice 2011)171 Hyperon vector form factors with 2+1 flavor dynamical domain-wall fermions. PoS(Lattice 2011)172 pdf Strangeness S=-2 baryon-bayon interactions from lattice QCD PoS(Lattice 2011)173 pdf Disconnected Contibutions for nucleon 3-point functions. PoS(Lattice 2011)174 pdf An Update on Distribution Amplitudes of the Nucleon and its Parity Partner PoS(Lattice 2011)175 pdf Electric Dipole Moment of the Neutron PoS(Lattice 2011)176 First moments of the nucleon generalized parton distributions from lattice QCD PoS(Lattice 2011)177 pdf Quark contribution to nucleon momentum and spin from calculations with Domain Wall fermions PoS(Lattice 2011)178 pdf First Calculation of Nuclear Parity Violation from Lattice QCD PoS(Lattice 2011)179 pdf Nucleon Form Factors - Closing in on the physical point PoS(Lattice 2011)180 Nonzero Temperature and Density Towards an Effective Importance Sampling in Monte Carlo Simulations of a System with a Complex Action PoS(Lattice 2011)181 pdf Determination of the transition temperature T_c in 2+1 flavor QCD: combined result with the p4, asqtad and HISQ/tree actions PoS(Lattice 2011)182 pdf SU(3) Deconfining phase transition with lower boundary temperatures in the scaling region PoS(Lattice 2011)183 pdf Evading the sign problem in random matrix simulations PoS(Lattice 2011)184 pdf Thermodynamics from 
Twisted Mass Lattice QCD PoS(Lattice 2011)185 The finite temperature phase transition from domain wall fermions PoS(Lattice 2011)186 pdf Phase diagram of QCD with two degenerate staggered quarks PoS(Lattice 2011)187 pdf Topological susceptibility and axial symmetry at finite temperature PoS(Lattice 2011)188 pdf Constraints on the two-flavor QCD phase diagram from imaginary chemical potential PoS(Lattice 2011)189 pdf Worm Algorithms for the QCD Phase Diagram with Effective Theories PoS(Lattice 2011)190 pdf Exploring phase diagram of $N_f=3$ QCD at $\mu=0$ with HISQ fermions PoS(Lattice 2011)191 pdf The finite temperature QCD transition in external magnetic fields PoS(Lattice 2011)192 pdf Quark number susceptibility at finite density and low temperature PoS(Lattice 2011)193 pdf Thermal momentum distribution from shifted boundary conditions PoS(Lattice 2011)194 pdf Inter-quark potentials from NBS amplitudes and their applications PoS(Lattice 2011)195 pdf Chiral Magnetic Effect on the domain-wall fermion PoS(Lattice 2011)196 Complex Langevin dynamics: criteria for correctness PoS(Lattice 2011)197 pdf Correlations and fluctuations at finite temperature PoS(Lattice 2011)198 Scaling behavior in two-flavor QCD, finite quark masses and finite volume effects PoS(Lattice 2011)199 pdf Quark localization by Polyakov loops in high temperature QCD PoS(Lattice 2011)200 pdf The QCD equation of state and the effects of the charm PoS(Lattice 2011)201 pdf Towards a non-perturbative measurement of the heavy quark momentum diffusion coefficient PoS(Lattice 2011)202 pdf Extended study for unitary fermions on lattice using cumulant expansion technique PoS(Lattice 2011)203 pdf Dirac Eigenvalue Spectrum at Finite Temperature Using Domain Wall Fermions PoS(Lattice 2011)204 pdf Strong coupling effective theory with heavy fermions PoS(Lattice 2011)205 pdf Electric and magnetic screening masses around the deconfinement transition PoS(Lattice 2011)206 pdf Histogram method in finite density QCD 
with phase quenched simulations PoS(Lattice 2011)208 pdf QCD thermodynamics with Wilson fermions PoS(Lattice 2011)209 pdf Eigenvalue distribution of the Dirac operator at finite temperature with (2+1)-flavor dynamical quarks using the HISQ action PoS(Lattice 2011)210 pdf Renormalization of Polyakov loops in different representations and the large-N limit PoS(Lattice 2011)211 pdf Corrections to the strong coupling limit of staggered QCD PoS(Lattice 2011)212 pdf Poisson statistics in the high temperature QCD Dirac spectrum PoS(Lattice 2011)213 pdf Finite density QCD phase transition in the heavy quark mass region PoS(Lattice 2011)214 pdf Complex Langevin simulation applied to chiral random matrix model at finite density PoS(Lattice 2011)215 Universal critical behavior in three flavor QCD PoS(Lattice 2011)216 pdf On the phase of quark determinant in lattice QCD with finite chemical potential PoS(Lattice 2011)217 pdf Continuous Time Monte Carlo for QCD in the Strong Coupling Limit PoS(Lattice 2011)218 pdf Towards finite density QCD with Taylor expansions PoS(Lattice 2011)219 pdf Lattice QCD simulation at finite chiral chemical potential PoS(Lattice 2011)220 pdf Universality of phase diagrams in QCD and QCD-like theories PoS(Lattice 2011)221 pdf Standard Model Parameters and Renormalization mc/ms with Brillouin improved Wilson fermions PoS(Lattice 2011)230 pdf RI/SMOM schemes for Delta S=1 and Delta S=2 operators PoS(Lattice 2011)231 Strange quark mass and Lambda parameter by the ALPHA collaboration PoS(Lattice 2011)232 pdf Renormalization constants of quark bilinears in lattice QCD with four dynamical Wilson quarks PoS(Lattice 2011)233 pdf Current-Current correlators in Twisted Mass Lattice QCD PoS(Lattice 2011)234 pdf Quark masses from Nf=2 Clover fermions - an update PoS(Lattice 2011)235 pdf Determination of Light Quark Masses PoS(Lattice 2011)236 Non-perturbative renormalization for general improved staggered bilinears PoS(Lattice 2011)237 pdf The static quark 
self-energy at large orders from NSPT PoS(Lattice 2011)222 pdf RI-MOM scheme renormalization constants (Nf=4) and the running coupling constant (Nf=2+1+1) using twisted-Wilson quarks PoS(Lattice 2011)223 pdf Kaon bag parameter B_K at the physical mass point PoS(Lattice 2011)224 pdf Light quark masses PoS(Lattice 2011)225 Lattice QCD at the physical point PoS(Lattice 2011)226 NPR of K\to\pi\pi operators with a step scaling matrix PoS(Lattice 2011)227 pdf Renormalization constants for Iwasaki action PoS(Lattice 2011)228 pdf Automated lattice perturbation theory applied to HQET PoS(Lattice 2011)229 pdf Theoretical Developments Backward running from Creutz ratios PoS(Lattice 2011)238 pdf Supersymmetry on the lattice: Exact results for supersymmetric quantum mechanics PoS(Lattice 2011)239 pdf Dressed Wilson loops as dual condensates in response to magnetic fields PoS(Lattice 2011)240 pdf Topology and chiral perturbation theory from the Wilson Dirac spectrum PoS(Lattice 2011)241 pdf Flavor symmetry breaking in mixed-action QCD PoS(Lattice 2011)242 pdf Recent progress of lattice and non-lattice super Yang-Mills PoS(Lattice 2011)243 pdf Testing the AdS/CFT correspondence by Monte Carlo calculation of BPS and non-BPS Wilson loops in N=4 super-Yang-Mills theory PoS(Lattice 2011)244 pdf A new lattice SUSY formulation for D=N=2 Wess-Zumino model with species doublers as supermultiplet PoS(Lattice 2011)245 Volume Effects in Discrete Beta Functions PoS(Lattice 2011)246 pdf Numerical study of large-N phase transition of smeared Wilson loops in 4D pure YM theory PoS(Lattice 2011)247 pdf Loop formulation of the O(N) Gross-Neveu model: results for the Thirring model PoS(Lattice 2011)248 Continuous smearing of Wilson Loops. 
PoS(Lattice 2011)249 pdf On the spectral density of the Wilson operator PoS(Lattice 2011)250 pdf Reflection Positivity of N=1 Wess-Zumino model on the lattice with exact U(1)_R symmetry PoS(Lattice 2011)251 pdf Phase transitions in center-stabilized lattice gauge theories PoS(Lattice 2011)252 pdf Supersymmetry on the lattice: the N=1 Wess-Zumino model PoS(Lattice 2011)253 pdf Confinement in multiparton sectors of SYM_2 with adjoint fermions PoS(Lattice 2011)254 pdf Comparison of improved perturbative methods PoS(Lattice 2011)255 pdf Vacuum Structure and Confinement Gluonic Profile of the static baryon at finite temperature PoS(Lattice 2011)256 pdf Impact of center vortex removal on chiral symmetry breaking in SU(3) gauge field theory PoS(Lattice 2011)257 pdf Topology of dynamical lattice configurations including results from overlap fermions PoS(Lattice 2011)258 pdf Chiral Quark Dynamics and the Ramond-Ramond U(1) Gauge Field PoS(Lattice 2011)259 pdf Chiral Properties of Strong Interactions in a Magnetic Background PoS(Lattice 2011)260 pdf Vacuum Manifold Projection: a technique for calculating the effective Hamiltonian for low-energy vacuum gauge fields, using Lattice calculations PoS(Lattice 2011)261 Dual Meissner effect and non-Abelian dual superconductivity in SU(3) Yang-Mills theory PoS(Lattice 2011)262 pdf Colour flux-tubes in static Pentaquark and Tetraquark systems PoS(Lattice 2011)263 pdf Fractional electric charge and quark confinement PoS(Lattice 2011)264 pdf Lattice Landau Gauges without Frontiers PoS(Lattice 2011)265 Phase diagram of the G(2) Higgs model PoS(Lattice 2011)266 pdf k-string tensions and the 1/N expansion PoS(Lattice 2011)267 pdf Absolute X-distribution and self-duality PoS(Lattice 2011)268 pdf Intersections of thick Center Vortices, Dirac Eigenmodes and Fractional Topological Charge in SU(2) Lattice Gauge Theory PoS(Lattice 2011)269 pdf Weak Decays M_b and f_B from non-perturbatively renormalized HQET with Nf=2 light quarks PoS(Lattice 
2011)280 pdf Extraction of |V_{us}| from the calculation of K->pi l nu form factors with N_f=2+1 flavors of staggered quarks PoS(Lattice 2011)281 pdf EM corrections to pseudoscalar decay constants PoS(Lattice 2011)282 pdf Disconnected contributions to D-meson semi-leptonic decay form factors PoS(Lattice 2011)283 pdf Kaon semileptonic form factors in QCD with exact chiral symmetry PoS(Lattice 2011)284 pdf Continuum Results for Light Hadronic Quantities using Domain Wall Fermions with the Iwasaki and DSDR Gauge Actions PoS(Lattice 2011)285 pdf The D to K and D to pi semileptonic decay form factors from Lattice QCD PoS(Lattice 2011)286 pdf Practical methods for a direct calculation of \Delta I=1/2 K to \pi\pi Decay PoS(Lattice 2011)287 pdf Heavy-light meson semileptonic decays and precision tests of the Standard Model PoS(Lattice 2011)288 Semileptonic B to D decays with 2+1 flavors PoS(Lattice 2011)289 pdf Lattice QCD calculation of isospin breaking effects due to the up-down mass difference PoS(Lattice 2011)290 pdf Studies of B and B_s Meson Leptonic Decays with NRQCD Bottom and HISQ Light/Strange Quarks PoS(Lattice 2011)291 pdf Pion and kaon decay constants and B_K from mixed-action lattice QCD PoS(Lattice 2011)293 pdf B-meson physics with dynamical domain-wall light quarks and nonperturbatively tuned relativisitc b-quarks PoS(Lattice 2011)294 Radiative decay of \eta_{c2} to \gamma J/\psi PoS(Lattice 2011)295 Covariance fitting of highly correlated B_K data PoS(Lattice 2011)296 pdf Long distance contribution to K_{L} K_{S} mass difference PoS(Lattice 2011)297 pdf Form factors for B to Kll semileptonic decay from three-flavor lattice QCD PoS(Lattice 2011)298 pdf D semileptonic form factors and |V_cs(d)| from 2+1 flavor lattice QCD PoS(Lattice 2011)270 pdf Probing TeV scale physics via ultra cold neutron decays and calculating non-standard baryon matrix elements PoS(Lattice 2011)271 pdf Theoretical Bounds on New Four-Fermion Interactions and TeV Scale Physics 
PoS(Lattice 2011)272 pdf Probing TeV Physics through Lattice Neutron-Decay Matrix Elements PoS(Lattice 2011)273 pdf Neutral B mixing from 2+1 flavor lattice QCD: the Standard Model and beyond PoS(Lattice 2011)274 pdf Kaon oscillations in the Standard Model and Beyond using Nf=2+1+1 dynamical sea quarks PoS(Lattice 2011)276 pdf Computing the long-distance contribution to the kaon mixing parameter epsilon_K PoS(Lattice 2011)277 pdf Axial vector form factors in Ds to phi semileptonic decays from lattice QCD. PoS(Lattice 2011)278 pdf Semileptonic form-factor ratio f_0(B\to D)/f_0(B_s\to D_s) and its application to BR(B_s\to\mu^+\mu^-) PoS(Lattice 2011)279 pdf Delta I=3/2 K to pi-pi decays with nearly physical kinematics PoS(Lattice 2011)335 pdf Fisher's zeros, complex RG flows and confinement in LGT models. PoS(Lattice 2011)299 pdf String tension at finite temperature Lattice QCD PoS(Lattice 2011)300 pdf Upper and lower Higgs mass bounds in the presence of a 4th generation PoS(Lattice 2011)301 pdf Efficiency on multi-core CPUs: the Wilson Dirac operator on Aurora PoS(Lattice 2011)302 pdf Critical properties of 2D Z(N) vector models for N>4 PoS(Lattice 2011)304 pdf A new usesr interface for the Gauge Connection lattice data archive PoS(Lattice 2011)305 pdf Flavor-singlet Z_A from Overlap Fermions on 2+1 flavor DWF configurations PoS(Lattice 2011)306 Lattice Planar QED in external magnetic field PoS(Lattice 2011)307 pdf Glueball masses from ratios of path integrals PoS(Lattice 2011)308 pdf Multi GPU Performance of Conjugate Gradient Solver with Staggered Fermions in Mixed Precision PoS(Lattice 2011)309 pdf Universal properties of 3d O(4) symmetric models: The scaling function of the free energy density and its derivatives PoS(Lattice 2011)310 pdf Partial spectrum of large hermitean matrices PoS(Lattice 2011)311 pdf Random Matrix Models for Dirac Operators at finite Lattice Spacing PoS(Lattice 2011)312 pdf SU(3) Analysis of B_K with improved staggered quarks PoS(Lattice 
2011)313 pdf The 't Hooft vertex for staggered fermions and flavor-singlet mesons PoS(Lattice 2011)314 pdf The static potential with dynamical fermions from Wilson loops PoS(Lattice 2011)315 pdf Discritization error and fitting in B_K PoS(Lattice 2011)316 pdf Spin Polarizabilties on the Lattice PoS(Lattice 2011)317 pdf Nuclear forces in the odd parity sector and the LS forces PoS(Lattice 2011)319 pdf B and D meson decay constants from 2+1 flavor improved staggered simulations PoS(Lattice 2011)320 pdf The strong coupling bulk transition of twelve flavors PoS(Lattice 2011)321 pdf On the Extraction of the Strong Coupling Constant from Hadronic Tau Decay PoS(Lattice 2011)322 pdf Flux tubes in the SU(3) vacuum PoS(Lattice 2011)323 pdf MCRG study of the SU(2) pure gauge model with mixed fundamental-adjoint action PoS(Lattice 2011)324 Lattice QCD with Qlua PoS(Lattice 2011)325 The center magnetic vortex and its influence on physical quantities in the gluon plasma PoS(Lattice 2011)326 pdf Geometric Numerical Integration Structure-Preserving Algorithms for QCD Simulations PoS(Lattice 2011)327 pdf Investigations of QCD at non-zero isospin density PoS(Lattice 2011)328 pdf Proton decay matrix elements in 2+1 domain-wall fermion PoS(Lattice 2011)329 pdf Challenges of hadronic weak decays of B-mesons on the lattice PoS(Lattice 2011)330 pdf Renormalization factor of four fermi operators with clover fermion and Iwasaki gauge action PoS(Lattice 2011)331 pdf Exploring infrared fixed point in SU(N) gauge theories PoS(Lattice 2011)332
Guttenberg, NJ Algebra 1 Tutor

Find a Guttenberg, NJ Algebra 1 Tutor

...So the subject is not only fresh in my memory but one of my favorite subjects! Being in the dental hygiene program at NYU, I have to be a certified nutritional counselor with all my patients since it's extremely important to oral health. I have taken an advanced nutrition class and received an A-, which included a full nutritional analysis project.
20 Subjects: including algebra 1, reading, chemistry, algebra 2

...I have experience teaching high school and university students in STEM and non-STEM fields and have helped people struggling at different levels. I believe everyone is capable of reaching a high level of proficiency in physics and math given enough dedication and the right mentoring to minimize ...
17 Subjects: including algebra 1, chemistry, Spanish, calculus

I am a highly motivated, passionate math teacher who has taught in high performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely comfortable teaching all types of math to all level learners. I am a results driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including algebra 1, geometry, accounting, algebra 2

...I have taught at the high school level and currently teach at the college level to undergraduates taking subjects like trigonometry and calculus. I have two graduate degrees. I have a graduate degree in physics and electrical engineering.
10 Subjects: including algebra 1, calculus, physics, geometry

...I was a part of this program for 2 years. Also, when I was in the 8th grade, I used to give homework help to some of my friends that were in the 7th grade. I specifically helped them with math and science subjects.
16 Subjects: including algebra 1, statistics, precalculus, elementary math
Positive Solutions for Some Beam Equation Boundary Value Problems

A new fixed point theorem in a cone is applied to obtain the existence of positive solutions of some fourth-order beam equation boundary value problems with dependence on the first-order derivative.

1. Introduction

It is well known that the beam is one of the basic structures in architecture, used widely in the design of bridges and buildings. Recently, engineers have put forward the theory of combined beams: layers of a stratified structure can be bound with rock bolts into one global combined beam. The deformations of an elastic beam in equilibrium, whose two ends are simply supported, are described by the following equation of the deflection curve:

u''''(t) = f(t, u(t)), 0 < t < 1; u(0) = u(1) = u''(0) = u''(1) = 0. (1.1)

According to the form of the supports, various boundary conditions should be considered. Solving the corresponding boundary value problem, one obtains an expression for the deflection curve, which is the key to choosing the design constants of beams and rock bolts. Owing to its importance in physics and engineering, the existence of solutions to this problem has been studied by many authors; see [1-10]. However, in practice, only positive solutions are significant. In [1, 9, 11, 12], Aftabizadeh, Del Pino and Manásevich, Gupta, and Pao showed the existence of positive solutions of (1.1) under some growth conditions on f. The lower and upper solution method has been studied for the fourth-order problem by several authors [2, 3, 7, 8, 13, 14]; all of them, however, consider only an equation of the form (1.1) with diverse kinds of boundary conditions. In [10], Ehme et al. gave sufficient conditions for the existence of a solution of such an equation with some quite general nonlinear boundary conditions, using the lower and upper solution method; the conditions assume the existence of a strong upper and lower solution pair.
Recently, Krasnosel'skii's fixed point theorem in a cone has found many applications in the study of the existence and multiplicity of positive solutions for differential equation boundary value problems; see [3, 6]. With this fixed point theorem, Bai and Wang [6] discussed the existence, uniqueness, multiplicity, and infinitely many positive solutions for equations of this form. In this paper, via a new fixed point theorem in a cone and the concavity of functions, we show the existence of positive solutions for the following problem:

u''''(t) = f(t, u(t), u'(t)), 0 < t < 1; u(0) = u(1) = u''(0) = u''(1) = 0. (1.7)

We point out that positive solutions of (1.7) are concave, and this concavity provides lower bounds on positive concave functions in terms of their maximum, which can be used in defining a cone on which a positive operator is defined, to which a new fixed point theorem in a cone due to Bai and Ge [5] can be applied to obtain positive solutions.

2. Fixed Point Theorem in a Cone

Lemma 2.1 (see [5]). [...] are two open subsets in [...]

3. Existence of Positive Solutions

In this section we are concerned with the existence of positive solutions for the fourth-order two-point boundary value problem (1.7). Suppose [...] and (2.1) hold. Denote by [...]. However, (1.7) has a solution [...]. It is well known that [...]

Theorem 3.1. Suppose there are four constants [...]. Then (1.7) has at least one positive solution.

Proof. Let [...] be two bounded open subsets in [...]. Combined with [...], Lemma 2.1 now implies there exists [...]; that is, [...]. The proof is complete.

Theorem 3.2. Suppose there are five constants [...]. Then (1.7) has at least one positive solution.

We need only note the following difference from the proof of Theorem 3.1: [...]. The rest of the proof is similar to that of Theorem 3.1, and the proof is complete.
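The concavity remark can be illustrated numerically. A standard inequality of this type states that a nonnegative concave function u on [0,1] satisfies min over [1/4, 3/4] of u(t) >= (1/4) max u; the sketch below (not from the paper, and the paper's cone may use different constants or a different subinterval) checks this bound on a few concave examples:

```python
# Numerical check of a standard concavity bound used in cone arguments:
# if u >= 0 is concave on [0,1], then min_{1/4 <= t <= 3/4} u(t) >= (1/4) * max u.

def check_bound(u, n=10001):
    ts = [i / (n - 1) for i in range(n)]
    mx = max(u(t) for t in ts)                       # sup norm of u on the grid
    inner = [u(t) for t in ts if 0.25 <= t <= 0.75]  # values on [1/4, 3/4]
    return min(inner) >= 0.25 * mx - 1e-12           # small tolerance for rounding

examples = [
    lambda t: t * (1 - t),            # parabola with u(0) = u(1) = 0
    lambda t: min(t, 1 - t),          # piecewise-linear "tent"
    lambda t: (t * (1 - t)) ** 0.5,   # concave, steeper near the endpoints
]
print(all(check_bound(u) for u in examples))  # True
```

Bounds of this kind are exactly what lets one define a cone of concave functions on which the fixed point theorem applies.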
CGTalk - getting rid of "e" in x,y,z coordinates

01-31-2004, 05:24 PM

getting rid of "e" in x,y,z coordinates

Wondering if someone could help me out with a Format problem. I'm printing out points to a file, but I don't want the exponential notation printed out. I've noticed this occurring with very small numbers in Max. For example, [-86.6734,6.41911e-007,16.7711], where the y component in the list has an exponent. I was hoping there was some easy command to get rid of this exponent and force the number to print out in decimal form, but I haven't found anything. I was wondering if someone could point me in the right direction. Am I missing something big here?

I'm currently using a simple loop to decide if the number is smaller than a certain amount (let's say .00001 for example); if the number is smaller than that, then print out "0" in place of the exponential number:

    if MyPoint.y < .00001 do MyPoint.y = 0
    print MyPoint.y

Surely, there must be an easier way than this. Or is everyone else using a loop to convert their exponential numbers? Anyone know of a way to completely convert without losing any precision in the exponential number, given different exponential values? Say, a function that takes [7.24311e-008, 6.41911e-007, 12] and converts to [0.0000000724311, 0.000000641911, 12]. Any help would be greatly appreciated. Thanks.
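The usual fix is a printf-style format string with a fixed-point specifier rather than a magnitude threshold; in MaxScript, `formattedPrint <value> format:".9f"` is one route (an assumption here, since availability depends on your Max version). The underlying idea, sketched in Python for illustration:

```python
# Printf-style fix: format each component with a fixed-point specifier
# instead of relying on the default string conversion, which switches to
# scientific notation for small magnitudes.

def fmt_point(x, y, z, places=9):
    # "%.9f" always prints plain decimal notation, never an exponent
    return "[%.*f,%.*f,%.*f]" % (places, x, places, y, places, z)

print(fmt_point(-86.6734, 6.41911e-07, 16.7711))
# -> [-86.673400000,0.000000642,16.771100000]
```

Note the trade-off: a fixed number of decimal places rounds very small values (here 6.41911e-7 becomes 0.000000642), so pick enough places to cover the smallest magnitudes you need to preserve.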
{"url":"http://forums.cgsociety.org/archive/index.php/t-119689.html","timestamp":"2014-04-20T23:37:23Z","content_type":null,"content_length":"13680","record_id":"<urn:uuid:37d0a157-aa84-4c9c-8177-b0ca5c32774d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00266-ip-10-147-4-33.ec2.internal.warc.gz"}