Woodhaven Algebra 2 Tutor

Find a Woodhaven Algebra 2 Tutor

...I am familiar with AP Calculus AB and BC. I received a score of "5" on both exams. My work in 16 AP classes has expanded my vocabulary tremendously; I received a 740 on the SAT Reading section, a section which tests your vocabulary skills.
43 Subjects: including algebra 2, English, calculus, reading

...I have a solid foundation in math, biology, chemistry, physics, and physiology with an emphasis in neurobiology. I love the process of teaching and learning, and would look forward to tutoring you if you are the student, or your child if you are the parent. I wanted to start out tutoring things more outside of my direct area of expertise, because I wanted the challenge of learning to...
11 Subjects: including algebra 2, chemistry, physics, biology

...It is interesting how none of the education methods is perfect and each has its loopholes. I often deal with students having problems understanding math. I give my best to the students, and it is working.
9 Subjects: including algebra 2, chemistry, calculus, algebra 1

...In my math and physics tutoring experience, I've helped students from high school through college levels and beyond! That includes the following subjects: algebra I & II, geometry, trigonometry, pre-calculus, and calculus (including AP, AB and BC). I've helped students at many different public a...
12 Subjects: including algebra 2, physics, MCAT, trigonometry

...My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one. My approach to mathematics tutoring is creative and...
9 Subjects: including algebra 2, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/Woodhaven_algebra_2_tutors.php","timestamp":"2014-04-21T11:06:15Z","content_type":null,"content_length":"23882","record_id":"<urn:uuid:7dca870b-8012-4267-b894-e7bdb163d747>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Isabelle/Isar—A generic framework for human-readable proof documents

Results 1 - 10 of 17

"Sledgehammer is a highly successful subsystem of Isabelle/HOL that calls automatic theorem provers to assist with interactive proof construction. It requires no user configuration: it can be invoked with a single mouse gesture at any point in a proof. It automatically finds relevant lemmas from all those currently available. An unusual aspect of its architecture is its use of unsound translations, coupled with its delivery of results as Isabelle/HOL proof scripts: its output cannot be trusted, but it does not need to be trusted. Sledgehammer works well with Isar structured proofs and allows beginners to prove challenging theorems." Cited by 19 (5 self)

"In this paper we will discuss the fundamental ideas behind proof assistants: What are they and what is a proof anyway? We give a short history of the main ideas, emphasizing the way they ensure the correctness of the mathematics formalized. We will also briefly discuss the places where proof assistants are used and how we envision their extended use in the future. While being an introduction into the world of proof assistants and the main issues behind them, this paper is also a position paper that pushes the further use of proof assistants. We believe that these systems will become the future of mathematics, where definitions, statements, computations and proofs are all available in a computerized form. An important application is and will be in computer-supported modelling and verification of systems. But there is still a long road ahead, and we will indicate what we believe is needed for the further proliferation of proof assistants." Cited by 3 (0 self)

(2011) "Isabelle/HOL is a popular interactive theorem prover based on higher-order logic. It owes its success to its ease of use and powerful automation. Much of the automation is performed by external tools: The metaprover Sledgehammer relies on resolution provers and SMT solvers for its proof search, the counterexample generator Quickcheck uses the ML compiler as a fast evaluator for ground formulas, and its rival Nitpick is based on the model finder Kodkod, which performs a reduction to SAT. Together with the Isar structured proof format and a new asynchronous user interface, these tools have radically transformed the Isabelle user experience. This paper provides an overview of the main automatic proof and disproof tools." Cited by 3 (1 self)

"This paper presents an algorithm that redirects proofs by contradiction. The input is a refutation graph, as produced by an automatic theorem prover (e.g., E, SPASS, Vampire, Z3); the output is a direct proof expressed in natural deduction extended with case analyses and nested subproofs. The algorithm is implemented in Isabelle's Sledgehammer, where it enhances the legibility of machine-generated proofs." Cited by 2 (2 self)

"Sledgehammer integrates external automatic theorem provers (ATPs) in the Isabelle/HOL proof assistant. To guard against bugs, ATP proofs must be reconstructed in Isabelle. Reconstructing complex proofs involves translating them to detailed Isabelle proof texts, using suitable proof methods to justify the inferences. This has been attempted before with little success, but we have addressed the main issues: Sledgehammer now transforms the proofs by contradiction into direct proofs (as described in a companion paper [3]); it reconstructs skolemization inferences correctly; it provides the right amount of type annotations to ensure formulas are parsed correctly without marring them with types; and it iteratively tests and compresses the output, resulting in simpler and faster working proofs." Cited by 1 (1 self)

"Abstract. Traditionally a rigorous mathematical document consists of a sequence of definition – statement – proof. Taking this basic outline as starting point we investigate how these three categories of text can be represented adequately in the formal language of Isabelle/Isar. Proofs represented in human-readable form have been the initial motivation of Isar language design 10 years ago. The principles developed here allow one to turn deductions of the Isabelle logical framework into a format that transcends the raw logical calculus, with a more direct description of reasoning using pseudo-natural language elements. Statements describe the main result of a theorem in an open format as a reasoning scheme, saying that in the context of certain parameters and assumptions certain conclusions can be derived. This idea of turning Isar context elements into rule statements has recently been refined to support the dual form of elimination rules as well. Definitions in their primitive form merely name existing elements of the logical environment, by stating a suitable equation or logical equivalence. Inductive definitions provide a convenient derived principle to describe a new predicate as the closure of given natural deduction rules. Again there is a direct connection to Isar principles: rules stemming from an inductive characterization are immediately available in structured reasoning. All three sub-categories benefit from replacing raw logical encodings by native Isar language elements. The overall formality in the presented mathematical text is reduced. Instead of manipulating auxiliary logical connectives and quantifiers, the mathematical concepts are emphasized."

"A good proof is a proof that makes us wiser. Manin [41, p. 209]. Abstract. Hilbert's concept of formal proof is an ideal of rigour for mathematics which has important applications in mathematical logic, but seems irrelevant for the practice of mathematics. The advent, in the last twenty years, of proof assistants was followed by an impressive record of deep mathematical theorems formally proved. Formal proof is practically achievable. With formal proof, correctness reaches a standard that no pen-and-paper proof can match, but an essential component of mathematics — the insight and understanding — seems to be in short supply. So, what makes a proof understandable? To answer this question we first suggest a list of symptoms of understanding. We then propose a vision of an environment in which users can write and check formal proofs as well as query them with reference to the symptoms of understanding. In this way, the environment reconciles the main features of proof: correctness and understanding."

"Abstract. Automated theorem provers (ATPs) struggle to solve problems with large sets of possibly superfluous axioms. Several algorithms have been developed to reduce the number of axioms, optimally selecting only the necessary axioms. However, most of these algorithms consider only single problems. In this paper, we describe an axiom selection method for series of related problems that is based on logical and textual proximity and tries to mimic a human way of understanding mathematical texts. We present first results that indicate that this approach is indeed useful. Key words: formal mathematics, automated theorem proving, axiom selection"

"Abstract. Sledgehammer integrates automatic theorem provers in the proof assistant Isabelle/HOL. A key component, the relevance filter, heuristically ranks the thousands of facts available and selects a subset, based on syntactic similarity to the current goal. We introduce MaSh, an alternative that learns from successful proofs. New challenges arose from our "zero-click" vision: MaSh should integrate seamlessly with the users' workflow, so that they benefit from machine learning without having to install software, set up servers, or guide the learning. The underlying machinery draws on recent research in the context of Mizar and HOL Light, with a number of enhancements. MaSh outperforms the old relevance filter on large formalizations, and a particularly strong filter is obtained by combining the two filters."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4100682","timestamp":"2014-04-20T21:41:05Z","content_type":null,"content_length":"36159","record_id":"<urn:uuid:b74f322e-5c1f-461c-98a4-ce40c6fc6589>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Useful Math Properties: Associative, Commutative & More (GradeAmathHelp.com)

Common Math Properties

The following math properties are formally introduced in algebra classes, but they are taught in many elementary schools. You probably don't even realize that you already know many of these properties. For example, the commutative property basically states you can add in any order: 6 + 5 is the same as 5 + 6. In this page you will learn the following properties:

Associative Property
Commutative Property
Distributive Property
Identity Property
Zero Property

Not what you are looking for? Visit our pages dedicated to the math property of equality or math clue words. You should also be sure to understand the order of operations before attempting to understand these math properties.

Each property is listed below. On the left side of the table we show the general form – using all letters. We know properties can be confusing when too many variables are used, so we also give an example with numbers on the right side of the table as well. Aim to learn the general form, but use the numeric form as your "training wheels."

The associative property indicates that the grouping of numbers does not matter. By "grouping" we simply mean where the parentheses are placed. Take a look:

│ Addition (+)       │ (a + b) + c = a + (b + c) │ (3 + 5) + 1 = 3 + (5 + 1). Try it! Both sides = 9  │
│ Multiplication (×) │ (a · b) · c = a · (b · c) │ (3 · 5) · 1 = 3 · (5 · 1). Try it! Both sides = 15 │

Notice how the order of the numbers did not change. In the examples with numbers, the order always goes 3, 5, 1. How can we remember the name of this math property? One possibility is to think of the word associate – which is another word for friends. You probably have different groups of friends and you hang out with them at different times. The associative property deals with changing groups (parentheses). You don't change the order, you just change the groups.

The commutative property (like we described at the top of the math properties page) deals with the order in which you add or multiply numbers.

│ Addition (+)       │ a + b = b + a │ 4 + 2 = 2 + 4. Try it! Both sides = 6 │
│ Multiplication (×) │ a · b = b · a │ 4 · 2 = 2 · 4. Try it! Both sides = 8 │

In the commutative property you do change the order of the numbers. In our example above, the 4 was first originally, and then it was switched to second. How can we remember this property? The word commute means to travel: "A half hour commute to work." When you see the word commutative, think of travel – or of moving the order of the numbers. Tip to remember: Commutative also sounds like com-move-ative.

The distributive property applies when you are multiplying a number (or variable) times a quantity. You can multiply the number by each of the values inside the quantity separately, and add them. Take a look at the distributive property below:

│ a · (b + c) = a · b + a · c │ 3 · (4 + 1) = 3 · 4 + 3 · 1 = 15 │

The word distribute means to give out. In the example, we are giving out the 3 to both the 4 and the 1. Because you are multiplying 3 times (4 + 1), that means you have three (4 + 1)'s. Instead of multiplying, you can add all 3 of them up. You might be thinking: I could just add up 4 + 1 to get 5, and then multiply 3 times 5 to get 15. That is certainly true. The distributive property will be most useful when one of the numbers inside the parentheses is a variable. Certain math properties are only useful in some situations.

The word identity means "who you are." You may have heard of identity theft. In math, we want a number to keep its same identity – in other words, stay as the same number. What number would you have to add to a number to keep it the same? What about multiplication?

│ Addition (+)       │ a + 0 = a │ 6 + 0 = 6 │
│ Multiplication (×) │ a · 1 = a │ 6 · 1 = 6 │

The identity element of addition is 0 because any number plus 0 is always equal to that number – and yes, you can switch the order! The identity element of multiplication is 1 because any number times 1 is always equal to that number – again you can use the commutative property!

Zero Product Property (of Multiplication)

This math property states that any number multiplied by zero will always be equal to zero. You probably already knew this one.

│ Multiplication (×) │ a · 0 = 0 │ 8 · 0 = 0 │

Return to other pre-algebra math problems or visit the GradeA homepage.
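All of these properties are easy to spot-check numerically. Here is a minimal sketch in Python, using the same example numbers as the tables above:

    # Spot-check the properties with the example numbers from the tables above.
    a, b, c = 3, 5, 1
    assert (a + b) + c == a + (b + c)    # associative (addition)
    assert (a * b) * c == a * (b * c)    # associative (multiplication)
    assert 4 + 2 == 2 + 4                # commutative (addition)
    assert 4 * 2 == 2 * 4                # commutative (multiplication)
    assert 3 * (4 + 1) == 3 * 4 + 3 * 1  # distributive
    assert 6 + 0 == 6 and 6 * 1 == 6     # identity elements
    assert 8 * 0 == 0                    # zero product
    print("all properties check out")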
{"url":"http://www.gradeamathhelp.com/math-properties.html","timestamp":"2014-04-21T07:03:50Z","content_type":null,"content_length":"31341","record_id":"<urn:uuid:7163c662-eb6d-4fda-9b83-fccea8c56396>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Rearranging a formula??

April 18th 2009, 02:43 AM #1
Apr 2009

Rearranging a formula??

How do I rearrange this formula so N is the answer? I know the answer is 11, but I'm struggling to rearrange the formula. Any advice?

D = (I-S)/N

(3995-145) / ? = 350

April 18th 2009, 02:50 AM #2

You have to use general arithmetic rules. If you are not able to work it out then it's best to do it step by step. Remember, if you do one arithmetic operation on one side, it has to be done on the other side.

$D = \frac{I-S}{N}$

Multiply both sides by $N$:

$DN = I - S$

Divide both sides by $D$:

$N = \frac{I-S}{D}$

From here, you are able to substitute your numbers: $N = \frac{3995-145}{350} = \frac{3850}{350} = 11$. It's best to rearrange your formula first before inserting the numbers, as if the numbers are decimals or fractions, it can make the rearrangement more complicated.
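A quick numeric check of the rearranged formula, as a sketch in Python (variable names follow the thread):

    # Verify N = (I - S) / D with the numbers from the original post.
    I, S, D = 3995, 145, 350
    N = (I - S) / D
    print(N)  # 11.0, matching the expected answer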
{"url":"http://mathhelpforum.com/algebra/84259-rearranging-fomula.html","timestamp":"2014-04-17T19:26:59Z","content_type":null,"content_length":"33515","record_id":"<urn:uuid:1fdfd7b9-1b65-4d6c-aaaf-b7b6f6a97a97>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Civil Engineering - Solved Problems

The following solved examples will be helpful in teaching and learning of civil engineering. More problems have recently been added.

Problem 9-3 ^New Determine the strength of a doubly reinforced concrete beam when the compression steel is not yielding.
Problem 9-2 ^New Calculate the nominal flexural strength of a doubly reinforced concrete beam.
Problem 9-1 ^New Compute the nominal moment strength of a singly reinforced concrete beam having a rectangular section.
Problem 8-2 ^New Solution of an indeterminate frame by the moment distribution method.
Problem 8-1 Solution of an indeterminate beam by the moment distribution method.
Problem 7-5 Deflection of a pin-jointed plane truss by the unit load method.
Problem 7-4 Solution of an indeterminate structure by slope-deflection equations.
Problem 7-3 Deflection of a beam by the unit load method.
Problem 7-2 Solution of an indeterminate structure - continuous beam.
Problem 7-1 Solution of an indeterminate structure - propped cantilever.
Problem 6-2 Calculate the slope and deflection at a point of a simply supported beam.
Problem 6-1 Calculate the slope and deflection at a point of a cantilever.
Problem 5-4 Shear force and bending moment calculations for a simply supported beam with overhang (having a point of contra-flexure).
Problem 5-3 Calculate and draw the shear force and bending moment diagrams for a simply supported beam with overhang.
Problem 5-2 Calculate and draw the shear force and bending moment diagrams for a simply supported beam subjected to a point load and udl.
Problem 5-1 Shear force and bending moment calculations for a cantilever subjected to a point load and udl.
Problem 4-2 Reactions of a beam with overhang on one side.
Problem 4-1 Reactions of a simply supported beam.
Problem 3-2 Calculation of truss member forces by the method of sections.
Problem 3-1 Calculation of truss member forces by the method of joints.

More problems to be added soon. Send your feedback.

Featured Links

Overhanging Beam Calculator ^New - Bending moment & shear force calculation for an overhanging beam with different loads.
Fixed Beam Calculator ^New - Fixed end moment, bending moment & reaction calculation for a fixed beam.
Moment of Inertia Calculator ^New - For different sections including I-section and T-section.
Deflection Calculator ^New - Easy-to-use calculator for different loads on beams.
RC Beam Calculator ^New - Calculate the strength of reinforced concrete beams.
Bending Moment Calculator ^New - Calculate bending moments for simply supported beams.
Moment Distribution Calculator ^New - Easy-to-use calculator for solving indeterminate beams with different loads.
CE QUIZ ^New - A collection of quizzes in different areas of civil engineering.
CE Horizon ^New - Online civil engineering journal and magazine.
Profile of Civil Engineers ^New - Get to know about distinguished civil engineers.
Strumech - Computer programs for applied mechanics.
{"url":"http://civilengineer.webinfolist.com/problemsolver.htm","timestamp":"2014-04-17T18:38:13Z","content_type":null,"content_length":"36108","record_id":"<urn:uuid:51ed3f77-2da4-414d-b028-29c4da0ca9ce>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic K-theory of the group ring of the fundamental group

[16 votes]

I know of two places where $K_{*}(\mathbb{Z}\pi_{1}(X))$ (the algebraic $K$-theory of the group ring of the fundamental group) makes an appearance in algebraic topology.

The first is the Wall finiteness obstruction. We say that a space $X$ is finitely dominated if $id_{X}$ is homotopic to a map $X \rightarrow X$ which factors through a finite CW complex $K$. The Wall finiteness obstruction of a finitely dominated space $X$ is an element of $\tilde{K_{0}}(\mathbb{Z}\pi_{1}(X))$ which vanishes iff $X$ is actually homotopy equivalent to a finite CW complex.

The second is the Whitehead torsion $\tau(W,M)$, which lives in a quotient of $K_{1}(\mathbb{Z}\pi_{1}(W))$. According to the s-cobordism theorem, if $(W; M, M')$ is a cobordism with $H_{*}(W, M) = 0$, then $W$ is diffeomorphic to $M \times [0, 1]$ if and only if the Whitehead torsion $\tau(W, M)$ vanishes.

For more details, see the following:

http://arxiv.org/abs/math/0008070 (A survey of Wall's finiteness obstruction)
http://www.maths.ed.ac.uk/~aar/books/surgery.pdf (Algebraic and Geometric Surgery. See Ch. 8 on Whitehead Torsion)

My question is twofold. First, is there a high-concept defense of $K_{*}(\mathbb{Z}\pi_{1}(X))$ as a reasonable place for obstructions to topological problems to appear? I realize that $\mathbb{Z}\pi_{1}(X)$ appears because the (cellular, if $X$ is a cell complex) chain groups of the universal cover $\tilde{X}$ are modules over $\mathbb{Z}\pi_{1}(X)$. Is it the case that when working with chain complexes of $R$-modules, we expect obstructions to appear in $K_{*}(R)$?

Second, is there an enlightening explanation of the formal similarity between these two obstructions? (Both appear from considering the cellular chain complex of a universal cover and taking an alternating sum.)

Tags: kt.k-theory-homology, at.algebraic-topology

Comments:
Interesting, +1. Related question: Are there interesting obstructions in even higher $K$-groups, say $K_2(\mathbb{Z}\pi)$? – Martin Brandenburg Dec 6 '11 at 7:31
Martin, the answer is yes. These have to do with the homotopy-type of concordance / pseudo-isotopy spaces. Hatcher, Wagoner, Igusa and Waldhausen are relevant authors. – Ryan Budney Dec 6 '11
Another kind of obstruction appears in the Rothenberg sequence: the surgery-relevant L-theory of the group ring is only "well-behaved" if the lower K-groups of the group ring vanish. – Fabian Lenhardt Dec 6 '11 at 11:20

Answer 1 [14 votes]:

To add to Tim Porter's excellent answer: The story of what we now call $K_1$ of rings begins with Whitehead's work on simple homotopy equivalence, which uses what we now call the Whitehead group, a quotient of $K_1$ of the group ring of the fundamental group of a space. On the other hand, the story of $K_0$ of rings probably begins with Grothendieck's work on generalized Riemann-Roch. What he did with algebraic vector bundles proved to be a very useful thing to do with other kinds of vector bundles, and with finitely generated projective modules over a ring, and with some other kinds of modules. I don't know who it was who recognized that these two constructions deserved to be named $K_0$ and $K_1$, and viewed as two parts of something larger to be called algebraic $K$-theory. But Milnor gave the right definition of $K_2$, and Quillen and others gave various equivalent right definitions of $K_n$.

Let me try to lay out the parallels between the topological significances of Whitehead's quotient of $K_1(\mathbb{Z}G)$ and Wall's quotient of $K_0(\mathbb{Z}G)$. My main point is that both of them have their uses in both the theory of cell complexes and the theory of manifolds.

The Whitehead group of $G$ is a quotient of $K_1(\mathbb{Z}G)$. Its significance for cell complexes is that it detects what you might call non-obvious homotopy equivalences between finite cell complexes. An obvious way to exhibit a homotopy equivalence between finite complexes $K$ and $L$ is by attaching a disk $D^n$ to $K$ along one half of its boundary sphere to obtain $L$. Roughly, a homotopy equivalence between finite complexes is called simple if it is homotopic to one that can be created by a finite sequence of such operations. The big theorem is that a homotopy equivalence $h:K\to L$ between finite complexes determines an element (the torsion) of the Whitehead group of $\pi_1(K)$, which is $0$ if and only if $h$ is simple, and that for any $K$ and any element of its Whitehead group there is an $(L,h:K\to L)$, unique up to simple homotopy equivalence, leading to this element in this way, and that this invariant of $h$ has various formal properties that make it convenient to compute.

One reason why you might care about the notion of simple homotopy equivalence is that for simplicial complexes it is invariant under subdivision, so that one can in fact ask whether $h$ is simple even if $K$ and $L$ are merely piecewise linear spaces, with no preferred triangulations. This means that, for example, a homotopy equivalence between compact PL manifolds (or smooth manifolds) cannot be homotopic to a PL (or smooth) homeomorphism if its torsion is nontrivial. (Later, the topological invariance of Whitehead torsion allowed one to eliminate the "PL" and "smooth" in all of that, extending these tools to, for example, topological manifolds without using triangulations.) But the $h$-cobordism theorem says more: it applies Whitehead's invariant to manifolds in a different and deeper way.

Meanwhile on the $K_0$ side, Wall introduced his invariant to detect whether there could be a finite complex in a given homotopy type. Note that where $K_0$ is concerned with the existence of a finite representative for a homotopy type, $K_1$ is concerned with the (non-)uniqueness of the same. Siebenmann in his thesis applied Wall's invariant to a manifold question in a way that corresponds very closely to the $h$-cobordism story: The question was, basically, when can a given noncompact manifold be the interior of a compact manifold-with-boundary? Note that there is a uniqueness question to go with this existence question: If two compact manifolds $M$ and $M'$ have isomorphic interiors then this leads to an $h$-cobordism between their boundaries, which will be a product cobordism if $M$ and $M'$ are really the same.

One can go on: The question of whether a given $h$-cobordism admits a product structure raises the related question of uniqueness of such a structure, which is really the question of whether a diffeomorphism from $M\times I$ to itself is isotopic to one of the form $f\times 1_I$. This is the beginning of pseudoisotopy theory, and yes $K_2$ comes into it. But from here on, the higher Quillen $K$-theory of the group ring $\mathbb{Z}\pi_1(M)$ is not the best tool. Instead you need the Waldhausen $K$-theory of the space $M$, in which basically $\mathbb{Z}$ gets replaced by the sphere spectrum and $\pi_1(M)$ gets replaced by the loopspace $\Omega M$. It's a long story!

Comments:
Presumably it was Milnor that noticed the connection, from your 4th paragraph. – Ryan Budney Dec 6 '11 at 21:15
Others, I think, had seen that $K_0$ and $K_1$ were part of the same story and named them accordingly. And maybe Bass had already invented $K_{-n}$? – Tom Goodwillie Dec 6 '11 at 22:03
Very enlightening. I'm going to go ahead and accept this answer. – Sam Nolen Dec 8 '11 at 6:13

Answer 2 [12 votes]:

First a slight quibble: the Whitehead group is the origin of algebraic K-theory, and predates the general stuff by quite a time, so your wording is not quite fair to Whitehead!

On your first question, one direction to look is at Waldhausen's K-theory. (The original paper is worth studying, but you should also look at the more recent stuff relating to the connections between that and model categories.) Waldhausen has a section on the links between his groups and the Whitehead simple homotopy theory. (I can provide some references if you need them. There are some useful comments in the nLab if you search on 'Waldhausen'.)

For the second question, it is probably a good idea to look at some books on simple homotopy theory and to take a slightly historical perspective (also to glance at the original articles, not just surveys). Whitehead's and then Milnor's papers released a set of tools for studying finite CW-complexes. The role of the chains on the universal cover can be viewed in various ways, but both constructions are part of the general idea at the time of making homotopy theory more 'constructive', and the taming of the fundamental group action in non-Abelian cases was a first step. The origins of simple homotopy theory are pre-World War II with Reidemeister, and note that Whitehead's two-part paper in 1949 was called 'Combinatorial Homotopy Theory' and was intended to mirror 'Combinatorial Group Theory'.

This does not give the enlightenment on the 'reasons' for the similarity, but it may help you to gain some knowledge of the origins of that stuff, and sometimes that helps to see what side alleys have been left unexplored and to provide an overview of the area. I have a sneaky idea that there is a K-theory for homotopy 2-types (and beyond) and that the Waldhausen construction is a way into that, but I may be wrong on that.

On a completely different tack, have a look at Kapranov and Saito's article: Hidden Stasheff polytopes in algebraic K-theory and in the space of Morse functions, in Higher Homotopy Structure in Topology and Mathematical Physics (Poughkeepsie, N.Y. 1996), volume 227 of Contemporary Mathematics, 191–225. This links up the Steinberg group (which has a neat motivation from linear algebra) with a lot of homotopical and homological algebra. I will not say more on this as this is already getting a bit long, but do ask if you find these references useful but need further pointers.

Comments:
Cohen's A Course in Simple-Homotopy Theory is good, but out of print, I think. Also, look at some of Tom Chapman's (a former professor of mine) work from the 70's. – jd.r Dec 6 '11
Hi there, Josh. I had been thinking of mentioning Joel Cohen's book, which is a gem, and I knew of Tom Chapman's work as I used to work with Shape Theory, so that was a good source to mention. I think that some of the ideas from simple homotopy are very relevant to problems in rewriting theory and hence theoretical computer science, also to the current ideas on homotopy type theory perhaps. I also wonder a bit what it is that algebraic K-theory is actually telling one about various contexts, hence my interest in this question. – Tim Porter Dec 6 '11 at 20:58
Tim, you might be permuting your Cohens around. The Cohen that wrote "A course in simple homotopy theory" is Marshall Cohen. Although it's out of print, used copies can be found on standard used book websites. – Ryan Budney Dec 6 '11 at 21:14
Thanks, Ryan. I am always doing that! They both wrote some nice stuff back when I was learning some low dimensional topology and combinatorial group theory. The correct reference is: Marshall M. Cohen, A Course in Simple-Homotopy Theory, Graduate Texts in Mathematics 10, Springer Verlag, 1973. – Tim Porter Dec 7 '11 at 6:52
{"url":"http://mathoverflow.net/questions/82770/algebraic-k-theory-of-the-group-ring-of-the-fundamental-group/82773","timestamp":"2014-04-19T22:29:22Z","content_type":null,"content_length":"77176","record_id":"<urn:uuid:eec66598-41a7-4d71-9d15-76b514f4e8ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
What is being braided in SL(2,Z)?

[3 votes]

The braid group on 3 strands is a central extension of the modular group. By definition,
\[ B_3 = \langle \sigma_1, \sigma_2 : \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2 \rangle \]
This group has a central element (commuting with both $\sigma_1$ and $\sigma_2$):
\[ \sigma_1\sigma_2\sigma_1\sigma_2\sigma_1\sigma_2 \]
The cosets get mapped to elements of PSL(2,Z) (which can act on the hyperbolic plane):
\[ [\sigma_1] = \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1\end{array}\right] \text{ and } [\sigma_2] = \left[ \begin{array}{cc} 1 & 0 \\ -1 & 1\end{array}\right] \]
I wonder, in terms of the hyperbolic plane, what is being braided here (modulo the Garside elements).

Tags: gt.geometric-topology

Comments:
If something were being braided, I'd think the relevant group would have a map to $B_3$, rather than from $B_3$. – S. Carnahan♦ Nov 1 '11 at 18:28
Related: mathoverflow.net/questions/20281/… – Qiaochu Yuan Nov 1 '11 at 21:03

Answer [8 votes]:

Without thinking about this too carefully: I think what's getting braided are three of the Weierstrass points of an elliptic curve. More precisely: consider the space of distinct 3-tuples of points p, q, r on A^1. On the one hand, you can braid these points around. On the other hand, every path in this space (i.e. every braid) gives a family of elliptic curves
\[ y^2 = (x-p)(x-q)(x-r) \]
and you can ask what the braid does to the homology of the elliptic curve; that's an element of SL_2(Z).

Comments:
Don't you need some other identification between the homology of two elliptic curves in a family to get an element of $\text{SL}_2(\mathbb{Z})$? Otherwise you just get a homomorphism between two groups which are abstractly isomorphic to $\mathbb{Z}^2$. – Qiaochu Yuan Nov 1 '11 at 21:06
@Qiaochu: That's what you get from the Gauss-Manin connection. – Dan Petersen Nov 1 '11 at 21:24
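As a sanity check on the matrices above, here is a minimal sketch in Python (using numpy) verifying the braid relation and the image of the central element:

    import numpy as np

    # Images of the braid generators in SL(2,Z), as given in the question.
    s1 = np.array([[1, 1], [0, 1]])
    s2 = np.array([[1, 0], [-1, 1]])

    # The braid relation s1 s2 s1 = s2 s1 s2 holds for the images.
    assert np.array_equal(s1 @ s2 @ s1, s2 @ s1 @ s2)

    # The central element (s1 s2)^3 maps to -I, which is trivial in PSL(2,Z),
    # consistent with B_3 being a central extension of the modular group.
    central = np.linalg.matrix_power(s1 @ s2, 3)
    assert np.array_equal(central, -np.eye(2, dtype=int))
    print("braid relation and central element verified")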
{"url":"http://mathoverflow.net/questions/79734/what-is-being-braided-in-sl2-z?sort=votes","timestamp":"2014-04-18T03:04:09Z","content_type":null,"content_length":"54422","record_id":"<urn:uuid:4929cb42-c9d5-499a-9311-a3a59a3626e6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
American Mathematical Society

Why LaTeX?

Why Do We Recommend LaTeX?

Authors who intend to publish a book or an article with the AMS are strongly encouraged to use LaTeX as described in the book Guide to LaTeX, fourth edition, by Helmut Kopka and Patrick W. Daly. It is important to use the latest edition: there are important and significant differences between LaTeX 2e (the current version) and "Old" LaTeX (version 2.09; see Appendix 1). Using LaTeX 2e is valuable both for the AMS and for authors themselves.

Benefits for the AMS

LaTeX 2e defines "structured" files in which the various elements (title, authors, headings, etc.) are easily identified. This is crucial for the future, when we may need to migrate tens of thousands of articles into new formats. AMS journals are already posted on line, with full bibliographic data in HTML. This is the primary reason we use LaTeX 2e as the common format for all files.

LaTeX 2e also has better support than Old LaTeX, AMS-TeX, or most other TeX formats for the use of graphics in documents, making it easier to incorporate graphics consistently.

The AMS production system is fine tuned to take advantage of the LaTeX 2e structure, and thus it can process these manuscripts more efficiently, as well as faster and more reliably.

Benefits for authors

• LaTeX 2e has a user-friendly interface and good documentation.
• LaTeX 2e is the TeX document format most commonly used among mathematicians at the present time; thus it is very likely that you will be able to collaborate easily with co-authors.
• LaTeX 2e files contain markup language that enables them to be converted to other outputs more readily (e.g., PDF), allowing you to share the prepublication version more easily.
• LaTeX 2e shields you from a number of burdensome complications concerning fonts by defining these in uniform and consistent terms.
• LaTeX 2e has a coherent package system that makes it relatively easy for users to write extension packages (providing features that aren't included in the core); a very large number of such extension packages are already available. This means that you will often be able to achieve special effects by using one of the packages already in existence, instead of having to do it yourself.

We encourage you to use LaTeX 2e and thank you for doing so.

Appendix 1: How To Tell Whether a LaTeX Document is Old LaTeX or LaTeX 2e?

Old LaTeX:
• The document begins with \documentstyle.
• Extra packages are invoked via the option list of the \documentstyle command.
• Font changes have the form {\bf ...}, {\it ...}, etc.

LaTeX 2e:
• The document begins with \documentclass.
• Extra packages are loaded with the \usepackage command.
• Font changes have the form \textbf{...}, \textit{...}, etc. (See Appendix 2.)

Features found only in LaTeX 2e:
• \providecommand
• \emph{...}
• \includegraphics{...}
• \begin{lrbox}{...}
• Options that are specific to a particular package
• \frontmatter and \backmatter commands

Appendix 2: Longer Names for Font Commands?

One thing that strikes the eye when comparing Old LaTeX and LaTeX 2e is that the names of the commonly used font commands are longer: \textbf{...} versus {\bf ...} and so on. This scarcely seems like an improvement, if one is concerned only about the amount of typing needed.
However, in a well-written LaTeX document, the body of the document will not contain any of the explicit font-changing commands that start with \text...:

\textbf \textsl \textit \texttt \textup \textsc \textsf \textrm

Rather, all such font changes will be handled either by commands for various objects (section, theorem, etc.) provided by the document class or via suitable definitions in the preamble of the document.

And in an extraordinarily well-written LaTeX document, the body of the document will also not contain any of the explicit math font commands that start with \math...:

\mathbf \mathit \mathcal \mathsf \mathbb \mathrm

All such font changes will be handled via suitable definitions in the preamble of the document. This subject is discussed in more detail in the User's Guide for the amsmath package, section 9.

The Comprehensive TeX Archive Network (CTAN)
CTAN page for those getting started with LaTeX
The TeX Users Group

Colophon: About TeX and LaTeX

LaTeX is based on another piece of software called TeX, written in the late 1970s to early 1980s by Donald E. Knuth, a well-known computer scientist and mathematician at Stanford University. Knuth created TeX initially so he could typeset the second edition of his books on computer programming.

About TeX

Before you can use LaTeX, you need to have a working copy of TeX. However, you will normally find when you obtain a copy of TeX that LaTeX comes with it (at least the software, if not the book). Rather than attempting to be all things in a single package, TeX is designed with modularity in mind. Thus TeX itself provides only fundamental typesetting capabilities and does not incorporate editing, printing, or previewing capabilities; instead, the result of running TeX is a graphics file in a format called "DVI" that is designed to make it as easy as possible for other programs to print or preview DVI files.

The fundamental typesetting capabilities of TeX operate on a very low level. They address the tasks of stringing characters together in words and paragraphs, automatically finding good page breaks, dealing properly with footnotes and other floating objects (such as figures and tables), and positioning symbols properly in math formulas. But it is more natural for authors to work on a somewhat higher level: It is better to write \section{...} than to laboriously specify every aspect of the section title: font size, bold or italic, space above, indented or centered, etc. Therefore TeX is designed to work with auxiliary packages called "TeX formats" that add higher-level features at the author level. LaTeX is a TeX format. Some other well-known ones are Plain TeX, AMS-TeX, eplain, texinfo, and ConTeXt.

Plain TeX

Plain TeX is the generic example format that Knuth wrote to be distributed with TeX. It is not really suitable for serious publishing use---for example, it only supports one font size---but it was quickly incorporated as a base element into other TeX formats such as AMS-TeX and LaTeX.

When TeX came to the attention of the American Mathematical Society in the early 1980s, the high quality of its mathematical typesetting capabilities was striking. The AMS became one of the staunchest early proponents of TeX and by 1985 was already beginning to use it in AMS books and journals. In the initial experimentation phase it quickly became apparent that Plain TeX was not comprehensive enough for AMS use.
The AMS therefore worked with a mathematician named Michael Spivak to develop a TeX format, "AMS-TeX", that would be better able to handle the kind of material typically found in AMS publications. When AMS-TeX became available in 1984 or so, the Society began to promote its use among AMS authors as a method of writing that promised to relieve authors finally of the chore of proofreading their articles after their manuscript was retyped by the publisher's compositors.

LaTeX is a TeX format written by a computer scientist named Leslie Lamport. LaTeX is based on the principle that authors should concentrate on logical design rather than visual design when writing their documents. Thus instead of writing

    \Large \bf 2. Section Title

LaTeX provides infrastructure that makes it possible for authors to write

    \section{Section Title}

and have not only the visual appearance but even the numbering be done automatically. LaTeX also has certain features designed to be used in conjunction with auxiliary programs such as makeindex and BibTeX that help automate the tasks of making indexes and bibliographies.
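To make the contrast concrete, here is a minimal LaTeX 2e document sketch illustrating the structured markup described above (the class, package, and command names here are illustrative choices, not AMS requirements):

    \documentclass{article}       % LaTeX 2e: \documentclass, not \documentstyle
    \usepackage{amsmath}          % extra packages loaded with \usepackage
    \newcommand{\keyterm}[1]{\textbf{#1}} % semantic command defined in the preamble

    \begin{document}
    \section{Section Title}       % numbering and appearance handled automatically
    A \keyterm{structured} document marks up meaning, not fonts, as in
    \begin{equation}
      e^{i\pi} + 1 = 0.
    \end{equation}
    \end{document}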
{"url":"http://ams.org/publications/authors/tex/latexbenefits","timestamp":"2014-04-21T02:47:56Z","content_type":null,"content_length":"48162","record_id":"<urn:uuid:5e0147ff-b214-4a95-bc2e-8a82f6ea124f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH
Publications of (and about) Paul Erdös

Zbl.No: 591.05030
Autor: Erdös, Paul; Füredi, Z.; Hajnal, András; Komjáth, P.; Rödl, Vojtech; Seress, Á.
Title: Coloring graphs with locally few colors. (In English)
Source: Discrete Math. 59, 21-34 (1986).
Review: Authors' abstract: "Let G be a graph, m > r \geq 1 integers. Suppose that it has a good coloring with m colors which uses at most r colors in the neighborhood of every vertex. We investigate these so-called local r-colorings. One of our results (Theorem 2.4) states: The chromatic number of G, Chr(G) \leq r 2^r \log_2 \log_2 m, and this value is the best possible in a certain sense. We consider infinite graphs as well."
Reviewer: I. Tomescu
Classif.: * 05C15 Chromatic theory of graphs and maps
Keywords: strong limit cardinal; intersecting Sperner family; local r-colorings; chromatic number; infinite graphs
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
{"url":"http://www.emis.de/classics/Erdos/cit/59105030.htm","timestamp":"2014-04-16T18:59:28Z","content_type":null,"content_length":"3268","record_id":"<urn:uuid:b9dd793a-8dbe-491b-9281-848b37b5baaf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Prolog program to generate a Fibonacci series of N elements.

domains
    x = integer
predicates
    fibonacci(x)
    fib(x, x, x)
clauses
    % fibonacci(N) prints the first N Fibonacci numbers.
    fibonacci(N) :- fib(N, 0, 1).
    % fib(Count, A, B): A is the current term, B the next; stop when Count = 0.
    fib(0, _, _) :- !.
    fib(Count, A, B) :-
        Count > 0,
        write(A, " "),
        C = A + B,
        Count1 = Count - 1,
        fib(Count1, B, C).

% Example goal: fibonacci(10) prints 0 1 1 2 3 5 8 13 21 34

Milind Mishra, the author of this program, is from India.
{"url":"http://www.dailyfreecode.com/code/prolog-generate-fibonacci-series-n-3119.aspx","timestamp":"2014-04-19T04:19:18Z","content_type":null,"content_length":"34312","record_id":"<urn:uuid:da3cc68a-8c69-4eb2-87b9-4751b4b32e12>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Using The Goal Seek Function In Excel

In some cases, I'm looking to find the exact value that would get me to a specific result. Here is one sample. Look at this sample spreadsheet. Suppose that my company expects to have sales of $1,000,000 this year. I'm trying to set targets for myself in order to reach $3,000,000 by 2021, 10 years from now. I will simply set up the spreadsheet with a bogus growth rate as you can see here:

The formula that I use computes each year's sales as the previous year's sales multiplied by (1 + growth rate). For example, with the growth rate in cell B1 and this year's sales in B2, the next year's cell would be =B2*(1+$B$1), which I dragged down the column (the exact cell references depend on your layout).

As you can see, with a growth rate of 1% annually, I would have sales of $1,093,685 in 2021, which is way under my target. How much growth do I need to reach my target? I could simply try changing the growth value until I get close to $3M, but there exists a much faster method in Excel. I will use the Goal Seek function that you can see in the Data menu:

You can see the result here:

So 12.98% is the annual growth rate required to reach my objective. As you can imagine, this function can be very useful even though it has some limitations.
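Goal Seek is essentially a one-variable root finder. A minimal sketch in Python (bisection, with this post's numbers; 2021 is 9 compounding steps from the $1,000,000 starting year, which matches the $1,093,685 figure above) shows what it is doing under the hood:

    # Find the growth rate r such that 1,000,000 * (1 + r)**9 = 3,000,000,
    # mimicking what Excel's Goal Seek does numerically.
    def sales_2021(r):
        return 1_000_000 * (1 + r) ** 9

    target = 3_000_000
    lo, hi = 0.0, 1.0            # bracket the growth rate between 0% and 100%
    for _ in range(60):          # bisect until the bracket is tiny
        mid = (lo + hi) / 2
        if sales_2021(mid) < target:
            lo = mid
        else:
            hi = mid
    print(f"{(lo + hi) / 2:.4%}")  # ~12.9830%, matching Goal Seek's 12.98%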
{"url":"http://www.experiglot.com/2012/12/05/using-the-goal-seek-function-in-excel/","timestamp":"2014-04-21T09:35:24Z","content_type":null,"content_length":"27110","record_id":"<urn:uuid:bffde3c6-ee33-4d6a-9776-3868c2a67bd6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
2012 Mathematics Contest Winners All students of Calculus I and II (MAT 126-127) were invited to participate in this year’s Mathematics Contest. The test consisted of three challenging calculus problems, to be solved over one weekend, and turned in on November 14th, 2012. The results are in, and the following cash awards were given: Calculus I (26 participants) First Prize winner, $150: Aleksander Cole Second Prize winners, $75 each: Channosphea But, Riley Mattor, John Mucrose Third Prize winners, $40 each: Jenn Seneres, Samuel Wallace Calculus II (18 participants) First Prize: No winner Second Prize winners, $100 each: Hue Weon Hwang, Maso Urban Third Prize winners, $50 each: Mitche Beroit, Yi Peg. Congratulations to the winners!
{"url":"http://umaine.edu/mathematics/2012/12/05/2012-mathematics-contest-winners/?tpl=textonly","timestamp":"2014-04-17T18:53:30Z","content_type":null,"content_length":"11145","record_id":"<urn:uuid:5acc0e97-679b-4863-8112-8efb330d3575>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Desoto Math Tutor

Find a Desoto Math Tutor

...Thanks to WyzAnt, I have been able to help kids around me in this area; I hope I can reach you too. I have always worked with kids, and I have been enjoying working with children. So, please feel free to let me know if I can help you.
13 Subjects: including calculus, elementary (k-6th), vocabulary, autism

...I have a bachelor's and master's degree in mathematics. I took geometry in high school, as well as at the college level while studying to get my bachelor's degree.
10 Subjects: including calculus, statistics, probability, algebra 1

I am a current medical student in the DFW area. I have had a lot of experience with tutoring in both high school and at a college level. I have also had experience teaching science and math in high schools through Teach for America. I am very enthusiastic and patient!
13 Subjects: including prealgebra, algebra 1, biology, geometry

...I am available for tutoring during the day in the summer and some evenings during the school year. Weekends may be flexible depending on advance notice. My hourly rate depends on the age of the student, subject, and traveling distance.
22 Subjects: including algebra 1, prealgebra, English, reading

...I am willing to travel nearby, and meet in your home or at a public library. For younger students, I have some basic Montessori teaching material, and I am certified by the American Montessori Society for teaching ages 3-12. I am available on Saturdays and most Sunday afternoons.
9 Subjects: including algebra 1, prealgebra, geometry, reading
{"url":"http://www.purplemath.com/desoto_math_tutors.php","timestamp":"2014-04-18T00:56:14Z","content_type":null,"content_length":"23367","record_id":"<urn:uuid:2c0440a0-bd0e-459a-b58c-aa23a53a8b6a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
James' Empty Blog

As I was promised recently, our (non-)uniform prior paper has officially appeared in dead tree format in the January issue of Climatic Change. Abstract below, and you can find it on my web site. It was waiting for a commentary which doesn't seem to have happened. I'm not sure how I should feel about that: on the one hand, it's a relief that someone doesn't get a free shot at criticising it (not that I know what they were going to say); on the other, perhaps this is the one thing worse than being talked about.

I'll be interested to see if the appearance of the paper version will attract more interest (citations), given that it was officially published on-line back in 2009 anyway. It seems that many (most? all?) people have accepted the argument, without explicitly referring to it. Eg Sokolov et al in their 2009 paper only used an expert prior, referring to their previous use of a uniform prior as merely a sensitivity analysis. However, in that previous work, it was actually the uniform prior case that was presented as the main result - and which was featured in the AR4 by the IPCC, who ignored my explicit request that the expert prior results should at least be mentioned therein. I realise our paper was not published when Sokolov et al was written, so I'm not criticising them for not citing us. It will be interesting to see how the IPCC authors manage this particular conundrum this time around.

The equilibrium climate response to anthropogenic forcing has long been one of the dominant, and therefore most intensively studied, uncertainties in predicting future climate change. As a result, many probabilistic estimates of the climate sensitivity (S) have been presented. In recent years, most of them have assigned significant probability to extremely high sensitivity, such as P(S > 6C) > 5%. In this paper, we investigate some of the assumptions underlying these estimates. We show that the popular choice of a uniform prior has unacceptable properties and cannot be reasonably considered to generate meaningful and usable results. When instead reasonable assumptions are made, much greater confidence in a moderate value for S is easily justified, with an upper 95% probability limit for S easily shown to lie close to 4°C, and certainly well below 6°C. These results also impact strongly on projected economic losses due to climate change.

26 comments:

Trial in a Vacuum: Study of Studies Shows Few Citations
"No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them."
"As cynical as I am about such things, I didn't realize the situation was this bad," Dr. Goodman said.
It seems, Dr. Goodman said in an e-mail, that "either everyone thinks their study is really unique (when others thought it wasn't), or they want to unjustifiably claim originality, or they just don't know how or want to look."

Thanks, that's interesting. I've come across Goodman before...

...and on a more positive note, the first email request for a reprint has just arrived!

I get completely lost with the details, but the conclusion is highly relevant. Hope to see this in the AR5 :)

You say: "with an upper 95% probability limit for S easily shown to lie close to 4°C". What would the lower 95% probability limit for S be then? Around 2ºC?
If I'm reading it right, the authors' preferred analysis gives this result: "The resulting 5-95% posterior probability interval is 1.2-3.6 C"

Nice section heading: "Ignorant priors" vice the more usual "noninformative". I agree with your criticism that uniform priors over some interval actually contain information, but that doesn't mean your conclusions follow. I know you are critical of the usual "text book" solution to this problem, but consider this paper: Invariant Bayesian estimation on manifolds, which addresses one of the main concerns I've always had; not sure if it addresses yours, though.

Jesús R. & Ned --- This simple model strongly suggests that the transient response so far exceeds 2 K. It is quite unlikely that the equilibrium response will be less than the transient response.

The particular constraint here - though a particularly convenient one (since based on recent satellite data) - pushes the result a little lower than others might have done. The equivalent Forest et al result based on C20th warming seems to be about 1.9-4.7C (reading off their graph, I didn't find it reported in the paper). I'd certainly be happy to see other attempts at credible estimates - the more the merrier, it's important to show (test) robustness.

David, I'll have to have another look at your results... I'm a bit surprised you seem to get such a high transient result.

jstults, thanks, that seems similar to Jeffreys really? We'll see how that turns out in the climate context as people are working on it. I anticipate that all reasonable alternatives to a uniform prior will give broadly similar results anyway...

James Annan --- Oops. The 2+ K is normalized for 2xCO2. The temperature record is what it is and I just simply match it using lnCO2 plus a small correction for AMO. I only mentioned it as providing easy to understand evidence that the Charney climate sensitivity exceeds 2 K, in agreement with IPCC AR4.

Yes, I realised it was 2xCO2, but even so it seems a little surprising to see such a high estimate given that you have ignored aerosol forcing (which probably cancels out the GHG to some extent). Also, your (generally reasonable) suggestion of a 50% uprating of the transient to equilibrium sensitivity isn't really consistent with your use of a 1 decade fixed lag, but I'm quibbling...

You wrote "you have ignored aerosol forcing", but you have ignored the albedo forcing. See: Flanner, M. G., Shell, K. M., Barlage, M., Perovich, D. K. and Tschudi, M. A. (2011) 'Radiative forcing and albedo feedback from the Northern Hemisphere cryosphere between 1979 and 2008', Nature Geosci, advance online publication. doi:10.1038/ngeo1062. Cheers, Alastair.

A nice simple article in NY Times about misuse of statistics:

James Annan --- I was pleasantly surprised to see how well that zero reservoir model does with a one decade lag. I tried both semidecades and bidecades and neither did anywhere close to as well.

Hansen and Sato 2011 use priors that really are prior. Sensitivity is found from the relation of past temperatures to past forcings (both estimated), the latter based in large part on past CO2 measured in ice cores. They find S = 3. However, if past temperatures were higher then S is higher and vice versa, and either way we are in dangerous territory with respect to sea level change. Thus if you find lower S this cannot give economists and planners much comfort. Only ecologists get a break. Pete Dunkelberg

Nosmo, I like that article. Andrew Gelman has also blogged about that paper.
Alastair, no I haven't :-)

Curry has noticed your paper. Doesn't that warm the cockles of your heart?

Well James, I find that rather surprising! The word albedo does not appear anywhere in your paper :-(

Tee hee! Now James gets to re-consider... "...perhaps this is the one thing worse than being talked about." You are right up there with Schwartz!

Ron Cram: I am a fan of Stephen Schwartz of Brookhaven National Labs. ... In 2010, he also published "Why hasn't Earth warmed as much as expected?"
curryja: Thanks for the links, I agree these are good papers.
Ron Cram: Good papers and they were published after AR4. Annan and Hargreaves think it is "very likely" sensitivity is less than 4.5C. But the Schwartz paper increases the chance sensitivity is below 1.5C. The Schwartz estimate is based on CRU data. If there is a systemic warming bias in CRU data (as Climategate hints there may be), then the Schwartz estimate may be significantly too high.
curryja: [Silence]

You might want to set the record straight on a reasonable 5% lower bound for S.

I would like to have your comment on these observations by Alexander Harvey. My impression is that he has noticed a serious error in your calculations. If he is right, as I believe, the effect is minuscule in the case that you discuss.

Thanks for the links. Actually, I already had something on Curry in the works, and will post it shortly. I'd even seen Alexander Harvey's comment and was waiting to see if any reader over there could answer it - or indeed if he would try asking me...

As there was one additional question in Judith's blog, I decided to continue there. My impression is still that your calculations are seriously in error and the quantitative results totally wrong. On the other hand I agree that choosing the prior may in some cases strongly influence the results. Your example was, however, not one of those cases as far as I can see.

After one badly slept night, I know what I missed in the argument. The full explanation is there. As I state there, I should have seen this immediately. One problem is that I read your paper a couple months ago and checked only some parts of it now from a computer screen. For me this seems to increase the risk of missing something essential from the content.

I think that your sentence "A Gaussian likelihood in feedback space has the inconvenient property that f(O|L = 0) is strictly greater than zero, and so for all large S, f(O|S) is bounded below by a constant." is not the best way of describing the issues related to the fact that the relationship between S and L breaks at L = 0. It is possible that L < 0, but that does not correspond to S < 0, but to an oscillating or unstable system.

Ah, glad to see that it was all my fault all along, even though my calculations are in fact correct :-)

I am really embarrassed about the unfounded certainty that I expressed in my postings. I should have known better and I apologize for that. While I was looking deeper into the problem, the question came up whether it is more proper to give a prior for L or for S. If one accepts the mechanism of feedbacks as the main source for the potential high sensitivity, the conclusion could be that one should consider the strength of the feedback when proposing a proper prior. Alternatively one might wish to restrict the prior by combining arguments related to the feedback with arguments based directly on sensitivity.
Looking at what we can learn from feedbacks, it appears to me quite reasonable to argue that the value L=0 does not have such a special status that would allow for a prior peaking strongly at values very close to it on the positive side. It would rather be natural to argue that the strength of feedbacks could, in the absence of all observations, equally well be larger than 1 as slightly smaller than 1. This would mean that the first prior should be smooth at L=0 (the strength of the overall feedback is 1-L). Adding some observations to this first prior, the first thing to add might be the observation that the climate is not strongly unstable. This means that L > 0. This requirement could be included in the prior for further considerations. On this basis we could assume that the prior must have a finite value at L = 0 and behave smoothly near this point. The value at L = 0 may be 0, but need not be, and allowing it to have a non-zero value would imply less information than requiring it to be zero. A finite value of the pdf of L at L=0 means that the tail of the prior for S must be bounded by C/S^2 where C is some constant. This would give additional support for a Cauchy-like distribution. Perhaps all this has been clear to you, but my purpose is to switch from badly founded criticism to constructive dialog.

I don't think it should matter whether the prior is stated in terms of L or S - the important thing is that it should represent a reasonable belief, but it's possible that one viewpoint will lead more intuitively to such beliefs. The FG choice of uniform in L was really an accident (they weren't really doing a Bayesian analysis, merely presented their confidence interval as a probability interval) and so was never likely to be taken seriously as a Bayesian estimate - indeed they were quite diffident about it themselves.

Even the Cauchy type of prior has a big problem when it comes to conventional economic analyses if you include a sufficiently nonlinear utility a la Weitzman. I view that as probably a problem with conventional economic analyses though :-) I'm not sure how many readers will have realised this. I didn't mention it in the paper as it would have required a significant digression.

I wonder if a few more years of data has changed anything... they only used data from 1985-1996 with a gap for Pinatubo. But I don't know what radiation data are available since then.

I understand that the prior can equally well be expressed in any variable with a differentiable one-to-one relationship to another. As the prior represents rational expectations, all arguments influencing these expectations should be taken into account, and looking at the prior using a particular variable may be helpful in this respect. In my previous message I proposed that the requirement of a finite bound for the prior when expressed as a pdf of L is a possible requirement for a prior.

I agree on the sensitivity of the economic results to assumptions about the possible development of the future damage costs and about how discounting is used in determining the present value of total costs. Several people (including Stern) have made proposals that may lead to very high costs for the higher climate sensitivities. With such estimates 1/S^2 may not be a fast enough cutoff for making results finite or insensitive to details of the prior. Still, accepting that the prior for S must fall off as 1/S^2 or faster for large S would help in reducing the uncertainties.
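To make the change-of-variables argument in this exchange concrete, here is the standard transformation rule written out. This is a minimal sketch assuming, as the comments do, that sensitivity and feedback are related by S = k/L for some positive constant k; k is a placeholder for illustration, not a value from the paper:

```latex
% Prior on S induced by a prior p_L on the feedback parameter L, with S = k/L:
p_S(S) \;=\; p_L\!\left(\tfrac{k}{S}\right)\left|\frac{dL}{dS}\right|
       \;=\; p_L\!\left(\tfrac{k}{S}\right)\frac{k}{S^{2}} .
% If p_L is finite near L = 0, then for large S the tail obeys
p_S(S) \;\le\; \frac{C}{S^{2}}, \qquad C = k \sup_{0<L<\epsilon} p_L(L),
% which is the 1/S^2 bound (and the Cauchy-like tail) invoked above.
```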
{"url":"http://julesandjames.blogspot.com/2011/01/better-late.html","timestamp":"2014-04-20T03:10:07Z","content_type":null,"content_length":"166834","record_id":"<urn:uuid:b9c8064d-4980-460c-b5c0-7b9660fe1bcf>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
The basis for the SIPHER algorithm was discovered within a U.S. Air Force Research Laboratory (AFRL)-sponsored imagery research project. It is based on the concept of objects being in or out of spatial intensity phase with one another. We first define this "spatial intensity phase" quantity mathematically, then compare it to conventional signal phase relationships, and finally apply it to some images to demonstrate its behavior and utility for discriminating objects. Applications include all forms of image interpretation, from airborne reconnaissance to medical image interpretation.

We define spatial intensity phase (φ_SI) as an independent variable which contributes, along with other independent variables, to producing an amplitude in image pixels. This is similar to phase relationships producing time-dependent and phase-dependent amplitudes in signal processing situations. Hence, our general equation for this behavior is:

A = f(φ_SI, V1, ..., Vn)

Where: f is a function of this spatial intensity phase (φ_SI) and possibly other independent variables, V1, ..., Vn; and A is the amplitude.

Analogously, in signal processing terms, for a simple sinusoid we can express some voltage amplitude, V, as a function of the independent variables phase, φ, frequency, f, and time, t. Hence, a comparable signal equation is:

V = f(φ, f, t)
V = V0 sin(2πft + φ)

Where: V is the time-varying signal amplitude in volts, f is frequency in Hz (constant if not frequency-modulated), t is time in seconds, φ is phase in radians, and V0 is the peak or maximum voltage.
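As a quick illustration of the sinusoid the article uses for its signal analogy, here is a minimal numerical sketch; the sample values are arbitrary, chosen only for illustration:

```python
import numpy as np

V0, f, phi = 1.0, 50.0, np.pi / 4          # peak volts, frequency [Hz], phase [rad]
t = np.linspace(0.0, 0.1, 1000)            # time axis [s]
V = V0 * np.sin(2 * np.pi * f * t + phi)   # V = V0 sin(2*pi*f*t + phi)

# Two signals of the same frequency that differ only in phi are "out of phase";
# here V_ref plays the role of the in-phase reference the article compares against.
V_ref = V0 * np.sin(2 * np.pi * f * t)     # reference signal, phi = 0
```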
{"url":"http://www.advancedimagingpro.com/print/Advanced-Imaging-Magazine/Solving-a-SIPHER/1$6497","timestamp":"2014-04-18T05:31:18Z","content_type":null,"content_length":"30343","record_id":"<urn:uuid:0f20f33d-9b73-48f6-8910-95e444ffc67d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Belvedere, GA Science Tutor

Find a Belvedere, GA Science Tutor

...I love working with students to help them succeed and pursue their dreams. I played four years of varsity soccer throughout high school, receiving several all-star awards. I was then captain my senior year and played intramurals through college. I have been playing chess for more than 15 years now.
11 Subjects: including physical science, biology, chemistry, physics

...(I took organic chemistry as a freshman) for 3 different instructors. In addition, I have taken advanced (organometallic chemistry: TA'd for this instructor) and graduate-level (bio-organic and synthesis: I TA'd for both of these instructors) courses within the field of organic chemistry. My int...
3 Subjects: including chemistry, biochemistry, organic chemistry

Hello! My name is Saed and I am currently a 4th-year medical student finishing off my last year here in Atlanta, GA, at many of the hospitals downtown. I am quite flexible with the subjects I can tutor, as I hold a bachelor's degree in biology, an MBA, and (by the end of the year) an MD. I have tutored an...
30 Subjects: including pharmacology, genetics, chemistry, microbiology

...While in grad school at TECH, I taught Thermodynamics. Also, I was a Physics major at GA State and earned a 4.0 GPA. Before that, I was a reactor operator on a nuclear submarine in the Navy.
2 Subjects: including physics, calculus

...Also, I was on the swim team for 3 years. I have taken many Religion courses at Emory University and have passed with an A, A- or B+. Additionally, studying and learning about religion has always been a hobby, and I read in my spare time. I have been a part of Mock Trial, which is a huge publi...
34 Subjects: including physiology, SAT reading, Microsoft Windows, sociology
{"url":"http://www.purplemath.com/Belvedere_GA_Science_tutors.php","timestamp":"2014-04-19T14:40:59Z","content_type":null,"content_length":"23921","record_id":"<urn:uuid:481c24df-be5e-4998-a03c-4c7b34b11f13>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of multilayer perceptron

A multilayer perceptron is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It is a modification of the standard linear perceptron in that it uses three or more layers of neurons (nodes) with nonlinear activation functions, and is more powerful than the perceptron in that it can distinguish data that is not linearly separable, or separable by a hyperplane.

Activation function

If a multilayer perceptron consists of a linear activation function in all neurons, that is, a simple on-off mechanism to determine whether or not a neuron fires, then it is easily proved with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). What makes a multilayer perceptron different is that each neuron uses a nonlinear activation function which was developed to model the frequency of action potentials, or firing, of biological neurons in the brain. This function is modeled in several ways, but must always be normalizable and differentiable.

The two main activation functions used in current applications are both sigmoids, and are described by

$\phi(v_i) = \tanh(v_i) \quad \text{and} \quad \phi(v_i) = (1+e^{-v_i})^{-1}$,

in which the former function is a hyperbolic tangent which ranges from -1 to 1, and the latter is equivalent in shape but ranges from 0 to 1. Here $y_i = \phi(v_i)$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input synapses. More specialized activation functions include radial basis functions, which are used in another class of supervised neural network models.

The multilayer perceptron consists of an input and an output layer with one or more hidden layers of nonlinearly-activating nodes. Each node in one layer connects with a certain weight $w_{ij}$ to every node in the following layer.

Learning through backpropagation

Learning occurs in the perceptron by changing connection weights (or synaptic weights) after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation, a generalization of the least mean squares algorithm in the linear perceptron.

We represent the error in output node $j$ in the $n$th data point by $e_j(n)=d_j(n)-y_j(n)$, where $d$ is the target value and $y$ is the value produced by the perceptron. We then make corrections to the weights of the nodes based on those corrections which minimize the energy of error in the entire output, given by

$\mathcal{E}(n)=\frac{1}{2}\sum_j e_j^2(n)$.

By the theory of differentials, we find our change in each weight to be

$\Delta w_{ji}(n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n)$

where $y_i$ is the output of the previous neuron and $\eta$ is the learning rate, which is carefully selected to ensure that the weights converge to a response that is neither too specific nor too general. In programming applications, $\eta$ typically ranges from 0.2 to 0.8.

The derivative to be calculated depends on the input synapse sum $v_j$, which itself varies.
It is easy to prove that for an output node this derivative can be simplified to

$-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\,\phi'(v_j(n))$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$-\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n))\sum_k \left(-\frac{\partial\mathcal{E}(n)}{\partial v_k(n)}\right) w_{kj}(n)$.

Note that this depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden layer weights, we must first change the output layer weights according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.

Multilayer perceptrons using a backpropagation algorithm are the standard algorithm for any supervised-learning pattern recognition process and the subject of ongoing research in computational neuroscience and parallel distributed processing. They are useful in research in terms of their ability to solve problems stochastically, which often allows one to get approximate solutions for extremely complex problems.

Currently, they are most commonly seen in speech recognition, image recognition, and machine translation software, but they have also seen applications in other fields such as cyber security. In general, their most important use has been in the growing field of artificial intelligence, where the multilayer perceptron's power comes from its similarity to certain biological neural networks in the human brain.
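The update rules above translate almost line for line into code. Below is a minimal NumPy sketch of a one-hidden-layer perceptron trained on XOR with tanh activations, illustrating the delta rule as derived here; it is not a reference implementation, and the network size, learning rate, and epoch count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)        # XOR targets

W1 = rng.normal(0.0, 1.0, (3, 3))   # hidden weights: 3 units x (x1, x2, bias)
W2 = rng.normal(0.0, 1.0, (1, 4))   # output weights: 1 unit x (h1..h3, bias)
eta = 0.5                           # learning rate, within the 0.2-0.8 range above
dphi = lambda y: 1.0 - y ** 2       # phi'(v) for phi = tanh, written via y = tanh(v)

for epoch in range(5000):
    for x, d in zip(X, D):
        a = np.append(x, 1.0)                      # input plus bias term
        h = np.tanh(W1 @ a)                        # hidden outputs y_i = phi(v_i)
        hb = np.append(h, 1.0)                     # hidden outputs plus bias
        y = np.tanh(W2 @ hb)                       # network output
        delta2 = (d - y) * dphi(y)                 # output node: e_j(n) * phi'(v_j)
        delta1 = dphi(h) * (W2[:, :3].T @ delta2)  # hidden nodes: backpropagated deltas
        W2 += eta * np.outer(delta2, hb)           # Delta w_ji = eta * delta_j * y_i
        W1 += eta * np.outer(delta1, a)

for x in X:
    hb = np.append(np.tanh(W1 @ np.append(x, 1.0)), 1.0)
    print(x, float(np.tanh(W2 @ hb)))   # should approach 0, 1, 1, 0
```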
{"url":"http://www.reference.com/browse/multilayer+perceptron","timestamp":"2014-04-20T09:44:04Z","content_type":null,"content_length":"83098","record_id":"<urn:uuid:a4ea059a-e5d7-4d52-ac8d-c78835bce5b7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Can someone tell me what button the professor is hitting...
• one year ago
{"url":"http://openstudy.com/updates/50dfb798e4b050087cd0b743","timestamp":"2014-04-19T17:12:24Z","content_type":null,"content_length":"128509","record_id":"<urn:uuid:8d1bb29f-2753-42e1-9644-6bd9819821b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Somerdale, NJ Trigonometry Tutor

Find a Somerdale, NJ Trigonometry Tutor

...After all, math IS fun! In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university.
13 Subjects: including trigonometry, calculus, algebra 1, geometry

...My tutoring is guaranteed: During our first session, I will assess your situation and determine a grade that I think you can get with regular tutoring. If you don't get that grade, I will refund your money, minus any commission I paid to this website. Please note that I only tutor college stude...
11 Subjects: including trigonometry, calculus, statistics, precalculus

...Previously, I completed undergraduate work at North Carolina State University for a degree in Philosophy. Math is a subject that can be a bit difficult for some folks, so I really love the chance to break down barriers and make math accessible for students that are struggling with aspects of mat...
22 Subjects: including trigonometry, calculus, geometry, statistics

...Peter is always willing to offer flexible scheduling to suit the client's needs. He is also prepared to be responsive to any budgetary concerns. My qualification for tutoring GMAT is based upon (1) my academic record and (2) my workplace experience. Academic achievements include a BS (with honors) in Applied Physics and a Doctorate in Engineering Physics from Oxford University.
10 Subjects: including trigonometry, calculus, algebra 1, GRE

I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including trigonometry, physics, calculus, geometry
{"url":"http://www.purplemath.com/Somerdale_NJ_trigonometry_tutors.php","timestamp":"2014-04-16T19:37:44Z","content_type":null,"content_length":"24596","record_id":"<urn:uuid:70e47fe0-079d-42c9-94b9-5951d0585bab>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

I need help setting this up. One solution containing 20 percent alcohol is mixed with another solution containing 60 percent alcohol to make 10 gallons of a solution that is 32 percent alcohol. How much of each solution is used?

So this is basically what you want to do: 0.32(10) = 0.2(x) + 0.6(10 − x). Set it up and find what x equals, which is the amount of the 20 percent solution needed. Once you know that, you can find the amount of the 60% solution by subtracting the amount of the 20 percent solution from 10, the total number of gallons. I hope this makes sense — do you understand?

Yes, it makes more sense than my setup. Thanks.

What did you get for your answer? (I am studying this too and want to compare.)

x = 71/5 and y = −21/5. Am I correct?

Let's think: does it make sense that one would be negative? I don't think so. So let's work through the equation and do what we can: 3.2 = 0.2x + 6 − 0.6x. We can combine like terms and get −2.8 = −0.4x, i.e. 2.8 = 0.4x. Dividing both sides by 0.4 gives x = 7. x represents the number of gallons of the 20% solution, so we can subtract 7 from 10 to get the number of gallons of the 60% solution: 3. I hope you understand! Any more questions?

Oh okay, I put a ten by the six on accident.

Haha, totally okay!
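For anyone checking their setup, the same equation can be verified in a couple of lines of Python; this is just the arithmetic from the thread, nothing more:

```python
# Solve 0.2*x + 0.6*(10 - x) = 0.32*10, where x = gallons of the 20% solution.
total_alcohol = 0.32 * 10                      # 3.2 gallons of pure alcohol in the mix
x = (0.6 * 10 - total_alcohol) / (0.6 - 0.2)   # rearranged from 6 - 0.4x = 3.2
print(x, 10 - x)                               # 7.0 gallons of 20%, 3.0 gallons of 60%
```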
{"url":"http://openstudy.com/updates/5097175ae4b0d0275a3d0557","timestamp":"2014-04-21T10:22:52Z","content_type":null,"content_length":"44978","record_id":"<urn:uuid:2a6c3c88-a249-4e12-a99f-fd202f0d863d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

what is the y-intercept of y=-2x-7 and how do u get it
• one year ago
{"url":"http://openstudy.com/updates/50e62377e4b04bcb15167ed1","timestamp":"2014-04-20T08:44:50Z","content_type":null,"content_length":"45358","record_id":"<urn:uuid:0c0b738d-4047-441e-bd92-2998ea4964bd>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
HowStuffWorks "How Bits and Bytes Work"

If you have used a computer for more than five minutes, then you have heard the words bits and bytes. Both RAM and hard disk capacities are measured in bytes, as are file sizes when you examine them in a file viewer. You might hear an advertisement that says, "This computer has a 32-bit Pentium processor with 64 megabytes of RAM and 2.1 gigabytes of hard disk space." And many HowStuffWorks articles talk about bytes (for example, How CDs Work). In this article, we will discuss bits and bytes so that you have a complete understanding.

Decimal Numbers

The easiest way to understand bits is to compare them to something you know: digits. A digit is a single place that can hold numerical values between 0 and 9. Digits are normally combined together in groups to create larger numbers. For example, 6,357 has four digits. It is understood that in the number 6,357, the 7 is filling the "1s place," while the 5 is filling the 10s place, the 3 is filling the 100s place and the 6 is filling the 1,000s place. So you could express things this way if you wanted to be explicit:

(6 * 1000) + (3 * 100) + (5 * 10) + (7 * 1) = 6000 + 300 + 50 + 7 = 6357

Another way to express it would be to use powers of 10. Assuming that we are going to represent the concept of "raised to the power of" with the "^" symbol (so "10 squared" is written as "10^2"), another way to express it is like this:

(6 * 10^3) + (3 * 10^2) + (5 * 10^1) + (7 * 10^0) = 6000 + 300 + 50 + 7 = 6357

What you can see from this expression is that each digit is a placeholder for the next higher power of 10, starting in the first digit with 10 raised to the power of zero.

That should all feel pretty comfortable -- we work with decimal digits every day. The neat thing about number systems is that there is nothing that forces you to have 10 different values in a digit. Our base-10 number system likely grew up because we have 10 fingers, but if we happened to evolve to have eight fingers instead, we would probably have a base-8 number system. You can have base-anything number systems. In fact, there are lots of good reasons to use different bases in different situations. Computers happen to operate using the base-2 number system, also known as the binary number system (just like the base-10 number system is known as the decimal number system). Find out why and how that works in the next section.
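The place-value expansion above is easy to express in code. Here is a small sketch (my own, not from the article) that decomposes an integer in any base and reconstructs it, mirroring the powers-of-10 expansion of 6,357:

```python
def digits(n, base=10):
    """Digits of n in the given base, most significant first."""
    ds = []
    while n:
        ds.append(n % base)
        n //= base
    return ds[::-1] or [0]

assert digits(6357) == [6, 3, 5, 7]   # (6*10^3) + (3*10^2) + (5*10^1) + (7*10^0)
assert digits(6357, 2) == [1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1]   # same value in base 2

# Reconstruction: each digit is a placeholder for the next higher power of the base.
n = sum(d * 2 ** i for i, d in enumerate(reversed(digits(6357, 2))))
assert n == 6357
```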
{"url":"http://computer.howstuffworks.com/bytes.htm","timestamp":"2014-04-20T08:27:11Z","content_type":null,"content_length":"120896","record_id":"<urn:uuid:15b0cca0-940b-4c69-89b1-d6d0dcc1f9f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Did Someone Say, "Bubble"?
Replies: 6    Last Post: May 12, 2013 3:02 PM

Re: Did Someone Say, "Bubble"?
Posted: May 12, 2013 10:57 AM

I'm sorry to interrupt an interesting discussion, but I would rather not unsubscribe to this list. This topic and discussion, though interesting and informative on education broadly, seems to be outside or at least far from the center of the definition of this list: "A discussion on all topics *relating to mathematics education*." I find almost nothing in this discussion relating specifically to *mathematics education.* It could be addressed just as well to a list for English, sociology, chemistry, or art educators. As a long time subscriber, I would appreciate it if users of the list would send messages related to the list's definition.

Martin Flashman
Department of Mathematics
Humboldt State University
Arcata, CA 95521
Office: BSS 356

Thread:
5/11/13  Did Someone Say, "Bubble"?  (Haim)
5/11/13  Re: Did Someone Say, "Bubble"?  (Bret Taylor)
5/11/13  Re: Did Someone Say, "Bubble"?  (Guy Brandenburg)
5/12/13  Re: Did Someone Say, "Bubble"?  (Haim)
5/12/13  Re: Did Someone Say, "Bubble"?  (Martin E. Flashman)
5/12/13  Re: Did Someone Say, "Bubble"?  (Petrak, Daniel G.)
5/12/13  Re: Did Someone Say, "Bubble"?  (Haim)
{"url":"http://mathforum.org/kb/message.jspa?messageID=9122705","timestamp":"2014-04-19T15:49:47Z","content_type":null,"content_length":"24816","record_id":"<urn:uuid:bb2065e6-2bc3-42db-b68f-3b3ffe770e71>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving equation with logs and variables

November 27th 2011, 09:17 AM  #1
Problem: f(x) = 4^(x) for all real values of x. If p > 1 and q > 1, then f^(-1)(p)·f^(-1)(q) = ? (The notation f^(-1) means the inverse function.) The answer is log(base 4)p × log(base 4)q.
I first attempted to solve for the inverse function by switching y and x and then solving for y. This did not get me log(base 4)x, which the book says is the inverse of the exponential function.

November 27th 2011, 12:42 PM  #2
Re: Solving equation with logs and variables
Can you show how you came to your answer?

November 27th 2011, 05:03 PM  #3
Re: Solving equation with logs and variables
I didn't come to the right answer, but I have y = 4^(x). I switched x and y to solve for the inverse: x = 4^(y). I took the log of both sides: log x = y log 4. Then I got y = log x/log 4. I then plugged in p and q and fiddled with it from there to no avail.

November 27th 2011, 05:49 PM  #4
Re: Solving equation with logs and variables
I assume again ... $f(x) = 4^x$ (you used a capital F). You should know that the inverse of an exponential function is a log function, but here is the derivation anyway ...
$y = 4^x$
$x = 4^y$
$\log_4(x) = \log_4(4^y)$
$\log_4(x) = y$
so ... $f^{-1}(x) = \log_4(x)$ and
$f^{-1}(p) \cdot f^{-1}(q) = \log_4(p) \cdot \log_4(q)$

November 28th 2011, 05:11 AM  #5
Re: Solving equation with logs and variables
Which logarithm did you use? $a^x$ gives a different function for every different a, so its inverse $\log_a(x)$ is a different function for every a. If you had used base-4 logarithms, then since $4 = 4^1$, $\log_4(4) = 1$, so your "log x/log 4" is just $\log_4(x)$. For any other logarithm, it is true that $\log_a(x)/\log_a(4) = \log_4(x)$. That's a property of logarithms worth knowing:
$\frac{\log_a(x)}{\log_a(b)} = \log_b(x)$

November 28th 2011, 10:20 AM  #6
Re: Solving equation with logs and variables
Change of base rule. I am aware of it. I see how that applies here now. Thanks!
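A quick numerical check of the accepted answer and of the change-of-base rule; the values of p and q below are arbitrary:

```python
from math import log

p, q = 7.0, 11.0                       # any p, q > 1
f_inv = lambda y: log(y, 4)            # inverse of f(x) = 4**x, i.e. log base 4
assert abs(4 ** f_inv(p) - p) < 1e-9   # f(f_inv(p)) == p, so it really inverts f
assert abs(f_inv(p) - log(p) / log(4)) < 1e-12   # change of base: log_a(x)/log_a(4)
print(f_inv(p) * f_inv(q))             # = log_4(p) * log_4(q)
```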
{"url":"http://mathhelpforum.com/pre-calculus/192813-solving-equation-logs-variables.html","timestamp":"2014-04-19T15:58:44Z","content_type":null,"content_length":"48081","record_id":"<urn:uuid:7117a671-ac57-42bf-a9d5-50c4880908a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
calculus integration problem

Indefinite integral of e^(7x)(sin(4x))dx. Can't seem to get the substitution/integration by parts to work for me. Thanks in advance!

Use integration by parts twice and treat the integral as the unknown. Then you can solve for the integral as if it's an algebra problem.

Integration by parts should work, but sometimes one of the most powerful methods of integration is by taking derivatives. What we are trying to find is $I=\int e^{7x}\sin(4x)\,dx$. If we differentiate $e^{7x}\sin(4x)$, we get

$\frac{d}{dx}(e^{7x}\sin(4x))=4e^{7x}\cos(4x) + 7e^{7x}\sin(4x)$

In reverse, we get that $\int (4e^{7x}\cos(4x) + 7e^{7x}\sin(4x))\,dx=e^{7x}\sin(4x)$ and with a bit of rearranging we find

$7I= e^{7x}\sin(4x) - 4\int e^{7x}\cos(4x)\,dx$

With me so far? Now, if we were to divide both sides of the equation by 7, we'd have another expression for I (the function we are integrating). But there's another integral... Let's use the same process to solve for this integral. Let's call this integral J... so $7I=e^{7x}\sin(4x)-4J$ if $J=\int e^{7x}\cos(4x)\,dx$. Taking the derivative of $e^{7x}\cos(4x)$, we get

$\frac{d}{dx}(e^{7x}\cos(4x))=-4e^{7x}\sin(4x) + 7e^{7x}\cos(4x)$

or in reverse, that $\int (-4e^{7x}\sin(4x) + 7e^{7x}\cos(4x))\,dx = e^{7x}\cos(4x)$. With a bit of rearranging we find that...

$J=\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}\int e^{7x}\sin(4x)\,dx$ or $J=\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}I$

Substituting J into our original equation gives...

$7I= e^{7x}\sin (4x) - 4\left[\frac{1}{7}e^{7x}\cos(4x) + \frac{4}{7}I\right]$

which, when expanded and manipulated, reduces to

$\frac{65}{7}I=e^{7x}\sin(4x) - \frac{4}{7}e^{7x}\cos(4x)$

And finally, we can solve for I, which is what we were finding...

$I=\frac{7}{65}e^{7x}\sin(4x)-\frac{4}{65}e^{7x}\cos(4x) + C$ (the integration constant).

Yes, it is a fair bit of writing, but is extremely powerful if all else fails...
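If you have SymPy handy, the closed form can be checked mechanically; this is a verification sketch, not part of the derivation above:

```python
import sympy as sp

x = sp.symbols('x')
I = sp.integrate(sp.exp(7 * x) * sp.sin(4 * x), x)
expected = (sp.Rational(7, 65) * sp.exp(7 * x) * sp.sin(4 * x)
            - sp.Rational(4, 65) * sp.exp(7 * x) * sp.cos(4 * x))
print(sp.simplify(I - expected))   # 0, up to the integration constant
```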
{"url":"http://mathhelpforum.com/calculus/48411-calculus-integration-problem-print.html","timestamp":"2014-04-19T12:27:20Z","content_type":null,"content_length":"10091","record_id":"<urn:uuid:5798d1cd-b953-47fb-b0bf-395b56681ed3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from November 4, 2011 on Uncharted Territory Einstein Causes Confusion Shock! Filed under: Neutrinos, Physics, Relativity — Tim Joslin @ 2:52 pm Regular readers will be aware that I’ve offered an explanation of the apparent superluminal neutrinos detected in the CERN-OPERA experiment. Things have moved on since my last post on the subject, and I submitted a paper with a more thorough explanation to ArXiv a week ago – more about that another time. My argument boils down essentially to the point that I believe that light doesn’t travel at the same speed in every direction (relative to an observer on our moving planet, and according to our reckoning of time and distance) and the physics establishment does, for reasons that defy simple logic. A couple of days ago I thought I’d see if I could find some books that shed some light (groan!) on the matter. What I was interested in was whether physicists are confused. I think it’s fair to say that they are. I ended up tracking all the way back to Einstein’s seminal paper, On the Electrodynamics of Moving Bodies (1905). I found myself standing in Waterstones reading a translation in Hawking’s On the Shoulders of Giants, which basically consists of some reprints and a few pages of comment by the current Lucasian Professor. £22 (OK, £15 on Amazon) for a paperback. Nice work. Anyway, it’s possible to find On the Electrodynamics kicking around on the internet (pdf), though not easily on the first page of Google’s results, which I guess tells you something straight away about the readership of the “most important scientific paper of the 20th century”. Any reader of On the Electrodynamics can’t help being struck by the paper’s obvious shortcomings. Yes, shortcomings. Just because a paper includes brilliant, revolutionary ideas does not mean it is perfect in all respects. And On the Electrodynamics has two serious flaws which have perhaps contributed to today’s confusion: 1. References or, rather, the lack of them – Einstein’s assumption of isotropy Einstein did not include any references in his paper. As a result, we simply do not know how carefully he’d studied certain works of the era. In particular, it makes it difficult to evaluate his opening arguments. My impression is that Einstein just wanted to focus on the crux of his argument. Einstein first sets out his postulates, or assumptions: “…the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good. We will raise this conjecture (the purport of which will hereafter be called the ‘Principle of Relativity’) to the status of a postulate, and also introduce another postulate, which is only apparently irreconcilable with the former, namely, that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body. [my stress]“ Maybe these postulates are only nearly true. The crux of my argument is that if the speed of light is independent of its source, then we always (except in one very special case) need to apply the same equations Einstein used (Lorentz transformations) to determine the speed of light relative to ourselves, the observer. We can’t just assume we’re the stationary observer! It’s a simple point. Let’s read on a little more. 
Einstein is very particular about the need to reckon time in terms of synchronous clocks: "If at the point A of space there is a clock, an observer at A can determine the time values of events in the immediate proximity of A by finding the positions of the hands which are simultaneous with these events. If there is at the point B of space another clock in all respects resembling the one at A, it is possible for an observer at B to determine the time values of events in the immediate neighbourhood of B. But it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. We have so far defined only an 'A time' and a 'B time'. We have not defined a common 'time' for A and B, for the latter cannot be defined at all unless we establish by definition that the 'time' required by light to travel from A to B equals the 'time' it requires to travel from B to A. Let a ray of light start at the 'A time' t_A from A towards B, let it at the 'B time' t_B be reflected at B in the direction of A, and arrive again at A at the 'A time' t′_A. In accordance with definition the two clocks synchronize if:

t_B − t_A = t′_A − t_B

We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:—

1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.

Thus with the help of certain imaginary physical experiments we have settled what is to be understood by synchronous stationary clocks located at different places, and have evidently obtained a definition of 'simultaneous', or 'synchronous', and of 'time'. The 'time' of an event is that which is given simultaneously with the event by a stationary clock located at the place of the event, this clock being synchronous, and indeed synchronous for all time determinations, with a specified stationary clock. In agreement with experience we further assume the quantity

2AB/(t′_A − t_A) = c

to be a universal constant—the velocity of light in empty space."

He's going to go on, of course, to assume that the stationary clocks are in the "stationary" [sic] system, and show that the clocks will not appear synchronous to a moving observer. But how do we know that light would travel at the same speed from A to B as from B to A? This need not affect the round-trip time.

One book I found in Ealing Central Library is devoted specifically to this issue of synchronising clocks. In Einstein's Clocks, Poincaré's Maps by Peter Galison, we find (p.217) that the French mathematician, well, polymath, really, Henri Poincaré, understood the problem of "true" and "local" time in 1904: "'[Lorentz's] most ingenious idea was that of local time. Let us imagine two observers who want to set their watches by optical signals; they exchange their signals, but as they know that the transformation is not instantaneous, they take care to cross them. When the station B receives the signal of station A, its clock must not mark the same time as station A at the moment of the emission of the signal, but rather that time augmented by a constant representing the duration of the signal.' [wrote Poincaré - so far so good] At first, Poincaré considered the two clock-minders at A and B to be at rest – their observing stations were fixed with respect to the ether.
But then, as he had since 1900, Poincaré proceeded to ask what happened when the observers are in a frame of reference moving through the ether. In that case ‘the duration of the transmission will not be the same in the two directions, because station A, for example, moves towards any optical perturbation sent by B, while the station B retreats from a perturbation by A. Their watches set in this manner will not mark therefore true time, they will mark what one can call local time, in such a way that one of them will be offset with respect to the other. This is of little importance, because we don’t have any way to perceive it.‘ [my stress] True and local time differ. But nothing, Poincaré insisted, would allow A to realize that his clock will be set back relative to B’s, because B’s will be set back by precisely the same amount.’All the phenomena that will be produced at A for example, will be set back in time, but they will all be set back by the same amount, and the observer will not be able to perceive it because his watch will be set back’ [my stress]; thus, as the principle of relativity would have it, there is no means of knowing if he is at rest or in absolute movement. [my stress] Galison goes on (p.257ff) to speculate as to how familiar Einstein was in 1905 with Poincaré’s discussion of the idea of obtaining local time by the synchronisation of clocks. Regardless, when Einstein swept away the ideas of “local time” and “true time”, he took no account of the little difficulty highlighted by Poincaré. We can only speculate as to what Einstein thought and what he didn’t, but it would clearly have been more difficult to move away from the idea of the ether to that of relativity had it also been necessary to assume a “preferred” or isotropic reference frame relative to which the Earth is moving. That doesn’t mean it doesn’t exist, though. And, for my money, relativity includes a logical inconsistency. You simply can’t assume the light moves independent of its source and then perform transformations to show that the view of a moving observer of the reference frame in which the light is emitted perceives the light not to be moving at equal velocities in all directions relative to objects in the emitting frame, whilst those in the emitting frame magically do see equal velocities. 2. Unclear use of symbols There’s one way to “rescue” Special Relativity. Let’s imagine you’re a confused young physics student. You might imagine that the Lorentz transformations retain equal light speed in all directions despite diagrams to the contrary, that is, you might pay particular attention to the qualification “length contraction not depicted” in representations of Einstein’s train thought-experiment. You might define a thought-experiment (flashes when light from the centre reaches the ends of a moving train) and write something like: “In fact [from the point of view of an observer on the platform] it [the moving train] is shorter in the same proportion as the second flash is later than the rear one.” as Adam Hart-Davis does on p.231 of The Book of Time. Hart-Davis appears to assume time-dilation and the relativity of simultaneity are the same thing. His calculations are then totally confusing (even if we ignore the fact that he presents calculations based on the length of the train before he’s told us how long it actually is and later says km/h when he means km/s!). 
The formula for calculating the time delay between flashes at the rear and front of the train as perceived by the observer on the platform is:

t = (1/√(1 – v^2/c^2))(τ – vx/c^2)

• τ is the time difference as seen by the observer on the train
• t is the time difference as seen by the observer on the platform
• √ is supposed to be a square root sign – I've used ^2 for squared (can't see how to get a superscript on here)
• v is the velocity of the train (according to the observer on the train)
• c is the speed of light
• x is the length of the train (according to the observer on the train)

The term 1/√(1 – v^2/c^2) is known as γ (gamma) and can be ignored when v is small compared to c, as Hart-Davis does when calculating for a train moving at 22m/s. The curious thing is, he also ignores γ when calculating the delay when the train is running at 200,000,000m/s and gets a 44ns delay. He doesn't explain this. Hart-Davis then uses γ correctly to show the train looks only ~15m long (he implies this is exact – it isn't) rather than 20m.

Presumably what Hart-Davis has done is assume that the length contraction of the train (a factor of ~0.75) cancels out with the time dilation factor (also ~0.75), the proportion by which time on the train runs slower than on the platform. Is this correct? I guess so, though I'm not claiming to be 100% sure! Nevertheless, the length contraction doesn't cancel out the relativity of simultaneity – the signal still appears to take longer to travel to the front of the train from the perspective of the observer on the platform than from that of the observer on the train. The light simply has further to travel from the p.o.v. of the observer on the platform, as the front of the train is receding relative to the observer on the platform, but stationary relative to the observer on the train.

The statement: "In fact [from the point of view of an observer on the platform] it [the moving train] is shorter in the same proportion as the second flash is later than the rear one", is either totally confusing or simply incorrect. You'd think they'd make a lot of effort to ensure accuracy in a book intended to inform, so maybe they're confused. And maybe it's Albert's fault.

If you take a look at On the Electrodynamics you might notice that Einstein uses "t" and "τ" (Greek letter small tau) to derive the difference (the formula above) between times observed by stationary and moving observers. He then, breathlessly, one might imagine, rushes on to derive the time dilation factor (γ) in the rates of the clocks of the moving and stationary observers, using the same "t" and "τ". What he really meant to relate in the second case were, of course, "δt" and "δτ". Naughty Albert! It's like Peter Crouch pulling the hair of the Trinidad defender to score in 2006. He scored a goal, so we'll ignore that little detail. Or maybe Einstein was being deliberately obscure just to see if people really understood!
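The numbers in this passage are easy to reproduce. A short script (my own check, not Hart-Davis's) confirms both the ~15 m contraction and the two delay figures, with and without γ:

```python
from math import sqrt

c = 3.0e8        # speed of light [m/s]
v = 2.0e8        # train speed [m/s]
x = 20.0         # train length in its own frame [m]

gamma = 1.0 / sqrt(1.0 - v**2 / c**2)   # ~1.342
print(x / gamma)                        # contracted length: ~14.9 m ("~15 m")
print(v * x / c**2)                     # delay with gamma dropped: ~44.4 ns
print(gamma * v * x / c**2)             # magnitude of t = gamma*(tau - v*x/c^2)
                                        # at tau = 0: ~59.6 ns, the gamma factor
                                        # Hart-Davis omits
```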
{"url":"http://unchartedterritory.wordpress.com/2011/11/04/","timestamp":"2014-04-19T13:22:11Z","content_type":null,"content_length":"68578","record_id":"<urn:uuid:ee422b43-fdf4-4cb8-ba91-c0570f3929c9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts tagged paperless

One of my responsibilities at Jewish Day School is to write a weekly "tech tips" column for the online faculty news. This is one such tip. This one is, perhaps, particularly our-setup-specific (My Classes, Handins, Returns, etc.), but I think that the core ideas are worth sharing with the world.

One of the real challenges that we confront when teaching in a digital classroom is that there are a tremendous number of documents, spread across a tremendous number of computers, often in tremendously varying states of completion. A team of faculty is coalescing around digital portfolios this spring, and file management is the single greatest challenge that we're looking at initially. With that in mind, it seems timely to suggest some best practices for working with files in the My Classes folder on FirstClass:

• Email attachments hurt. If students are turning in their work as email attachments, it counts against their disk quota (which is pretty slim by this point in the year). And you have to open each and every single message to download the attachment so that you can read it. That's a recipe for frustration. Instead, have your students upload their files directly to the Handins folder — they can just drag them from their computer desktop into the FirstClass folder (or choose Upload… from the File menu in FirstClass). Files in the My Classes folder do not count against anyone's disk quota. The best part: you can now select a group of files in your Handins folder and drag them to your computer desktop to download them all at once (no more opening every individual message!).

• File names matter. Ask your students to include both the name of the assignment and their name in the name of the file that they're uploading. If the students don't put their name on their files, it's a hassle to figure out who turned in what. And likewise, if they don't put the assignment on the file, you've got to open the file to find out. The file names don't need to be Homeric epics: "Feb. 18 Essay – Seth B.doc" works great as a file name.

• Students can't cheat from the Handins folder. They aren't able to open other people's work (or even their own), nor can they remove their work once it's turned in (so no coming back with an "improved" version after the fact). In fact, the only person who can open the files in the Handins folder is… the teacher.

• Students need to be told about the Returns folder. Every class has a Returns folder that has an individual folder for each student in the class. You can drag files you are returning to those students directly into those folders (from, say, your computer desktop). Only the student whose folder it is can open the folder and read the files (and they can't change them). Plus, now you don't have student files cluttering up your inbox and counting against your disk quota as email attachments!

• Be clear, but firm. You're teaching technical skills, and your students won't get it right at first. Help them to turn in their files correctly (i.e. in a way that is easy for you to work with), rather than fixing their mistakes. Every mistake you fix will end up being a mistake you have to fix every time.

Obviously, the list goes on, but these five best practices should help cut through some of the chaos and confusion accompanied by the proliferation of documents produced by a digital classroom!
Shelly Blake-Plock just posted a question on his blog about teaching math in a paperless environment (in fact, since I started gearing up to respond, he's posted some follow-ups as well).

Last year, wearing my math teacher hat (nominally given to me as a member of the Math & Computer Science department — normally only worn on the most formal occasions), I got involved in a project with my department trying to work with our students to develop a mathematical Wikipedia. The idea was that kids would write up their mathematical knowledge for the younger students and their classmates, creating a review site focused on what the students thought was important to know about the material we were covering in class. The big idea was that this would push the students to both reflect on what they knew (as they worked to articulate it for less experienced students) and take part in some independent learning (as they researched their topics to figure out how to write them up).

It wasn't really a rousing success, for a number of reasons, not the least of which was that the kids were assigned topics (rather than selecting their own) and ended up mostly parroting their textbook into the wiki. There wasn't any real collaboration or peer-review going on, at least not in a really critical sense ("Why did you explain it the way the text book does? I didn't get it then and I don't get it now… do you get it?")

However, Brian Lester and I got excited about the idea of how one would pursue this project from a mechanical standpoint: how would you post mathematics in an editable, readable and shareable way on the web? We went through a number of permutations, but the solution that I think contains all of the desired mechanical qualities is this: use MathML. There's a handy [take-a-deep-breath-this-is-about-to-be-a-lot-of-jargon] Javascript ASCII-to-MathML translator library online from Peter Jipsen at Chapman University. It works really well: you type in text as you would on a calculator and it gets typeset as you would see it in a professionally printed text. And you can go back and edit it. MathML requires a plug-in for Internet Explorer 7 (no idea about 8, but I'll bet it still needs the plug-in), but Firefox can read and parse MathML natively. Peter Jipsen has links to some helpful fonts to download to make it all look a little nicer, but they're truly optional. Once it's set up on your server, you just include a magic incantation at the beginning of the page to invoke the translator, type in your calculator equations, and whamm-o: pretty equations!

Now, this only handles equations on the web. We didn't get to graphs or diagrams in our experiments last year. But I can tell you where I would look for graphs — Google has an embeddable chart generator that might work. I hope there are other similar tools.

Again, all this is with the stated goal of readable, editable, shareable mathematics online. This doesn't address doing the exploratory work: this is the write-up and reflection after the exploration. Without a tablet, I'm not convinced that one can do general mathematical work on a computer. And with a tablet, I'd add FluidMath (still in beta, I think) to the list of must-haves.
{"url":"http://battis.net/blog/tag/paperless/","timestamp":"2014-04-20T13:18:38Z","content_type":null,"content_length":"33550","record_id":"<urn:uuid:86475bfa-73a4-4f8e-9d2e-40d7eca6a13d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: RADAR DEVICE

A radar device includes: an oscillator for generating a wave at a plurality of transmission frequencies; a transmitting antenna; a receiving antenna; a receiver for generating a real received signal; a Fourier transform unit for performing a Fourier transform on the real received signal in a time direction; a spectral peak detecting unit for receiving an input of a result of the Fourier transform to extract peak complex signal values of Doppler frequency points having a maximum amplitude; a distance calculating unit for storing the peak complex signal values and for calculating a distance to a reflecting object based on the stored peak complex signal values to output the obtained distance as a measured distance value; and a distance sign determining unit for determining validity of the measured distance value and for outputting the measured distance value and the Doppler frequency according to a result of determination.

A radar device for emitting a wave into space, receiving the wave reflected by an object present in the space, and performing signal processing on the received wave to measure the object, comprising: an oscillator for generating the wave at a plurality of transmission frequencies; a transmitting antenna for emitting the wave generated from the oscillator into the space; a receiving antenna for receiving an incoming wave; a receiver for detecting the received wave received by the receiving antenna to generate a real received signal; a Fourier transform unit for performing a Fourier transform on the real received signal generated from the receiver in a time direction; a spectral peak detecting unit for receiving an input of a result of the Fourier transform from the Fourier transform unit to extract peak complex signal values of Doppler frequency points at which an amplitude is maximum; a distance calculating unit for storing the peak complex signal values from the spectral peak detecting unit, which are obtained by using the plurality of transmission frequencies, and for calculating a distance to the reflecting object based on the stored peak complex signal values to output the obtained distance as a measured distance value; and a distance determining unit for determining validity of the measured distance value obtained from the distance calculating unit and outputting the measured distance value calculated in the distance calculating unit and the Doppler frequency detected in the spectral peak detecting unit according to a result of determination.

The radar device according to claim 1, wherein the distance determining unit comprises a distance sign determining unit for outputting the Doppler frequency detected in the spectral peak detecting unit only when a sign of the measured distance value calculated in the distance calculating unit is positive.
The radar device according to claim 1, wherein the distance determining unit comprises: a distance sign determining unit for determining a sign of the measured distance value calculated in the distance calculating unit as positive or negative and for outputting the measured distance value obtained in the distance calculating unit; and a sign reversing unit for reversing a sign of the Doppler frequency from which the peak complex signal value obtained in the spectral peak detecting unit is extracted and outputting the Doppler frequency with the reversed sign when the sign of the measured distance value calculated in the distance calculating unit is determined as negative by the distance sign determining unit.

The radar device according to claim 1, further comprising: a pulse modulator for performing pulse modulation on the wave generated in the oscillator; and a distance selecting unit for selecting real received signals having the same difference between a transmission time and a reception time from among real received signals from the receiver obtained by transmission of a plurality of pulses and outputting the selected real received signals and for outputting a distance estimate corresponding to the difference between the transmission time and the reception time, wherein: the transmitting antenna emits the wave having passed through the pulse modulator into the space; the Fourier transform unit performs the Fourier transform on the real received signals selected to be output from the distance selecting unit; and the distance determining unit comprises a distance range determining unit for comparing the distance estimate obtained in the distance selecting unit and the measured distance value obtained in the distance calculating unit with each other to output the measured distance value calculated in the distance calculating unit and the Doppler frequency calculated in the spectral peak detecting unit only when a difference obtained by the comparison is equal to or smaller than a predetermined value.
The radar device according to claim 1, further comprising: a pulse modulator for performing pulse modulation on the wave generated in the oscillator; and a distance selecting unit for selecting real received signals having the same difference between a transmission time and a reception time from among real received signals from the receiver obtained by transmission of a plurality of pulses and outputting the selected real received signals and for outputting a distance estimate corresponding to the difference between the transmission time and the reception time, wherein: the transmitting antenna emits the wave having passed through the pulse modulator into the space; the Fourier transform unit performs the Fourier transform on the real received signals selected to be output from the distance selecting unit; and the distance determining unit comprises: a distance determination correcting unit for comparing the distance estimate obtained in the distance selecting unit and the measured distance value calculated in the distance calculating unit with each other, correcting the measured distance value supposing that a sign of the Doppler frequency is reversed when a difference obtained by the comparison is equal to or larger than a predetermined value, and outputting the corrected measured distance value; and a distance range re-determining unit for comparing the corrected measured distance value obtained in the distance determination correcting unit with the distance estimate and outputting the corrected measured distance value and a sign-reversed Doppler frequency obtained by reversing the sign of the Doppler frequency only when a difference obtained by the comparison is equal to or smaller than the predetermined value.

The radar device according to claim 4, further comprising an amplitude determining unit provided between the spectral peak detecting unit and the distance calculating unit, wherein the amplitude determining unit compares a ratio of an amplitude value having the difference between the transmission time and the reception time at which a maximum amplitude value is obtained and another amplitude value when a plurality of the peak complex signal values are obtained from the same Doppler frequency in the result of the Fourier transform obtained from received signal samples obtained by sampling the received signal at different differences between the transmission time and the reception time and eliminates the peak complex signal value at the difference between the transmission time and the reception time at which the amplitude value whose ratio is smaller than the predetermined reference is obtained.
A radar device for emitting a wave into space, receiving the wave reflected by an object present in the space, and performing signal processing on the received wave to measure the object, comprising: an oscillator for generating the wave at a plurality of transmission frequencies; a transmitting antenna for emitting the wave generated from the oscillator into the space; a plurality of receiving antennas for receiving an incoming wave; a plurality of receivers for detecting the received wave received by the plurality of receiving antennas to generate a real received signal; a Fourier transform unit for performing a Fourier transform on the real received signal generated from the plurality of receivers in an element direction; a spectral peak detecting unit for receiving an input of a result of the Fourier transform from the Fourier transform unit to extract peak complex signal values of angle points at which an amplitude is maximum; a distance calculating unit for storing the peak complex signal values from the spectral peak detecting unit, which are obtained by using the plurality of transmission frequencies, and calculating a distance to the reflecting object based on the stored peak complex signal values to output the obtained distance as a measured distance value; and a distance determining unit for determining validity of the measured distance value obtained from the distance calculating unit and for outputting the measured distance value calculated in the distance calculating unit and the angle detected in the spectral peak detecting unit according to a result of determination.

The radar device according to claim 7, wherein the distance determining unit comprises a distance sign determining unit for outputting the angle detected in the spectral peak detecting unit only when a sign of the measured distance value calculated in the distance calculating unit is positive.

The radar device according to claim 7, wherein the distance determining unit comprises: a distance sign determining unit for determining a sign of the measured distance value calculated in the distance calculating unit as positive or negative and for outputting the measured distance value obtained in the distance calculating unit; and a sign reversing unit for reversing a sign of the angle from which the peak complex signal value obtained in the spectral peak detecting unit is extracted and outputting the angle with the reversed sign when the sign of the measured distance value calculated in the distance calculating unit is determined as negative by the distance sign determining unit.
The radar device according to claim 7, further comprising: a pulse modulator for performing pulse modulation on the wave generated in the oscillator; and a distance selecting unit for selecting real received signals having the same difference between a transmission time and a reception time from among real received signals from the plurality of receivers obtained by transmission of a plurality of pulses and outputting the selected real received signals and for outputting a distance estimate corresponding to the difference between the transmission time and the reception time, wherein: the transmitting antenna emits the wave having passed through the pulse modulator into the space; the Fourier transform unit performs the Fourier transform on the real received signals selected to be output from the distance selecting unit; and the distance determining unit comprises a distance range determining unit for comparing the distance estimate obtained in the distance selecting unit and the measured distance value obtained in the distance calculating unit with each other to output the measured distance value calculated in the distance calculating unit and the angle calculated in the spectral peak detecting unit only when a difference obtained by the comparison is equal to or smaller than a predetermined value.

The radar device according to claim 7, further comprising: a pulse modulator for performing pulse modulation on the wave generated in the oscillator; and a distance selecting unit for selecting real received signals having the same difference between a transmission time and a reception time from among real received signals from the plurality of receivers obtained by transmission of a plurality of pulses and outputting the selected real received signals and for outputting a distance estimate corresponding to the difference between the transmission time and the reception time, wherein: the transmitting antenna emits the wave having passed through the pulse modulator into the space; the Fourier transform unit performs the Fourier transform on the real received signals selected to be output from the distance selecting unit; and the distance determining unit comprises: a distance determination correcting unit for comparing the distance estimate obtained in the distance selecting unit and the measured distance value calculated in the distance calculating unit with each other, correcting the measured distance value supposing that a sign of the angle is reversed when a difference obtained by the comparison is equal to or larger than a predetermined value, and for outputting the corrected measured distance value; and a distance range re-determining unit for comparing the corrected measured distance value obtained in the distance determination correcting unit with the distance estimate and for outputting the corrected measured distance value and a sign-reversed angle obtained by reversing the sign of the angle only when a difference obtained by the comparison is equal to or smaller than the predetermined value.
The radar device according to claim 10, further comprising an amplitude determining unit provided between the spectral peak detecting unit and the distance calculating unit, wherein the amplitude determining unit compares a ratio of an amplitude value having the difference between the transmission time and the reception time at which a maximum amplitude value is obtained and another amplitude value when a plurality of the peak complex signal values are obtained from the same angle in the result of the Fourier transform obtained from received signal samples obtained by sampling the received signal at different differences between the transmission time and the reception time and eliminates the peak complex signal value at the difference between the transmission time and the reception time at which the amplitude value whose ratio is smaller than the predetermined reference is obtained.

The radar device according to claim 7, wherein: the Fourier transform unit comprises a two-dimensional Fourier transform unit for performing a two-dimensional Fourier transform in a time direction and the element direction on the real received signals generated from the plurality of receivers; the spectral peak detecting unit comprises a two-dimensional spectral peak detecting unit for receiving an input of the result of the Fourier transform from the two-dimensional Fourier transform unit to extract the peak complex signal value at which the amplitude is maximum; and the distance determining unit outputs the measured distance value calculated in the distance calculating unit and the Doppler frequency and the angle detected in the spectral peak detecting unit according to the result of determination of validity of the measured distance value from the distance calculating unit.

BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention

The present invention relates to a radar device for emitting a wave into space, receiving the wave reflected by an object present in the space, and performing signal processing on the received wave to measure the object.

2. Description of the Related Art

Generally, a radar emits an electromagnetic wave into space and receives the electromagnetic wave reflected by a target present in the space to know the presence/absence of the target, specifically, to detect the target. When the target moves relative to the radar, the measurement of a frequency shift caused by the Doppler effect, specifically, the measurement of a Doppler frequency, also allows measurement of a relative velocity of the target, specifically, a Doppler velocity. For the measurement of the Doppler frequency, an I/Q detection system for obtaining two orthogonal signal components as received signals is generally used. According to this detection system, each of a received wave and a local wave is divided into two to prepare two combinations of the received wave and the local wave. For each combination, the received wave and the local wave are mixed by using a mixer to obtain received signals in two channels. The two received signal channels are referred to as an In-phase channel (I-channel) and a Quadrature-phase channel (Q-channel), respectively. For obtaining the Q-channel received signal of the received signals in two channels, a phase of either the received wave or the local wave is rotated through 90 degrees. As a result, orthogonal components between the I-channel and the Q-channel are obtained.
By performing a Fourier transform on a complex received signal obtained by regarding the I-channel as a real part and the Q-channel as an imaginary part, the amplitude at the frequency corresponding to the target Doppler frequency becomes large. As a result, the target Doppler frequency can be obtained (for example, see R. J. Doviak and D. S. Zrnic, "3. Radar and Its Environment," in Doppler Radar and Weather Observations, Second Ed., pp. 30-53, Academic Press, Inc., 1993). When the received signal is obtained only for one channel, specifically, only the I-channel, the received signal is a real signal. In this case, the Fourier transform of the received signal provides an amplitude distribution symmetrical about a frequency of 0. Therefore, even if the target Doppler frequency is positive, after the Fourier transform the amplitude becomes large at two points, one at a positive frequency and the other at a negative frequency (the amplitude has two peaks in frequency). On the contrary, even if the target Doppler frequency is negative, the amplitude similarly becomes large at two points (has two peaks), one at a positive frequency and the other at a negative frequency. Specifically, even if the absolute value of the Doppler frequency is obtained, its sign cannot be determined; the sign of the Doppler frequency remains ambiguous. An ambiguous sign of the Doppler frequency means that it is impossible to determine whether the target is approaching or receding.

Similar ambiguity in sign also appears in a radar using a digital beam forming (DBF) system, a technique of synthesizing received beams through signal processing. In the DBF system, received signals obtained from a plurality of receiving elements are subjected to the Fourier transform in an element direction to obtain a signal distribution in an angular direction; that is, the received beams are synthesized through signal processing (for example, see M. I. Skolnik, Introduction to Radar Systems, Third Ed., pp. 610-614, McGraw-Hill, 2001). In such a DBF-system radar, when the received signal is obtained only for one channel, specifically, when only a real received signal is obtained, the amplitude pattern of a received beam obtained by performing the Fourier transform on the received signal is symmetrical on the positive angle side and the negative angle side about a front direction defined as 0 degrees. Specifically, it is uncertain whether the incoming angle of the received wave is positive or negative.

As described above, in the Doppler radar, when only a real signal (only the I-channel) is obtained as the received signal, the sign of the Doppler frequency cannot be obtained. Furthermore, when a DBF-system antenna is used, information indicating whether a target angle (defining the front direction as 0 degrees) is positive or negative cannot be obtained. However, if a radar device is configured to have the I-channel alone, the number of components advantageously becomes smaller than in the case where the radar device is configured to have two channels, i.e., the I-channel and the Q-channel. Therefore, the radar device can be reduced in size as well as in cost.
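The sign ambiguity described above is easy to reproduce numerically. The following is a minimal NumPy sketch (all parameters are invented for illustration): with I/Q detection the spectrum has a single peak at the true Doppler frequency, while an I-channel-only (real) signal yields mirror peaks at both signs.

```python
import numpy as np

fs, f_d, n = 1000.0, 100.0, 200   # sample rate [Hz], Doppler shift [Hz], samples
t = np.arange(n) / fs

iq = np.exp(2j * np.pi * f_d * t)        # I/Q detection: complex received signal
real_only = np.cos(2 * np.pi * f_d * t)  # I-channel only: real received signal

freqs = np.fft.fftfreq(n, d=1 / fs)
for name, sig in (("I/Q", iq), ("real only", real_only)):
    spec = np.abs(np.fft.fft(sig))
    peaks = np.sort(freqs[spec > 0.5 * spec.max()])
    print(name, "peak frequencies (Hz):", peaks)
# I/Q peak frequencies (Hz): [100.]
# real only peak frequencies (Hz): [-100.  100.]
```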
SUMMARY OF THE INVENTION

[0011] The present invention is devised to solve the above-mentioned problem of incompatibility between the ambiguity in sign and the reduction in size and cost, and has an object of providing a radar device allowing a sign of a Doppler frequency or a sign of a target angle to be determined as positive or negative even when only a real signal is obtained as a received signal.

The present invention provides a radar device for emitting a wave into space, receiving the wave reflected by an object present in the space, and performing signal processing on the received wave to measure the object. The radar device includes: an oscillator for generating the wave at a plurality of transmission frequencies; a transmitting antenna for emitting the wave generated from the oscillator into the space; a receiving antenna for receiving an incoming wave; a receiver for detecting the received wave received by the receiving antenna to generate a real received signal; a Fourier transform unit for performing a Fourier transform on the real received signal generated from the receiver in a time direction or an element direction; and a spectral peak detecting unit for receiving an input of a result of the Fourier transform from the Fourier transform unit to extract peak complex signal values of Doppler frequency or angle points at which an amplitude is maximum. Also, the radar device includes: a distance calculating unit for storing the peak complex signal values from the spectral peak detecting unit, which are obtained by using the plurality of transmission frequencies, and for calculating a distance to the reflecting object based on the stored peak complex signal values to output the obtained distance as a measured distance value; and a distance determining unit for determining validity of the measured distance value obtained from the distance calculating unit and for outputting the measured distance value calculated in the distance calculating unit and the Doppler frequency or angle detected in the spectral peak detecting unit according to a result of determination.

According to the present invention, even when only a real signal is obtained as a received signal, it can be determined whether the sign of a Doppler frequency or a target angle is positive or negative.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1 is a block diagram illustrating a configuration of a radar device according to a first embodiment of the present invention;

FIG. 2 is a view illustrating the principle of the radar device according to the first embodiment of the present invention;

FIG. 3 is a block diagram illustrating a configuration of a radar device according to a second embodiment of the present invention;

FIG. 4 is a flowchart illustrating an operation procedure of the radar device according to the second embodiment of the present invention;

FIG. 5 is a block diagram illustrating a configuration of a radar device according to a third embodiment of the present invention;

FIG. 6 is a view illustrating the principle of the radar device according to the third embodiment of the present invention;

FIG. 7 is a block diagram illustrating a configuration of a radar device according to a fourth embodiment of the present invention;

FIG. 8 is a flowchart illustrating an operation procedure of the radar device according to the fourth embodiment of the present invention;

FIG. 9 is a block diagram illustrating a configuration of a radar device according to a fifth embodiment of the present invention;
FIG. 10 is a view illustrating the principle of the radar device according to the fifth embodiment of the present invention;

FIG. 11 is a block diagram illustrating a configuration of a radar device according to a sixth embodiment of the present invention; and

FIG. 12 is a view illustrating the principle of the radar device according to the sixth embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

First Embodiment

[0026] FIG. 1 is a block diagram illustrating a configuration of a radar device according to a first embodiment of the present invention. The radar device illustrated in FIG. 1 includes an oscillator 1, a divider 2, a transmitting antenna 3, a receiving antenna 4, a receiver 5, and an A/D converter 6. The oscillator 1 generates a transmission wave. The divider 2 divides the transmission wave output from the oscillator 1. One of the transmission waves output from the divider 2 is input to the transmitting antenna 3, which in turn emits the transmission wave into space. The receiving antenna 4 receives a reflected wave generated by the transmission wave reflected by an object present in the space to obtain a received wave. The received wave is input from the receiving antenna 4 to the receiver 5, where the received wave is mixed with the transmission wave input from the divider 2 to generate a received signal. The A/D converter 6 performs analog-to-digital (A/D) conversion on the received signal output from the receiver 5 to generate a digital received signal.

The radar device also includes a Fourier transform unit 7, a spectral peak detecting unit 8, a distance calculating unit 9, and a distance sign determining unit 10. The Fourier transform unit 7 performs a Fourier transform on the received signal to calculate the Fourier transform of the received signal. The spectral peak detecting unit 8 receives the received signal Fourier transform input from the Fourier transform unit 7 to output the Doppler frequencies at which the amplitude has a maximum value and the complex amplitude values at those Doppler frequency points. The distance calculating unit 9 receives and stores the complex amplitudes input from the spectral peak detecting unit 8, and uses the complex amplitudes obtained by transmitting transmission waves at different transmission frequencies, thereby calculating a distance to a reflecting object. The distance sign determining unit 10 serves as a distance determining unit: it determines the validity of the measured distance value obtained in the distance calculating unit 9 and outputs the measured distance value obtained in the distance calculating unit 9 and the Doppler frequency calculated in the spectral peak detecting unit 8 according to the result of determination. Specifically, the distance sign determining unit 10 determines whether the sign of the measured distance value obtained in the distance calculating unit 9 is positive or negative, and only when the measured distance value is positive does it output the Doppler frequency calculated in the spectral peak detecting unit 8.

Next, an operation of the radar device according to the first embodiment will be described. The oscillator 1 generates a transmission wave. A transmission frequency band frequently used in a radar device is a microwave band or a millimeter wave band. In the present invention, however, the transmission frequency of the radar device is not particularly limited.
Hereinafter, the description will be given supposing that the transmission wave is a radio wave. However, the present invention is similarly applicable to the use of a laser beam, which is a kind of electromagnetic wave, that is, to a laser radar. Furthermore, the application of the present invention is not limited to radars using electromagnetic waves; the present invention is also applicable to a radar using a sonic wave (SODAR). The transmission frequency of the transmission wave output from the oscillator 1 is switched between multiple patterns on a time-division basis. For simplification, the transmission frequency herein is alternately switched between two patterns.

The transmission wave generated in the oscillator 1 is input to the divider 2. The divider 2 divides the transmission wave into multiple (here, two) transmission waves and outputs the obtained transmission waves. One of the transmission wave outputs obtained by the division is output to the transmitting antenna 3. The other transmission wave output is output to the receiver 5 as a local wave. The transmitting antenna 3, serving as a transmission element, emits the transmission wave input from the divider 2 into space. The emitted transmission wave is reflected by a reflecting object present in the space. A part of the resultant reflected wave returns to the position of the radar device. The reflected wave reaching the position of the radar device is captured by the radar device through the receiving antenna 4. Herein, the reflected wave captured by the receiving antenna 4 is referred to as a received wave. The receiver 5 mixes the received wave and the transmission wave to generate a received signal having a frequency corresponding to a difference between the transmission wave and the received wave (difference frequency). The difference frequency is equal to a Doppler frequency of the reflecting object. The receiver 5 may include an amplifier as needed; however, the amplifier is not explicitly illustrated in FIG. 1 because it does not limit the system employed by the radar device according to the present invention. The receiver 5 outputs the received signal generated at each of the frequencies to the A/D converter 6. The A/D converter 6 performs analog-to-digital conversion on the input received signal to generate a digital received signal.

It is assumed that a transmission wave at a transmission angular frequency $\omega_1$ is expressed by a time-series signal given by the following Formula (1):

[Formula 1]
$s_T(t) = A_T \cos(\omega_1 t)$  (1)

where $t$ is time and $A_T$ is an amplitude. The received wave is expressed by the following Formula (2). As expressed by Formula (2), the angular frequency of the received wave is shifted from that of the transmission wave by a Doppler angular frequency $\omega_D$, and a phase shift proportional to a distance $r$ is generated in the received wave:

[Formula 2]
$s_{RF}(t) = A_R \cos\left\{(\omega_1 + \omega_D)t - \frac{2\omega_1}{c}r + \theta_r\right\}$  (2)

where $A_R$ is an amplitude of the received wave, and $\theta_r$ is a constant representing an initial phase of the received wave when the distance to the reflecting object is 0, determined by a length of a power feeding path for the received wave or a phase characteristic of radio wave reflection by a target. It is assumed that a transmission wave component (local signal) input from the divider 2 to the receiver 5 is expressed by the following Formula (3):
[Formula 3]
$s_L(t) = A_L \cos(\omega_1 t + \theta_L)$  (3)

where $A_L$ is an amplitude of the local signal and $\theta_L$ is a constant representing an initial phase of the local signal, which is determined by a length of a power feeding path of the local signal. The received signal output from the receiver 5 is obtained by multiplying $s_{RF}(t)$ in Formula (2) by $s_L(t)$ in Formula (3) and then removing a harmonic component from the result of multiplication. The received signal is expressed by the following Formula (4):

[Formula 4]
$s_{R1}(t) = \frac{A}{2}\cos\left(\omega_D t - \frac{2\omega_1}{c}r + \theta_0\right)$  (4)

where $A = A_R A_L$ and $\theta_0 = \theta_r - \theta_L$. Formula (4) expressed with exponential functions results in the following Formula (5):

[Formula 5]
$s_{R1}(t) = \frac{A}{4}\left\{e^{\,j\left(\omega_D t - \frac{2\omega_1}{c}r + \theta_0\right)} + e^{-j\left(\omega_D t - \frac{2\omega_1}{c}r + \theta_0\right)}\right\}$  (5)

Assuming that the signal component obtained by performing the Fourier transform on the received signal $s_{R1}(t)$ and then extracting the component at the angular frequency $\omega_D$ is $s_{R1+}$, and the signal component obtained by performing the Fourier transform on $s_{R1}(t)$ and then extracting the component at the angular frequency $-\omega_D$ is $s_{R1-}$, the signal components are expressed by the following Formulae (6) and (7), respectively:

[Formula 6]
$s_{R1+} = A' e^{\,j\left(-\frac{2\omega_1}{c}r + \theta_0\right)}$  (6)

[Formula 7]
$s_{R1-} = A' e^{-j\left(-\frac{2\omega_1}{c}r + \theta_0\right)}$  (7)

where $A'$ is an amplitude of the extracted signal component, obtained by multiplying the amplitude of the signal expressed by Formula (5) by the gain of the Fourier transform. $s_{R1+}$ and $s_{R1-}$ are the complex amplitudes of the peaks detected by the spectral peak detecting unit 8: $s_{R1+}$ is a true signal component, and $s_{R1-}$ is a false signal component generated by the ambiguity in the sign of the Doppler frequency. The false signal component is generated by the absence of I/Q detection in the receiver 5. The phase of the true signal component $s_{R1+}$ is expressed by the following Formula (8):

[Formula 8]
$\varphi_{R1+} = -\frac{2\omega_1}{c}r + \theta_0$  (8)

As in Formula (8), the phase of the true signal component when the transmission angular frequency is $\omega_2$ is expressed by the following Formula (9):

[Formula 9]
$\varphi_{R2+} = -\frac{2\omega_2}{c}r + \theta_0$  (9)

The calculation of the phase difference between the phase obtained by Formula (8) and that obtained by Formula (9) results in the following Formula (10):

[Formula 10]
$\Delta\varphi_{R+} = \varphi_{R2+} - \varphi_{R1+} = -\frac{2\Delta\omega}{c}r = -\frac{4\pi\Delta f}{c}r$  (10)

where $\Delta\omega = \omega_2 - \omega_1$ and $\Delta f = \Delta\omega/2\pi$. Therefore, the use of the relation expressed by Formula (10) allows the distance $r$ to be calculated. Specifically, since the phase difference between the complex amplitudes obtained by the observation using two transmission frequencies is proportional to the distance to the reflecting object, the distance calculating unit 9 calculates the distance to the reflecting object by the following Formula (11):

[Formula 11]
$r = -\frac{c\,\Delta\varphi_{R+}}{4\pi\Delta f}$  (11)

Note that the calculation of the distance described above is premised on the use of a Doppler frequency component with the correct sign. In practice, the receiver 5 does not conduct I/Q detection in the radar device having the configuration shown in FIG. 1; the output received signal is therefore a real signal. Accordingly, the amplitude of the Doppler spectrum is symmetrical about the Doppler frequency of 0, thereby causing ambiguity in the sign of the Doppler frequency. Specifically, even though the true signal component $s_{R1+}$ should be used, there is a possibility that the false signal component $s_{R1-}$ is erroneously used. For example, when the positive Doppler frequency component is erroneously extracted while the true Doppler frequency is negative, the extracted component is $s_{R1-}$ expressed by Formula (7).
Similarly, a phase difference $\Delta\varphi_{R-}$ between $s_{R1-}$ and $s_{R2-}$, where $s_{R2-}$ is obtained through the observation using the transmission angular frequency $\omega_2$, is expressed by the following Formula (12):

[Formula 12]
$\Delta\varphi_{R-} = \frac{2\Delta\omega}{c}r = \frac{4\pi\Delta f}{c}r$  (12)

By performing the same distance calculating process as that of Formula (11) on $\Delta\varphi_{R-}$ expressed by Formula (12), a measured distance value $r'$ is given by the following Formula (13):

[Formula 13]
$r' = -\frac{c\,\Delta\varphi_{R-}}{4\pi\Delta f} = -r$  (13)

As can be understood from Formula (13), when the distance is calculated using the Doppler frequency component with the incorrect sign, the obtained measured distance value is negative. Therefore, based on a negative measured distance value, it can be determined that the spectral peak of the Doppler frequency with the incorrect sign was erroneously detected.

FIG. 2 schematically illustrates the above description. FIG. 2 assumes a situation in which two Doppler spectral peaks 110 and 111 appear, having the same absolute value of the Doppler frequency but different signs. For the Doppler frequency at the peak 110, it is assumed that a complex amplitude 112 is obtained for the use of a transmission frequency f1 while a complex amplitude 113 is obtained for the use of a transmission frequency f2. For the Doppler frequency at the peak 111, it is assumed that a complex amplitude 114 is obtained for the use of the transmission frequency f1 while a complex amplitude 115 is obtained for the use of the transmission frequency f2. When the peak 110 is selected, the phase difference calculated by Formula (10) is positive, and therefore the distance calculated by Formula (11) is negative. Specifically, assuming the positive Doppler frequency, a negative distance is calculated. On the other hand, when the peak 111 is selected, the phase difference calculated by Formula (10) is negative, and therefore the distance calculated by Formula (11) is positive. Specifically, assuming the negative Doppler frequency, a positive distance is calculated. Based on the above results, it can be determined that the negative Doppler frequency is correct in the situation illustrated in FIG. 2. In view of the above-mentioned characteristic, the peaks are detected both for the positive Doppler frequency and the negative Doppler frequency, and only when the calculation of distance provides a positive value are the measured distance value and the Doppler frequency output. In this manner, the correct Doppler frequency is output.

Since $\Delta\varphi_{R+}$ is a value of phase, the interval length which allows the calculation of the phase difference without ambiguity is $2\pi$. In the radar device according to the first embodiment, when the interval length of $2\pi$ is expressed as $-\pi$ to $\pi$, the interval from $-\pi$ to 0 is assigned to a positive distance, whereas the interval from 0 to $\pi$ is assigned to a negative distance. When the assumed maximum distance is $r_{max}$, the difference $\Delta f$ between the transmission frequencies is set as expressed by Formula (14):

[Formula 14]
$\Delta f \leq \frac{c}{4 r_{max}}$  (14)

In this embodiment, two transmission frequencies are used to obtain a distance from the phase difference obtained at the two frequencies. In this case, the process is performed supposing that one target is contained in the extracted signal component $s_{R1+}$. If a plurality of targets is supposed to be contained in the received signal, a distance calculating method using three or more transmission frequencies may be used.
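To make the first embodiment concrete, the following NumPy sketch implements the decision rule of Formulae (10) to (13) under simplifying assumptions (a single target, two transmission frequencies, and Δf satisfying Formula (14)). The function and variable names are invented, and this is an illustration rather than the patented implementation.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def dual_frequency_measurement(sig_f1, sig_f2, fs, delta_f):
    """Return (doppler_hz, range_m) candidates whose computed distance is
    positive: per Formula (13), the mirror peak of a real signal yields
    r' = -r and is rejected."""
    n = len(sig_f1)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    s1, s2 = np.fft.fft(sig_f1), np.fft.fft(sig_f2)

    accepted = []
    for half in (freqs > 0, freqs < 0):        # candidate peaks of either sign
        k = np.argmax(np.abs(s1) * half)       # strongest bin in this half
        dphi = np.angle(s2[k] * np.conj(s1[k]))        # Formula (10)
        r = -C * dphi / (4 * np.pi * delta_f)          # Formula (11)
        if r > 0:                                      # keep only the valid sign
            accepted.append((float(freqs[k]), float(r)))
    return accepted

# Simulated check: a target at 500 m with a +100 Hz Doppler shift, observed
# at two 24 GHz-band transmission frequencies 100 kHz apart (all invented).
fs, n, f_d, r_true, delta_f = 1000.0, 200, 100.0, 500.0, 1.0e5
t = np.arange(n) / fs
def rx(f_tx):  # real received signal per Formula (4), constant phases omitted
    return np.cos(2 * np.pi * f_d * t - 4 * np.pi * f_tx * r_true / C)
print(dual_frequency_measurement(rx(24.0e9), rx(24.0e9 + delta_f), fs, delta_f))
# -> [(100.0, 500.0)] : only the true-sign candidate survives
```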
For example, as described in "Multiple Target Detection using Stepped Multiple Frequency Interrupted CW Radar" by Takayuki Inaba, IEICE Transactions B, Vol. J89-B, No. 3, pp. 373-383, 2006, for the observation with three or more transmission frequencies, a signal train is created in the transmission frequency direction. By performing spectrum analysis (for example, MUSIC processing) on the signal train, the distance calculating process can be performed for a plurality of targets.

As a method of detecting the spectral peak, a method of detecting the maximum value of the spectrum has been described above. When the signal component can be regarded as a line spectrum, even such a simple method does not cause any problem. However, if the target speed has fluctuations or the hardware of the radar device is unstable, the spectral peak spreads over a range of frequencies. If receiver noise is superimposed on such a spectrum, a plurality of small local maxima may appear within a single spectral peak. In such a case, the use of a technique such as the moment method, for example, as disclosed in "3.1.2 Estimators of Spectral Moments" by H. Sauvageot in Radar Meteorology, Artech House, 1992, allows the center frequency of the spread spectral peak to be appropriately calculated.

As described above, the radar device according to the first embodiment can measure the Doppler frequency with the correct sign even when the receiver is provided only for the real component. Therefore, the number of components is reduced as compared with the use of the I/Q detection method, which requires two receivers, one for the real component and the other for the imaginary component. Accordingly, the radar device of this embodiment has an advantage in reducing cost.

Second Embodiment

[0057] In the first embodiment described above, the peaks are detected in both the positive Doppler frequency and the negative Doppler frequency, and only when the result of the calculation of a distance is a positive value are the measured distance value and the Doppler frequency output. This second embodiment discusses a configuration of a radar device that detects a peak in only one of the positive and negative Doppler frequency domains and then, based on the sign of the calculated distance, corrects the sign of the Doppler frequency.

FIG. 3 is a block diagram illustrating a configuration of a radar device according to the second embodiment of the present invention. In the configuration illustrated in FIG. 3 according to the second embodiment, the same components as those in the first embodiment shown in FIG. 1 are denoted by the same reference numerals, and the description thereof will be omitted. In contrast with the configuration illustrated in FIG. 1 according to the first embodiment, the configuration illustrated in FIG. 3 according to the second embodiment is provided with a Doppler frequency sign reversing unit 11 in addition to the distance sign determining unit 10 as the distance determining unit. The Doppler frequency sign reversing unit 11 reverses the sign of the Doppler frequency obtained in the spectral peak detecting unit 8 when the measured distance value is determined as negative in the distance sign determining unit 10. Specifically, in the second embodiment, the distance determining unit consists of the distance sign determining unit 10 and the Doppler frequency sign reversing unit 11.
The distance sign determining unit 10 determines whether the measured distance value calculated in the distance calculating unit 9 is positive or negative and outputs the measured distance value obtained in the distance calculating unit 9. The Doppler frequency sign reversing unit 11 reverses the sign of the Doppler frequency at which the peak complex signal value obtained in the spectral peak detecting unit 8 is extracted, when the measured distance value calculated in the distance calculating unit 9 is determined as negative by the distance sign determining unit 10, and then outputs the Doppler frequency with the reversed sign.

Next, an operation of the radar device according to the second embodiment will be described. The operation from the generation of the transmission wave in the oscillator 1 to the Fourier transform performed in the Fourier transform unit 7 is the same as that in the first embodiment described above. Since the amplitude distribution of the received signal Fourier transform obtained from the Fourier transform unit 7 is symmetrical about the Doppler frequency of 0, the peak detecting process in the spectral peak detecting unit 8 is conducted only in the domain where the Doppler frequency is positive. When the true value of the Doppler frequency is positive, the measured distance value obtained in the distance calculating unit 9 is positive, that is, a correct value. On the contrary, when the true value of the Doppler frequency is negative, the measured distance value obtained in the distance calculating unit 9 is negative. However, since the result of the measured distance is incorrect merely in its sign, the absolute value is output as the measured distance value. A negative measured distance value obtained in the distance calculating unit 9 means that the sign of the detected Doppler frequency is opposite, i.e., incorrect; the absolute value of the Doppler frequency, however, is correct. Therefore, when the sign of the distance input to the distance sign determining unit 10 is negative, the Doppler frequency sign reversing unit 11 reverses the sign of the Doppler frequency input from the spectral peak detecting unit 8 to negative and then outputs the Doppler frequency with the reversed sign. On the other hand, when the sign of the distance is positive, the Doppler frequency input from the spectral peak detecting unit 8 is directly output because the correct Doppler frequency is obtained.

FIG. 4 illustrates the above operation in the form of a flowchart. In Step S001, the Fourier transform is performed on the received signal in the Fourier transform unit 7. In Step S002, a spectral peak is detected in the spectral peak detecting unit 8; the spectral peak detecting process is performed only for the frequency interval in which the Doppler frequency is positive. In Step S003, the distance is calculated (a distance measurement process is performed) in the distance calculating unit 9 with Formula (11), using the result of observation with the dual-frequency method. In Step S004, the sign of the measured distance value obtained in Step S003 is determined by the distance sign determining unit 10. If the sign of the measured distance value is negative, the operation proceeds to Step S005. In Step S005, the sign of the result of distance measurement is reversed to positive. In Step S006, the sign of the Doppler frequency is reversed in the Doppler frequency sign reversing unit 11.
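As a companion to the flowchart of FIG. 4, here is a sketch of Steps S001 to S006 under the same simplifying assumptions as the earlier sketch (NumPy, invented names, one target): the peak search is confined to the positive-frequency half, and a negative computed distance triggers the sign flips of Steps S005 and S006.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def measure_positive_half(sig_f1, sig_f2, fs, delta_f):
    """Second-embodiment sketch: search only half the spectrum and
    correct signs afterwards."""
    n = len(sig_f1)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    s1, s2 = np.fft.fft(sig_f1), np.fft.fft(sig_f2)   # S001: Fourier transform

    k = np.argmax(np.abs(s1) * (freqs > 0))           # S002: positive half only
    f_d = float(freqs[k])
    dphi = np.angle(s2[k] * np.conj(s1[k]))
    r = -C * dphi / (4 * np.pi * delta_f)             # S003: Formula (11)
    if r < 0:                                         # S004: sign check
        r, f_d = -r, -f_d                             # S005/S006: flip both signs
    return f_d, r
```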
As described above, in this second embodiment, the interval of the Doppler frequency that is the target of the spectral peak detection is halved. Therefore, in addition to the effects of the first embodiment, the second embodiment provides the effect of reducing the amount of calculation.

Third Embodiment

[0064] In the first embodiment described above, an example of the configuration of the radar device applicable to the use of a continuous wave as the transmission wave has been described. An embodiment of the radar device in which the transmission wave is subjected to pulse modulation will now be described. FIG. 5 is a block diagram illustrating a configuration of a radar device according to the third embodiment of the present invention. In the configuration illustrated in FIG. 5 according to the third embodiment, the same components as those of the configuration illustrated in FIG. 1 according to the first embodiment are denoted by the same reference numerals, and the description thereof will be omitted. In contrast with the configuration illustrated in FIG. 1 according to the first embodiment, the third embodiment illustrated in FIG. 5 further includes a pulse modulating unit 101, a distance selecting unit 102, and a distance range determining unit 103 serving as the distance determining unit.

The pulse modulating unit 101 performs pulse modulation on the transmission wave output from the oscillator 1. The distance selecting unit 102 extracts only the digital received signals sampled at a predetermined difference between a transmission time and a reception time from the digital received signals obtained in the A/D converter 6 (in other words, it selects the digital received signals having the same difference between the transmission time and the reception time), so as to output only the received signals corresponding to the received wave reflected at a specific distance, and it also outputs a distance estimate corresponding to the difference between the transmission time and the reception time. The distance range determining unit 103 compares the measured distance value calculated in the distance calculating unit 9 and the distance estimate selected in the distance selecting unit 102, and outputs the measured distance value and the Doppler frequency only when the difference obtained as the result of comparison is equal to or smaller than a predetermined value.

Next, an operation of the radar device according to the third embodiment will be described. The oscillator 1 generates a continuous transmission wave. The pulse modulating unit 101 performs pulse modulation on the transmission wave. The transmission frequency of the transmission wave output from the oscillator 1 is switched between multiple patterns on a time-division basis. Herein, for simplification, the transmission frequency is alternately switched between two patterns. The transmission wave generated in the oscillator 1 is input to the divider 2. The divider 2 divides the transmission wave into multiple (two) transmission waves and then outputs the obtained transmission waves. One of the transmission wave outputs obtained by the division is output to the receiver 5 as a local wave. The other transmission wave output obtained by the division is subjected to pulse modulation in the pulse modulating unit 101, and then is emitted from the transmitting antenna 3 into space. The emitted transmission wave is reflected by the reflecting object present in the space.
A part of the resultant reflected wave returns to the position of the radar device. The reflected wave reaching the position of the radar device is captured by the receiving antenna 4 into the radar device as a received wave. In this third embodiment, the emission of the transmission wave and the capture of the received wave are performed at different times because the transmission wave is subjected to pulse modulation. Therefore, a single antenna may be used in a time-divisional manner so as to realize the transmitting antenna and the receiving antenna with one antenna.

The receiver 5 mixes the received wave and the transmission wave to generate a received signal having a frequency corresponding to a difference between the two waves (difference frequency). The difference frequency is equal to a Doppler frequency of the reflecting object. The receiver 5 outputs the received signal generated at each of the transmission frequencies to the A/D converter 6. The A/D converter 6 performs analog-to-digital conversion on the input received signal to generate the resultant digital received signal. In this third embodiment, the transmission wave is subjected to pulse modulation. Therefore, a time difference between the transmission and the reception, that is, a delay time of the received signal, is proportional to the distance to the reflecting object. Thus, based on the delay time of the pulsed received signal (received pulse), the distance to the reflecting object can be obtained. By setting the sampling period of the A/D converter 6 to substantially the same level as the pulse width, a received signal separated for each distance resolution determined by the pulse width can be obtained.

The distance selecting unit 102 selects, from the signals sampled in the A/D converter 6, only the digital received signal obtained within the delay time corresponding to a distance of interest. As a result, only the received signal corresponding to the reflecting object present within a distance interval determined by the pulse width is selected. Signal processing can be conducted for an arbitrary distance interval by changing the distance interval to be selected and repeating the subsequent processing for each interval. The selected received signal is output to the Fourier transform unit 7. The selected distance interval is output to the distance range determining unit 103.

The Fourier transform unit 7 performs the Fourier transform on the received signal selected by the distance selecting unit 102 to calculate the received signal Fourier transform. Furthermore, as in the operation according to the first embodiment described above, the spectral peak detecting unit 8 extracts the complex amplitudes of the Doppler frequency at which the amplitude of the received signal Fourier transform has the maximum value. Then, the distance is calculated in the distance calculating unit 9. Up to this step, the operation is the same as that described in the first embodiment. However, this third embodiment differs from the first embodiment in that the received signal used for calculating the distance corresponds only to the distance interval (distance measurement target interval) set in the distance selecting unit. The distance is calculated in the distance calculating unit 9 from the phase difference between the complex amplitudes obtained through the observation with a plurality of transmission frequencies. The phase difference is proportional to the distance.
However, the phase difference that can be measured without ambiguity is only within an interval having an interval length of $2\pi$. In the first embodiment described above, by assigning the interval of $-\pi$ to 0 to a positive distance and the interval of 0 to $\pi$ to a negative distance, it is determined based on a negative distance that the sign of the Doppler frequency is not true, i.e., is incorrect. Since the situation satisfying the condition defined by Formula (14) is premised in the first embodiment, it can be assumed that a target is not present at a distance longer than the distance corresponding to the phase difference of $\pi$. However, for example, when the reflected wave from a reflecting object present at a distance corresponding to a phase difference of $-1.5\pi$ can be received, that phase difference cannot be distinguished from a phase difference of $0.5\pi$. Therefore, it cannot be determined based merely on the sign of the distance whether the sign of the Doppler frequency is correct or incorrect.

In this third embodiment, the transmission wave is subjected to pulse modulation. Therefore, as described above, the distance interval of the received signal to be extracted can be limited in the distance selecting unit 102, and the comparison between the distance interval set in the distance selecting unit 102 and the result of distance measurement in the distance calculating unit 9 allows the determination of validity of the measured distance value. Specifically, the distance range determining unit 103 operates so as to determine the validity of the measured distance value, and thereby the sign of the Doppler frequency, in the following manner. The distance calculating unit 9 can obtain the result of distance measurement for two cases, that is, the case where the Doppler frequency is positive and the case where the Doppler frequency is negative. The distance range determining unit 103 can determine that a result of distance measurement obtained within the distance interval set in the distance selecting unit 102 has been obtained by using the correct Doppler frequency component. Therefore, only when the correct result of distance measurement is obtained are the measured distance value and the Doppler frequency output.

FIG. 6 is a schematic view illustrating the above principle. In the radar device assumed in FIG. 6, a phase difference of 0 to $-\pi$ is assigned to an interval which covers range gates 1 to 3 (an interval 123), whereas a phase difference of $-\pi$ to $-2\pi$ is assigned to an interval which covers range gates 4 to 6 (an interval 124). At each of the transmission frequencies f1 and f2, it is assumed that a peak of the Doppler spectrum is detected at the range gate 2. The Doppler spectrum has a symmetrical shape for the positive and negative Doppler frequencies; herein, it is supposed that a peak is detected in the interval of the positive Doppler frequency. When the measured distance value obtained by extracting the respective complex amplitudes of the peaks at both the transmission frequencies f1 and f2 and then performing the processing expressed by Formula (11) is obtained as a distance 121 illustrated in FIG. 6, it can be determined that the result of distance measurement is correct, because the measured distance value is obtained in the range gate 2 included in the distance measurement target interval.
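In code, the gate test just illustrated is a short comparison. The sketch below uses invented names, and its candidates are (Doppler, distance) pairs such as those produced by the earlier dual-frequency sketch; it keeps only the candidate whose measured distance falls back into the range gate from which the received samples were selected, as the distance range determining unit 103 does.

```python
def select_by_range_gate(candidates, gate_lo, gate_hi):
    """Third-embodiment rule (distance range determining unit 103), sketched:
    from the (doppler_hz, range_m) candidates of both signs, keep only those
    whose measured distance falls inside the range gate [gate_lo, gate_hi)
    that the received samples were taken from."""
    return [(fd, r) for fd, r in candidates if gate_lo <= r < gate_hi]

# FIG. 6-style example (numbers invented): the samples came from range gate 2,
# here spanning 150-300 m. The true solution lands inside the gate (cf. the
# distance 121); the mirror solution does not (cf. the distance 120).
print(select_by_range_gate([(+100.0, 210.0), (-100.0, 690.0)], 150.0, 300.0))
# -> [(100.0, 210.0)]
```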
Specifically, when the result of distance measurement is obtained in the range gate in which the peak is detected, the sign of the Doppler frequency as well as the measured distance value can be regarded as correct. On the other hand, when the result of distance measurement is obtained as a distance 120, the result of distance measurement is out of the range gate 2 included in the distance measurement target interval. Therefore, it can be determined that the result of distance measurement is incorrect. Since the correct result of distance measurement can be obtained by the processing using the complex amplitude value of the peak detected within the negative Doppler frequency interval, the erroneous result of distance measurement may be discarded. Specifically, when the result of distance measurement is obtained out of the range gate in which the peak is detected, the sign of the Doppler frequency can be regarded as incorrect.

Within an interval having the distance resolution $\Delta r$ determined by the pulse width, if the radar characteristics do not produce any ambiguity in the calculation of the distance performed in the distance calculating unit 9, that is, if the measured distance value calculated from the positive Doppler frequency component and that calculated from the negative Doppler frequency component do not fall within the same interval of length $\Delta r$, the sign of the Doppler frequency can be determined. In this case, $\Delta f$ is set so as to satisfy the following Formula (15):

[Formula 15]
$\Delta f \leq \frac{c}{2\Delta r}$  (15)

By adjusting $\Delta f$, the distance interval length assigned to the phase difference interval of $2\pi$ can be set smaller. Therefore, since the distance error corresponding to the same phase error is reduced, the distance calculation accuracy in the distance calculating unit 9 can be improved. As described above, according to the radar device of this third embodiment, even when the receiver has a low-cost device configuration which generates only real signals, the distance can be measured without ambiguity in the sign of the target Doppler frequency. More specifically, it can be correctly measured whether the target is approaching or receding.

Fourth Embodiment

[0083] In the third embodiment described above, the distance results r1 and r2 denoted by the reference numerals 121 and 120 in FIG. 6 appear in a symmetrical positional relation with respect to the center of the distance interval, that is, the boundary between the range gates 3 and 4. Therefore, even when the distance is measured with the signal component of the Doppler frequency with the incorrect sign, the result of distance measurement can be corrected by using this symmetry. The embodiment for performing such a correction will now be described.

FIG. 7 is a block diagram illustrating a configuration of a radar device according to a fourth embodiment of the present invention. The same components of the configuration shown in FIG. 7 according to the fourth embodiment as those of the configuration shown in FIG. 5 according to the third embodiment are denoted by the same reference numerals, and the description thereof is herein omitted. In contrast with the configuration shown in FIG. 5 according to the third embodiment, the configuration shown in FIG. 7 according to the fourth embodiment includes a distance determining/correcting unit 104 and a distance range re-determining unit 105 in place of the distance range determining unit 103 as the distance determining unit.
Fourth Embodiment

[0083] In the third embodiment described above, the distance results r1 and r2, denoted by reference numerals 121 and 120 in FIG. 6, appear in symmetrical positions with respect to the center of the distance interval. Therefore, even when the distance is measured with the signal component of the Doppler frequency having the incorrect sign, the result of distance measurement can be corrected by using this symmetry. An embodiment performing such a correction will now be described. FIG. 7 is a block diagram illustrating a configuration of a radar device according to a fourth embodiment of the present invention. Components of the configuration shown in FIG. 7 that are the same as those of the third embodiment shown in FIG. 5 are denoted by the same reference numerals, and their description is omitted here. In contrast with the configuration of the third embodiment shown in FIG. 5, the fourth embodiment shown in FIG. 7 includes a distance determining/correcting unit 104 and a distance range re-determining unit 105, in place of the distance range determining unit 103, as the distance determining unit.

The distance determining/correcting unit 104 compares the distance estimate obtained in the distance selecting unit 102 with the measured distance value calculated in the distance calculating unit 9. When the difference obtained by the comparison is equal to or larger than a predetermined value (the two values are far from each other), the unit corrects the measured distance value on the assumption that the sign of the Doppler frequency is reversed, and outputs the corrected measured distance value. The distance range re-determining unit 105 compares the corrected measured distance value obtained in the distance determining/correcting unit 104 with the distance estimate, and outputs the corrected measured distance value, together with the sign-reversed Doppler frequency obtained by reversing the sign of the Doppler frequency, only when the difference obtained by the comparison is equal to or smaller than the predetermined value (the two values are close to each other).

The distance determining/correcting unit 104 first verifies whether or not the measured distance value falls within the distance measurement target interval, as in the case of the distance range determining unit 103 shown in FIG. 5. If the measured distance value falls within the distance measurement target interval, the measured distance value and the Doppler frequency are output. On the other hand, if the measured distance value does not fall within the distance measurement target interval, the sign of the Doppler frequency is assumed to be incorrect and the measured distance value is corrected. Specifically, the phase difference, which is calculated within the range of 0 to -2π, is first corrected, and the corrected measured distance value is then calculated from it. The phase differences before and after the correction (hereinafter referred to respectively as the "pre-correction phase difference" and the "post-correction phase difference") are positioned symmetrically with respect to -π. Therefore, the post-correction phase difference is obtained by subtracting the pre-correction phase difference from -2π, and the measured distance value is recalculated by using the post-correction phase difference thus obtained. In terms of the measured distance value, this process corresponds, in the example illustrated in FIG. 6, to shifting the measured distance value to the position symmetrical in distance about the boundary between range gates 3 and 4.

The distance range re-determining unit 105 determines whether or not the corrected measured distance value falls within the distance measurement target interval. If it does, it is appropriate to regard the corrected measured distance value as correct. If, however, the corrected measured distance value falls outside the distance measurement target interval, no measured distance value is output, because a correct measured distance value has not been obtained. Such a situation occurs, for example, when the distance is measured using a peak of the Doppler spectrum that was erroneously detected owing to thermal noise or the like.
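As a numerical illustration of the correction rule above: in the assignment of FIG. 6, a pre-correction phase difference of -1.6π (a distance within the interval covering range gates 4 to 6) is corrected to -2π - (-1.6π) = -0.4π, the value mirrored about -π, which corresponds to the symmetric position within the interval covering range gates 1 to 3.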
FIG. 8 is a flowchart illustrating an operation procedure of the radar device according to the fourth embodiment. In Step S101, the Fourier transform unit 7 performs the Fourier transform on the received signal. In Step S102, the spectral peak detecting unit 8 detects a spectral peak. The spectral peak detecting process is performed only for the positive Doppler frequency domain. In Step S103, the distance calculating unit 9 uses the result of the observation with the dual-frequency method to calculate the distance (the distance measurement process) using Formula (11). In Step S104, the measured distance value obtained in Step S103 is compared with the distance measurement target interval. If the measured distance value falls within the distance measurement target interval, that is, within the range gate where the peak is detected, the process is terminated because the result of distance measurement is considered correct. On the other hand, if the obtained measured distance value is outside the distance measurement target interval, the process proceeds to Step S105. In Step S105, assuming that the sign of the Doppler frequency is incorrect, the distance determining/correcting unit 104 corrects the measured distance value. In Step S106, the corrected measured distance value is compared with the distance interval set in the distance selecting unit 102. If the difference obtained by the comparison is equal to or less than a predetermined value, the process proceeds to Step S108; otherwise, it proceeds to Step S107. In Step S107, the measured distance value is regarded as still incorrect even after the correction, and the results of detection and distance measurement are deleted. In Step S108, the corrected measured distance value is regarded as correct and the sign of the Doppler frequency is reversed.
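The flow of FIG. 8 can be summarized in code. The sketch below is an illustration only: names are invented, the textbook dual-frequency relation again stands in for Formula (11), and the tolerance used in Step S106 is a free parameter not specified in this excerpt:

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def measure_with_sign_check(phase_diff, delta_f, gate, tol=0.0):
    """Steps S103-S108 of FIG. 8.

    phase_diff : phase difference in (-2*pi, 0] from the positive-Doppler peak
    delta_f    : transmission frequency difference f2 - f1 [Hz]
    gate       : (start, end) of the distance measurement target interval [m]
    tol        : acceptance tolerance for Step S106 [m]
    """
    def phase_to_range(dphi):
        return -C * dphi / (4.0 * np.pi * delta_f)     # stand-in for Formula (11)

    r = phase_to_range(phase_diff)                     # S103: distance measurement
    if gate[0] <= r <= gate[1]:                        # S104: inside target interval?
        return r, +1                                   # sign was correct; done
    r_corr = phase_to_range(-2.0 * np.pi - phase_diff) # S105: mirror phase about -pi
    if gate[0] - tol <= r_corr <= gate[1] + tol:       # S106: compare with set interval
        return r_corr, -1                              # S108: reverse the Doppler sign
    return None                                        # S107: discard the detection
```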
As described above, the fourth embodiment obtains, as does the third embodiment, the effect of improved distance accuracy in a radar device that performs pulse modulation. Furthermore, since the measured distance value obtained when the sign of the Doppler frequency is incorrect is corrected, the Doppler frequency interval subjected to the spectral peak detecting process can be limited to either the positive or the negative frequency domain, which has the further effect of reducing the amount of calculation in the signal processing.

Fifth Embodiment

[0091] The third and fourth embodiments described above suppose an ideal operation of the pulse modulating unit 101; specifically, that almost no transmission wave is output in the transmission-OFF state. In practice, owing to performance limitations of the pulse modulating unit 101, the transmission wave generally leaks at a low power level even during an output-OFF time. An embodiment applicable even to the case where the power level of this transmission-wave leakage is too high to ignore will now be described. FIG. 9 is a block diagram illustrating a configuration of the radar device according to the fifth embodiment of the present invention. Components in the configuration illustrated in FIG. 9 that are the same as those of the third embodiment illustrated in FIG. 5 are denoted by the same reference numerals, and their description is omitted here. In contrast with the configuration of the third embodiment illustrated in FIG. 5, the fifth embodiment illustrated in FIG. 9 further includes an amplitude determining unit 106 between the spectral peak detecting unit 8 and the distance calculating unit 9.

When multiple peak complex signal values are obtained at the same Doppler frequency in the Fourier transforms of received signals sampled at different differences between the transmission time and the reception time, the amplitude determining unit 106 compares each peak amplitude with the maximum amplitude value, that is, with the amplitude at the transmission-to-reception time difference giving the largest value, and deletes any peak complex signal value whose amplitude ratio to that maximum is smaller than a predetermined reference.

The reason is as follows. In the radar device according to the fifth embodiment, the transmission wave is output at a low power level even while transmission OFF is being instructed in the pulse modulating unit 101. Therefore, when peak complex signal values are obtained at the same Doppler frequency in the Fourier transforms of received signals sampled at different transmission-to-reception time differences, the amplitude of each peak complex signal value is compared with the amplitude at the time difference at which the maximum amplitude value is obtained, and the peak complex signal value at any time difference whose amplitude falls below the predetermined reference is deleted.

FIG. 10 illustrates the principle of the radar device according to the fifth embodiment. FIG. 10 is similar to FIG. 6, referred to in the third embodiment; it illustrates, however, the situation where the signal is also received in range gates other than range gate 2 because the transmission-OFF state in the pulse modulating unit 101 is incomplete. If the transmission-OFF state in the pulse modulating unit 101 were complete, the received signal would be detected only when range gate 2 is selected in the distance selecting unit 102. In the radar device according to the fifth embodiment, the received signal is also detected in the other range gates. When the received signal detected in range gate 5 is used to calculate the distance in the distance calculating unit 9, measured distance values are obtained within the interval of range gate 2 and within that of range gate 5, respectively. The measured distance value obtained within the interval of range gate 2 is correct; however, the measured distance value obtained within the interval of range gate 5 is erroneously determined to be correct, because the received signal is in fact detected in range gate 5. As a result, an incorrect result of distance measurement is output, and in addition, a Doppler frequency with the incorrect sign is output. However, in view of the result of detection obtained when range gate 2 is selected in the distance selecting unit 102, the possibility of obtaining an incorrect result of distance measurement when range gate 5 is selected can easily be recognized. Specifically, in the distance calculating process for the case where range gate 2 is selected in the distance selecting unit 102, a false measured distance value is obtained within the interval of range gate 5.
The same measured distance value is also obtained when range gate 5 is selected in the distance selecting unit 102. However, the power of the received signal when range gate 5 is selected is considerably smaller than when range gate 2 is selected. This is because, although the transmission OFF is incomplete, the power of the transmission-wave leakage during a transmission-OFF time is clearly smaller than the power transmitted during a transmission-ON time. Therefore, when a measured distance value is obtained within the interval of range gate 5 for the selection of range gate 5, it is checked whether the same measured distance value is obtained as a false measured distance value for the selection of another range gate. If the same measured distance value is obtained even when another range gate is selected, the amplitude of the received signal used for that calculation is compared with the amplitude of the received signal for the selection of range gate 5. If the amplitude of the received signal for the selection of the other range gate is larger, the measured distance value obtained with the selection of range gate 5 may be false, and the result may be eliminated. More specifically, in FIG. 10, the negative Doppler frequency is obtained in range gate 2, whereas the positive Doppler frequency is obtained in range gate 5, and both Doppler frequencies might be considered usable for the distance measurement. As a countermeasure, when spectral peaks at the same Doppler frequency are obtained in a plurality of range gates, the spectral peak having the larger amplitude is used for the distance measurement process.

The configuration illustrated in FIG. 9 is obtained by providing the amplitude determining unit 106 between the spectral peak detecting unit 8 and the distance calculating unit 9 in the configuration of the third embodiment illustrated in FIG. 5. The present invention can be carried out in the same manner by providing the amplitude determining unit 106 between the spectral peak detecting unit 8 and the distance calculating unit 9 in the configuration of the fourth embodiment illustrated in FIG. 7; with this configuration, the same effects are obtained. According to the fifth embodiment, the sign of the Doppler frequency can be determined correctly even if the performance of the pulse modulating unit 101 is not satisfactory. The fifth embodiment is therefore advantageous in that a low-cost component can be used as the pulse modulating unit 101.
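A sketch of the amplitude test, for illustration (the list layout and the threshold value are invented; the patent specifies only that the ratio is compared with a predetermined reference):

```python
import numpy as np

def filter_leakage_peaks(peaks, ratio_threshold=0.1):
    """Fifth-embodiment amplitude determination (illustrative sketch).

    peaks: list of (range_gate, complex_amplitude) pairs that share the same
    Doppler frequency. Peaks that are weak relative to the strongest one are
    attributed to transmission-OFF leakage and removed.
    """
    amps = np.array([abs(z) for _, z in peaks])
    strongest = amps.max()
    return [p for p, a in zip(peaks, amps) if a >= ratio_threshold * strongest]
```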
Sixth Embodiment

[0101] In the first to fifth embodiments described above, only one receiving antenna 4 is provided. In this sixth embodiment, the receiving antenna is composed of a plurality of receiving elements, and received beams are synthesized through signal processing. In this sixth embodiment, the ambiguity in the sign of the measured angle is resolved in addition to the ambiguity in the sign of the Doppler frequency. FIG. 11 is a block diagram illustrating a configuration of a radar device according to the sixth embodiment of the present invention. Components of the configuration illustrated in FIG. 11 according to the sixth embodiment that are the same as those of the third embodiment illustrated in FIG. 5 are denoted by the same reference numerals, and their description is omitted here.

In contrast with the configuration of the third embodiment illustrated in FIG. 5, in the sixth embodiment shown in FIG. 11 the receiving antenna is composed of receiving elements 4a to 4d. Receivers 5a to 5d are connected to the respective receiving elements 4a to 4d to provide a plurality of channels, and A/D converters 6a to 6d are connected to the respective receivers 5a to 5d. The Fourier transform unit includes a two-dimensional Fourier transform unit 107 for performing a two-dimensional Fourier transform on the received signal, which is now defined in two dimensions: a time direction and a receiving-element direction. The two-dimensional Fourier transform unit 107 performs a Fourier transform in the element direction on the real received signals that are generated by the plurality of receivers 5a to 5d and selected and output by the distance selecting unit 102, thereby obtaining the signal distribution in the angular direction. Moreover, the spectral peak detecting unit includes a two-dimensional spectral peak detecting unit 108, which detects the peak at which the amplitude of the two-dimensional Fourier transform of the received signal is maximum and outputs the complex amplitude, the Doppler frequency and the angle at that peak. Furthermore, the distance determining unit includes the distance range determining unit 103. The distance range determining unit 103 compares the distance estimate obtained in the distance selecting unit 102 with the measured distance value obtained in the distance calculating unit 9, and outputs the measured distance value calculated in the distance calculating unit 9, together with the Doppler frequency and the angle calculated in the two-dimensional spectral peak detecting unit 108, only when the difference obtained by the comparison is equal to or smaller than a predetermined value.

The operation from the generation of the transmission wave in the oscillator 1 to the selection of the distance in the distance selecting unit 102 is almost the same as in the embodiments described above. In this sixth embodiment, however, the plurality of receiving elements and the plurality of A/D converters constitute a multi-channel receiving system. The receiving elements 4a to 4d are arranged with their positions slightly offset from each other. As a result, a phase difference determined by the incoming angle of the received wave arises between the receiving elements, and signal processing using this phase-difference information yields the angle of the incoming wave. In other words, received beams can be synthesized by signal processing. This technique is referred to as digital beam forming (DBF) and is conventionally well known. More specifically, the Fourier transform in the element direction provides the distribution of the received signal in the angular direction. In the exemplary configuration illustrated in FIG. 11, the distance selecting unit 102 extracts data in the same set distance interval for all the channels, after which the two-dimensional Fourier transform unit 107 performs the two-dimensional Fourier transform, carrying out the Fourier transform in the time-axis direction and the element direction simultaneously. Since the received signal in each channel is a real signal, the result of the Fourier transform contains a sign ambiguity, as in the above-mentioned embodiments.
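The two-dimensional transform described above amounts, in outline, to the following (an illustrative sketch; the array shape and the names are assumptions, not the patent's exact processing):

```python
import numpy as np

def doppler_angle_map(x):
    """Two-dimensional Fourier transform for DBF.

    x: real-valued array of shape (n_samples, n_elements) -- slow-time samples
    from each receiving element within the selected range gate. Returns the
    2-D spectrum over (Doppler frequency, angle). Because x is real, the
    spectrum is conjugate-symmetric: each target produces two peaks placed
    symmetrically about the origin -- the sign ambiguity discussed in the text.
    """
    return np.fft.fftshift(np.fft.fft2(x))
```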
More specifically, as in the embodiments described above, ambiguity arises in the sign of the angle simultaneously with the ambiguity in the sign of the Doppler frequency. FIG. 12 illustrates how this sign ambiguity arises. As illustrated in the left part of FIG. 12, in the two-dimensional space spanned by the Doppler frequency and the angle, two spectral peaks are obtained for one reflecting object, at positions symmetrical with respect to the origin. The two-dimensional spectral peak detecting unit 108 detects a peak of the two-dimensional Fourier transform and then extracts the complex amplitude of the detected peak. The distance calculating unit 9 uses the complex amplitude values obtained at the same peak position (the same Doppler frequency and the same angle) to perform the same distance measurement process as in the embodiments described above. The distance range determining unit 103 verifies whether or not the measured distance value obtained in the distance calculating unit 9 falls within the distance measurement target interval set in the distance selecting unit 102. If the measured distance value falls within the distance measurement target interval, the measured distance value and the Doppler frequency are output. On the other hand, if the measured distance value does not fall within the distance measurement target interval, it is determined that the distance was measured using the peak of the Doppler frequency with the incorrect sign, and the result of distance measurement is discarded.

Besides the configuration illustrated in FIG. 11, the sixth embodiment can also be configured to correct the result of distance measurement on the assumption that the sign of the Doppler frequency is incorrect, re-determine the distance range, and then output the corrected result of distance measurement, as in the fourth embodiment described above with reference to FIGS. 7 and 8. Further, the sixth embodiment may be applied to the case of incomplete pulse modulation, as in the fifth embodiment described above with reference to FIGS. 9 and 10.

Moreover, in the sixth embodiment, the determination of the sign of the angle has been described for the case where the transmission wave is pulsed. Even when the transmission wave is not pulsed, as in the first embodiment illustrated in FIG. 1, the determination of the sign of an angle may be performed similarly. Specifically, the Fourier transform unit includes the two-dimensional Fourier transform unit 107 for performing the two-dimensional Fourier transform in the time direction and the element direction on the real received signals generated from the plurality of receivers. The spectral peak detecting unit includes the two-dimensional spectral peak detecting unit 108, which receives the result of the Fourier transform from the two-dimensional Fourier transform unit 107 and extracts the complex signal value of the peak at which the amplitude is maximum. The distance determining unit outputs the measured distance value calculated in the distance calculating unit, and the Doppler frequency and the angle detected in the spectral peak detecting unit, according to the result of the determination of the validity of the measured distance value from the distance calculating unit. With such a configuration, the sixth embodiment is applicable to any of the first to fifth embodiments. In the above description, two signs are determined: the sign of the Doppler frequency and that of the angle.
However, the radar device may have a configuration in which only the sign of the angle is determined. In this case, the Fourier transform is performed one-dimensionally, only in the receiving-channel direction. Specifically, in the first to fifth embodiments, as in FIG. 11, the radar device is configured as follows. A multi-channel receiving system including the plurality of receiving antennas 4a to 4d and the plurality of receivers 5a to 5d is provided. The Fourier transform unit 7 performs the Fourier transform in the element direction on the real received signals generated by the plurality of receivers 5a to 5d. The spectral peak detecting unit 8 receives the result of the Fourier transform and extracts the complex signal value of the peak at the angle at which the amplitude is maximum. As a result, the distance determining unit can output the angle, in place of the Doppler frequency, together with the measured distance value. Thus, the same effects as those of the first to fifth embodiments can be obtained. As described above, in addition to the effects of the embodiments described above, the sixth embodiment has the advantage that the angle with the correct sign can be output.
{"url":"http://www.faqs.org/patents/app/20080309546","timestamp":"2014-04-18T04:52:51Z","content_type":null,"content_length":"113661","record_id":"<urn:uuid:3d748d2d-a1f8-431c-a1c4-60b2d95d2b3f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
מ. רונן פלסר M. Ronen Plesser
Associate Professor of Physics and Mathematics
Center for Geometry and Theoretical Physics

My research is in String Theory, the most ambitious attempt yet at a comprehensive theory of the fundamental structure of the universe. In some (rather imprecise) sense, string theory replaces the particles that form the fundamental building blocks for conventional theories (the fields, or wave phenomena, we observe are obtained starting from particles when we apply the principles of quantum mechanics) with objects that are not point-like but extended in one dimension – strings. At present, the theory is not precisely formulated, as we still seek the conceptual and technical tools needed. The structures we do have in hand suggest that, when formulated precisely, the theory will provide a consistent framework encompassing the two greatest achievements of twentieth century theoretical physics: Einstein's general theory of relativity, which describes the gravitational forces between objects in terms of deformations of the geometry of spacetime; and quantum mechanics, a model of fundamental physics in which microscopic objects exhibit the properties of particles under some circumstances and those of waves under others. Both of these theories have been tested with extraordinary precision and yield predictions that agree with our observations of the physical universe. Relativistic effects are manifest at the largest scales in the universe, in the interactions of stars, galaxies, etc. The differences between a quantum mechanical description and a classical nineteenth century description of these objects are so small they can be neglected. Quantum effects dominate at the smallest scales – atoms and their constituents. In this realm, the effects of gravitation can be completely neglected. And yet, under extreme conditions of density, such as may obtain in the final instant of the evaporation of a black hole, both kinds of effects are important. A universal theory of physics thus requires a consistent quantum theory of gravity. Thus far, string theory is the most promising candidate for producing such a theory. Investigations of this theory have already yielded rich insights, and continue to produce more.

My own research centers on the crucial role played in the theory by geometric structures. There is an obvious role for geometry in a theory that incorporates gravitation, which as discussed above is tantamount to the geometry of spacetime. Related to this are several other, less obvious, geometric structures that play an important role in determining the physics of the theory. Indeed, advances in mathematics and in the physics of string theory have often been closely linked. An example of how the two fields have interacted in a surprising way is the ongoing story of mirror symmetry. A more detailed description of my research can be found here, and a list of my published papers here.

Students participate in my research in several modes. Because of the very technical nature of the work, most student research is performed by fairly advanced graduate students. Currently, two graduate students in the physics department, Sven Rinke and Ilarion Melnikov, are working towards their Ph.D. under my supervision. Details of our joint work may be found on the research page. Occasionally, outstanding, highly motivated undergraduates find a way to contribute to this work.
In the past, Mark Jackson and Chris Beasley have worked with me on their senior honors theses. You can find the theses here. Currently, David Marks, a junior mathematics major, is working with our group through a PRUV fellowship. In the fall of 2004 and the spring of 2003, I will be teaching Physics 342, an advanced course in quantum field theory. In the past, I have also taught introductory astronomy and advanced quantum mechanics, as well as a graduate class on classical mechanics. I enjoy sharing my love of science, and children are my favorite audience. Over the past five years, I have been developing an increasingly close and productive partnership with Durham public schools, beginning with Forest View elementary. Together, we have developed programs that take advantage of the knowledge and resources available through the Duke physics department to enhance science teaching. This work has evolved into a broad range of activities involving several schools and a large number of volunteers from the Duke community. For more details please contact me, or see here for more details.
{"url":"http://www.cgtp.duke.edu/~plesser/","timestamp":"2014-04-20T06:32:49Z","content_type":null,"content_length":"21060","record_id":"<urn:uuid:13d7b0d7-7ce8-4e51-8af6-11a69b82e84d>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: beginning programming advice

From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: beginning programming advice
Date: Tue, 16 Oct 2007 06:19:21 -0400

If you can write down the maximand (or minimand) for your problem, optimize() can work with it. It is not clear to me what that might be. Note in the documentation that "type v" optimizers allow you to work with a vector rather than a scalar function. optimize() is general enough to solve a multivariate optimization problem. From the online help: "In a type v0 evaluator, you return a column vector v such that colsum(v) = f(p)." If the elements of column vector v are the squared elements of ((y1 - f(x1)), (y2 - f(x2))), then minimizing the column sum of v is equivalent to solving the two nonlinear equations

y1 = f(x1)
y2 = f(x2)

where some elements of x are shared in x1, x2.

Kit Baum, Boston College Economics and DIW Berlin
An Introduction to Modern Econometrics Using Stata

On Oct 16, 2007, at 2:33 AM, statalist-digest wrote:

Thanks very much for your reply. I have examined the optimize() function you mention in your first proposed solution. Can this be called to solve two functions simultaneously? I follow how one would call it to optimize a single function. Can you add any details on how it would be used for two functions, each defined on similar variables, as in my two equations in two unknowns problem?
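To make the suggestion concrete, the same least-squares idea can be sketched outside Stata; the snippet below uses Python/SciPy purely as an illustration (the thread concerns Mata's optimize(), whose interface differs, and f, y1 and y2 here are invented placeholders):

```python
import numpy as np
from scipy.optimize import minimize

y1, y2 = 2.0, 5.0  # target values of the two equations (made-up data)

def f(p, x):
    """An example nonlinear function with shared parameters p = (a, b)."""
    a, b = p
    return a * np.exp(b * x)

def objective(p):
    """Sum of squared residuals -- the colsum(v) of a type v0 evaluator."""
    r = np.array([y1 - f(p, 1.0),   # residual of equation 1
                  y2 - f(p, 2.0)])  # residual of equation 2
    return np.sum(r ** 2)

res = minimize(objective, x0=np.array([1.0, 0.5]))
print(res.x)  # parameters that (approximately) solve both equations at once
```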
{"url":"http://www.stata.com/statalist/archive/2007-10/msg00537.html","timestamp":"2014-04-17T04:46:17Z","content_type":null,"content_length":"6737","record_id":"<urn:uuid:02440bc4-cceb-4b01-a734-129933fa382c>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
The Basic Elements of String Theory

Five key ideas are at the heart of string theory. Become familiar with these key elements of string theory right off the bat. Read on for the very basics of these five ideas of string theory in the sections below.

Strings and membranes

When the theory was originally developed in the 1970s, the filaments of energy in string theory were considered to be 1-dimensional objects: strings. (One-dimensional indicates that a string has only one dimension, length, as opposed to say a square, which has both length and height dimensions.) These strings came in two forms — closed strings and open strings. An open string has ends that don't touch each other, while a closed string is a loop with no open end. It was eventually found that these early strings, called Type I strings, could go through five basic types of interactions, as shown in this figure.

Type I strings can go through five fundamental interactions, based on different ways of joining and splitting.

The interactions are based on a string's ability to have ends join and split apart. Because the ends of open strings can join together to form closed strings, you can't construct a string theory without closed strings. This proved to be important, because the closed strings have properties that make physicists believe they might describe gravity. Instead of just being a theory of matter particles, physicists began to realize that string theory may just be able to explain gravity and the behavior of particles. Over the years, it was discovered that the theory required objects other than just strings. These objects can be seen as sheets, or branes. Strings can attach at one or both ends to these branes. A 2-dimensional brane (called a 2-brane) is shown in this figure.

In string theory, strings attach themselves to branes.

Quantum gravity

Modern physics has two basic scientific laws: quantum physics and general relativity. These two scientific laws represent radically different fields of study. Quantum physics studies the very smallest objects in nature, while relativity tends to study nature on the scale of planets, galaxies, and the universe as a whole. (Obviously, gravity affects small particles too, and relativity accounts for this as well.) Theories that attempt to unify the two theories are theories of quantum gravity, and the most promising of all such theories today is string theory.

Unification of forces

Hand-in-hand with the question of quantum gravity, string theory attempts to unify the four forces in the universe — electromagnetic force, the strong nuclear force, the weak nuclear force, and gravity — together into one unified theory. In our universe, these fundamental forces appear as four different phenomena, but string theorists believe that in the early universe (when there were incredibly high energy levels) these forces are all described by strings interacting with each other.

Supersymmetry

All particles in the universe can be divided into two types: bosons and fermions. String theory predicts that a type of connection, called supersymmetry, exists between these two particle types. Under supersymmetry, a fermion must exist for every boson and vice versa. Unfortunately, experiments have not yet detected these extra particles. Supersymmetry is a specific mathematical relationship between certain elements of physics equations. It was discovered outside of string theory, although its incorporation into string theory transformed the theory into supersymmetric string theory (or superstring theory) in the mid-1970s.
Supersymmetry vastly simplifies string theory’s equations by allowing certain terms to cancel out. Without supersymmetry, the equations result in physical inconsistencies, such as infinite values and imaginary energy levels. Because scientists haven’t observed the particles predicted by supersymmetry, this is still a theoretical assumption. Many physicists believe that the reason no one has observed the particles is because it takes a lot of energy to generate them. (Energy is related to mass by Einstein’s famous E = mc2 equation, so it takes energy to create a particle.) They may have existed in the early universe, but as the universe cooled off and energy spread out after the big bang, these particles would have collapsed into the lower-energy states that we observe today. (We may not think of our current universe as particularly low energy, but compared to the intense heat of the first few moments after the big bang, it certainly is.) Scientists hope that astronomical observations or experiments with particle accelerators will uncover some of these higher-energy supersymmetric particles, providing support for this prediction of string theory. Extra dimensions Another mathematical result of string theory is that the theory only makes sense in a world with more than three space dimensions! (Our universe has three dimensions of space — left/right, up/down, and front/back.) Two possible explanations currently exist for the location of the extra dimensions: • The extra space dimensions (generally six of them) are curled up (compactified, in string theory terminology) to incredibly small sizes, so we never perceive them. • We are stuck on a 3-dimensional brane, and the extra dimensions extend off of it and are inaccessible to us. A major area of research among string theorists is on mathematical models of how these extra dimensions could be related to our own. Some of these recent results have predicted that scientists may soon be able to detect these extra dimensions (if they exist) in upcoming experiments, because they may be larger than previously expected.
{"url":"http://www.dummies.com/how-to/content/the-basic-elements-of-string-theory.navId-404494.html","timestamp":"2014-04-16T04:37:28Z","content_type":null,"content_length":"56389","record_id":"<urn:uuid:3e1d723c-592b-49be-b296-142b62770317>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Berwyn, IL Algebra 2 Tutor
Find a Berwyn, IL Algebra 2 Tutor

...One main difference (besides focusing on algebra and geometry instead of precalc) included myself in attendance during the main lecture to aid students during in-class exercises and quizzes. These classes were a bit fast-paced in comparison to those during the main school year. In one summer we would hold classes for two entirely different groups of algebra pupils.
7 Subjects: including algebra 2, calculus, physics, geometry

...I am also a TA and I enjoy the teaching aspect of my program very much. I have been an active math tutor both professionally and for friends for the past few years and I love the experience of helping someone gain an understanding of mathematics. I hope to ignite the same love of the subject th...
22 Subjects: including algebra 2, calculus, geometry, statistics

...I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry

...I also help students prepare for the ACT. I provide my own materials, including actual ACT exams from previous years. I work on reviewing concepts, going through practice tests, and teaching students test taking strategies. I have a PhD in molecular genetics from the University of Illinois at Chicago, which I received in 2006.
21 Subjects: including algebra 2, reading, study skills, algebra 1

...They have to be, for in my line of work, we breathe deadlines. Calculating a design problem in an overly complicated and very time-consuming process, when there is instead a faster, simpler, and more efficient way to do it, is deeply frowned upon. Efficient and innovative ways of solving math problems are what I wish to share with and impart to a young student.
7 Subjects: including algebra 2, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/Berwyn_IL_algebra_2_tutors.php","timestamp":"2014-04-16T16:06:40Z","content_type":null,"content_length":"24170","record_id":"<urn:uuid:d1e6980c-12e0-471b-a433-bb240b48e2d3>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Common Core Math Standards - 8th Grade

MathScore EduFighter is one of the best math games on the Internet today. You can start playing for free!

MathScore aligns to the Common Core Math Standards for 8th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience. Want unlimited math worksheets? Learn more about our online math practice software. View the Common Core Math Standards at other levels.

The Number System

Know that there are numbers that are not rational, and approximate them by rational numbers.

1. Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number. (Repeating Decimals)

2. Use rational approximations of irrational numbers to compare the size of irrational numbers, locate them approximately on a number line diagram, and estimate the value of expressions (e.g., π^2). For example, by truncating the decimal expansion of √2, show that √2 is between 1 and 2, then between 1.4 and 1.5, and explain how to continue on to get better approximations.

Expressions and Equations

Work with radicals and integer exponents.

1. Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, 3^2 × 3^–5 = 3^–3 = 1/3^3 = 1/27. (Negative Exponents Of Fractional Bases, Multiplying and Dividing Exponent Expressions, Exponent Rules For Fractions)

2. Use square root and cube root symbols to represent solutions to equations of the form x^2 = p and x^3 = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that √2 is irrational. (Perfect Squares)

3. Use numbers expressed in the form of a single digit times a whole-number power of 10 to estimate very large or very small quantities, and to express how many times as much one is than the other. For example, estimate the population of the United States as 3 times 10^8 and the population of the world as 7 times 10^9, and determine that the world population is more than 20 times larger.

4. Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology. (Scientific Notation, Scientific Notation 2)

Understand the connections between proportional relationships, lines, and linear equations.

5. Graph proportional relationships, interpreting the unit rate as the slope of the graph. Compare two different proportional relationships represented in different ways. For example, compare a distance-time graph to a distance-time equation to determine which of two moving objects has greater speed.
6. Use similar triangles to explain why the slope m is the same between any two distinct points on a non-vertical line in the coordinate plane; derive the equation y = mx for a line through the origin and the equation y = mx + b for a line intercepting the vertical axis at b. (Graphs to Linear Equations)

Analyze and solve linear equations and pairs of simultaneous linear equations.

7. Solve linear equations in one variable. (Single Variable Equations, Single Variable Equations 2, Single Variable Equations 3, Linear Equations)

a. Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form x = a, a = a, or a = b results (where a and b are different numbers).

b. Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms. (Single Variable Equations, Single Variable Equations 2, Single Variable Equations 3, Linear Equations)

8. Analyze and solve pairs of simultaneous linear equations.

a. Understand that solutions to a system of two linear equations in two variables correspond to points of intersection of their graphs, because points of intersection satisfy both equations.

b. Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. For example, 3x + 2y = 5 and 3x + 2y = 6 have no solution because 3x + 2y cannot simultaneously be 5 and 6. (System of Equations Substitution, System of Equations Addition)

c. Solve real-world and mathematical problems leading to two linear equations in two variables. For example, given coordinates for two pairs of points, determine whether the line through the first pair of points intersects the line through the second pair. (Age Problems)

Functions

Define, evaluate, and compare functions.

1. Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output.^1

2. Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a linear function represented by a table of values and a linear function represented by an algebraic expression, determine which function has the greater rate of change.

3. Interpret the equation y = mx + b as defining a linear function, whose graph is a straight line; give examples of functions that are not linear. For example, the function A = s^2 giving the area of a square as a function of its side length is not linear because its graph contains the points (1,1), (2,4) and (3,9), which are not on a straight line.

Use functions to model relationships between quantities.

4. Construct a function to model a linear relationship between two quantities. Determine the rate of change and initial value of the function from a description of a relationship or from two (x, y) values, including reading these from a table or from a graph. Interpret the rate of change and initial value of a linear function in terms of the situation it models, and in terms of its graph or a table of values.
5. Describe qualitatively the functional relationship between two quantities by analyzing a graph (e.g., where the function is increasing or decreasing, linear or nonlinear). Sketch a graph that exhibits the qualitative features of a function that has been described verbally.

Geometry

Understand congruence and similarity using physical models, transparencies, or geometry software.

1. Verify experimentally the properties of rotations, reflections, and translations:

a. Lines are taken to lines, and line segments to line segments of the same length.

b. Angles are taken to angles of the same measure.

c. Parallel lines are taken to parallel lines.

2. Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures, describe a sequence that exhibits the congruence between them.

3. Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates.

4. Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar two-dimensional figures, describe a sequence that exhibits the similarity between them.

5. Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for similarity of triangles. For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so.

Understand and apply the Pythagorean Theorem.

6. Explain a proof of the Pythagorean Theorem and its converse.

7. Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions. (Pythagorean Theorem)

8. Apply the Pythagorean Theorem to find the distance between two points in a coordinate system.

Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres.

9. Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. (Cylinders)

Statistics and Probability

Investigate patterns of association in bivariate data.

1. Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive or negative association, linear association, and nonlinear association.

2. Know that straight lines are widely used to model relationships between two quantitative variables. For scatter plots that suggest a linear association, informally fit a straight line, and informally assess the model fit by judging the closeness of the data points to the line.

3. Use the equation of a linear model to solve problems in the context of bivariate measurement data, interpreting the slope and intercept. For example, in a linear model for a biology experiment, interpret a slope of 1.5 cm/hr as meaning that an additional hour of sunlight each day is associated with an additional 1.5 cm in mature plant height.

4. Understand that patterns of association can also be seen in bivariate categorical data by displaying frequencies and relative frequencies in a two-way table.
Construct and interpret a two-way table summarizing data on two categorical variables collected from the same subjects. Use relative frequencies calculated for rows or columns to describe possible association between the two variables. For example, collect data from students in your class on whether or not they have a curfew on school nights and whether or not they have assigned chores at home. Is there evidence that those who have a curfew also tend to have chores?
{"url":"http://www.mathscore.com/math/standards/Common%20Core/8th%20Grade/","timestamp":"2014-04-17T10:55:57Z","content_type":null,"content_length":"18104","record_id":"<urn:uuid:209ae97c-66fc-46f0-bfc5-3e776a26e375>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
The Doomsday Argument

ABSTRACT. The Doomsday argument purports to show that the risk of the human species going extinct soon has been systematically underestimated. This argument has something in common with controversial forms of reasoning in other areas, including: game theoretic problems with imperfect recall, the methodology of cosmology, the epistemology of indexical belief, and the debate over so-called fine-tuning arguments for the design hypothesis. The common denominator is a certain premiss: the Self-Sampling Assumption. We present two strands of argument in favor of this assumption. Through a series of thought experiments we then investigate some bizarre prima facie consequences – backward causation, psychic powers, and an apparent conflict with the Principal Principle.

The Self-Sampling Assumption and its use in the Doomsday argument

Let a person's birth rank be her position in the sequence of all observers who will ever have existed. For the sake of argument, let us grant that the human species is the only intelligent life form in the cosmos.[1] Your birth rank is then approximately 60 billionth, for that is the number of humans who have lived before you. The Doomsday argument proceeds as follows. Compare two hypotheses about how many humans there will have been in total:[2]

h1 := "There will have been a total of 200 billion humans."
h2 := "There will have been a total of 200 trillion humans."

Suppose that after considering the various empirical threats that could cause human extinction (species-destroying meteor impact, nuclear Armageddon, self-replicating nanobots destroying the biosphere, etc.) you still feel fairly optimistic about our prospects:

Pr(h1) = .05
Pr(h2) = .95

But now consider the fact that your birth rank is 60 billionth. According to the doomsayer, it is more probable that you should have that birth rank if the total number of humans that will ever have lived is 200 billion than if it is 200 trillion; in fact, your having that birth rank is one thousand times more probable given h1 than given h2:

Pr("My rank is 60 billionth." | h1) = 1/(200 billion)
Pr("My rank is 60 billionth." | h2) = 1/(200 trillion)

With these assumptions, we can use Bayes's theorem to derive the posterior probabilities of h1 and h2 after taking your low birth rank into account (writing R for the evidence "My rank is 60 billionth."):

Pr(h1 | R) = Pr(R | h1) Pr(h1) / [Pr(R | h1) Pr(h1) + Pr(R | h2) Pr(h2)] ≈ .98
Pr(h2 | R) = 1 - Pr(h1 | R) ≈ .02

Your rosy prior probability of 5% of our species ending soon (h1) has mutated into a baleful posterior of 98%. This is a nutshell summary of the Doomsday argument. The long version consists largely of answers to objections that have been made against the reasoning just outlined.[3] Rather than reviewing all these objections here, we shall zoom in on one of the argument's central premises: the idea that you should think of yourself as if you were in some sense a random human. This idea has independent interest beyond its use in the Doomsday argument because it plays a role in many other types of reasoning, some of which are blessed with a good deal more prima facie plausibility than is the Doomsday argument. These other forms of reasoning include methods of deriving observational consequences from cosmological models (Leslie 1989; Bostrom 2000b; Bostrom 2001a), arguments concerning how improbable was the evolution of intelligent life on Earth (Carter 1983; Carter 1989), and game theoretic problems involving imperfect recall (e.g. Grove 1997; Piccione and Rubinstein 1997; Elga 2000). Tracking down the implications of this randomness principle has relevance for each of these domains.
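The arithmetic can be checked mechanically; this short Python snippet (an editorial illustration, not part of the paper) reproduces the 98% figure:

```python
prior = {200e9: 0.05, 200e12: 0.95}        # Pr(h1), Pr(h2) over the total N
likelihood = {n: 1.0 / n for n in prior}   # Pr(R = 60 billionth | N = n), per SSA
evidence = sum(likelihood[n] * prior[n] for n in prior)
posterior = {n: likelihood[n] * prior[n] / evidence for n in prior}
print(posterior)  # {2e11: 0.981..., 2e14: 0.018...}
```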
The randomness assumption is invoked in the Doomsday argument in the step where the conditional probability of your having a specific birth rank given hypothesis h is set equal to the inverse of the number of observers that would exist if h were true. We can dub it the Self-Sampling Assumption:

(SSA) Observers should reason as if they were a random sample from the set of all observers in their reference class.

For the purposes of this paper we can take the reference class to consist of all (intelligent) observers, although one of the lessons one might want to draw from our investigation is that this is too generous a definition and that the only way that SSA can continue to be defensibly used is by incorporating some restriction on the reference class such that not all observers are included in every observer's reference class. However, even with the stipulation that we take the reference class to be the class of all observers, our formulation of SSA is still vague in that it leaves open at least two important questions: What counts as an observer? And what is the sampling density with which you have been sampled? How these areas of vagueness are resolved has serious consequences for what empirical predictions one gets when applying SSA to real empirical situations. Yet for present purposes we can sidestep these issues by introducing some simplifying assumptions. These will not change the fundamental principles involved but will on the contrary make it easier for us to focus on them. To this effect, let's consider an imaginary world where there are no borderline cases of what counts as an observer and where the observers are sufficiently similar to each other to justify using a uniform sampling density (rather than one, say, where long-lived observers get a proportionately greater weight). Thus, let us suppose for the sake of illustration that the only observers in existence are human beings, that we have no evolutionary ancestors, that all humans are fully self-aware and are knowledgeable about probability theory and anthropic reasoning, etc., that we all have identical life spans and that we are equal in any other arguably relevant respect. Assume furthermore that each human has a unique birth rank, and that the total number of humans that will ever have lived is finite.[4] Under these assumptions, we get as a corollary of SSA that

(D) Pr(R = r | N = n) = 1/n, for r ≤ n,

where R and N are random variables: N representing the total number of people that will have lived, and R the birth rank of the particular person doing the reasoning. I call this expression "D" because of its complicity in the Doomsday argument. It is responsible for supplying the premiss from which the conditional probabilities (of you having a particular birth rank given a hypothesis about the duration of the human species) are derived. Without this premiss, the Doomsday argument could not get off the ground.

Arguments for the Self-Sampling Assumption

Before taking up the pursuit of some of the counterintuitive consequences of SSA, it is worth pausing to briefly consider some arguments that support SSA. These fall into two categories. First, there are a variety of thought experiments that describe situations in which it is plausible that one should reason in accordance with SSA. Second, there are arguments pointing to a methodological need for a principle like SSA in concrete scientific applications. It can be claimed, for example, that SSA serves to bridge a troublesome cleft between cosmological theory and observation.
It can be claimed, for example, that SSA serves to bridge a troublesome cleft between cosmological theory and empirical observation. The thought experiments that seem to favor adopting SSA include:

The Dungeon
The world consists of a dungeon that has one hundred cells. In each cell there is one prisoner. Ninety of the cells are painted blue on the outside and the other ten are painted red. Each prisoner is asked to guess whether he is in a blue or a red cell. (And everybody knows all this.) You find yourself in one of the cells. What color should you think it is?

It seems that – in accordance with SSA – you should think that you are in a blue cell, with 90% probability. This answer is intuitively plausible to many people and can be backed up by additional considerations. For instance, if all prisoners bet in accordance with SSA, then ninety percent of them will win their bets; if you take part in a great number of similar experiments, then in the long run you will likely find yourself in blue cells in ninety percent of the cases; and so on. And the result doesn't seem to depend on any assumptions about how the prisoners came to inhabit the cells they are in. Whether they were assigned to their cells by a random mechanism such as drawing lots or were destined by physical laws to end up where they are makes no difference, so long as the prisoners aren't capable of figuring out their location from any knowledge they have of those circumstances. It is their subjective uncertainty that is guiding their credence assignments in Dungeon.

One might therefore think that the prisoners should assign credence in accordance with SSA only as long as they are uncertain about which cell they are in. Clearly, once you've stepped out of your cell and discovered that the outside is indeed blue, you should no longer assign a 90% credence to that hypothesis. Instead your credence is now unity (or very close to unity). Does this mean that you should reason in accordance with SSA only until such a time that you have direct empirical evidence as to what position you are in? If so, SSA would not in any way support the Doomsday argument, since we have a lot of evidence that enables us to determine what our (approximate) birth ranks in the human species are. If you should cease to regard yourself as a random sample once you have identifying information about the sample (yourself), then SSA would amount to nothing but a restricted and fairly toothless version of the principle of indifference, and it would not have any of the counterintuitive consequences that we will encounter later in this paper. That reading of SSA is not what the doomsayers have in mind, however. To see what's at stake, consider the following thought experiment:

The Incubator
Stage (a): The world consists of a dungeon with one hundred cells. The outside of each cell has a unique number painted on it (which can't be seen from the inside); the numbers being the integers between 1 and 100. The world also contains a mechanism which we can term the incubator.[5] The incubator first creates one observer in cell #1. It then activates a randomization mechanism; let's say it flips a fair coin. If the coin falls tails, the incubator does nothing more. If the coin falls heads, the incubator creates one observer in each of the cells #2–#100. Apart from this, the world is empty. It is now a time well after the coin has been tossed and any resulting observers have been created. Everyone knows all the above.
Stage (b): A little later, you have just stepped out of your cell and discovered that it is #1.

Here the suggestion is that at stage (a) you should assign a 50% probability to the coin having landed heads. Moreover, your conditional probability at stage (a) of being in a particular cell, say cell #1, given that the coin fell heads seems to be 1%, since if the coin fell heads then there are one hundred people, any one of which might be you for all you know, and only one of which is in cell #1. Similarly, your conditional probability of being in cell #1 given that the coin fell tails is 100%, since that's the only place you could be given that outcome. This is in accordance with SSA. What you should think at stage (b) seems to follow from this. If you continue to accept a prior probability of Heads equal to 50%, and conditional probabilities of being in cell #1 equal to 1% (or 100%) given Heads (or Tails), then it follows from Bayes's theorem that after finding that you are in cell #1, in order to be coherent you must assign a posterior probability of Heads equal to 1/101, and a posterior probability of Tails equal to 100/101.[6] In other words, you go from being completely ignorant about how the coin landed (50% probability of Tails) to being quite confident that it landed tails (99% probability).[7] In this reasoning you continue to regard yourself as a random sample throughout.

This is analogous to a case where you draw a random sample from an urn which contains either one ball that is numbered "#1", or one hundred balls which are numbered consecutively from "#1" to "#100". Suppose a fair coin toss determined which of these alternatives obtains, so the prior probability of the urn containing only one ball is 50%. (Let's say Tails gives one ball, Heads a hundred.) The probability that the ball you have drawn is #1 is then (1/2)(1) + (1/2)(1/100) = 101/200 = 50.5%. After you've examined the ball and found that it is #1, it remains correct to view the ball as a random sample, and of course that doesn't mean that you should continue to assign a 50.5% probability to its being #1. Rather, you simply add in the new information you've obtained about the random sample and update your beliefs accordingly. It remains the case, for example, that the conditional probability of the ball (the random sample) being #1 is much greater given Tails than given Heads, and you can use this to infer, after finding that you drew ball #1, that the coin probably fell tails and the urn contained only one ball. In the same manner, the doomsayer maintains that we should regard ourselves as random samples even though we know many facts that show that we are a product of our age and that tie us to a specific position in the human species.

The injunction that we should reason as if we were random observers is a methodological prescription about what values to give to certain conditional probabilities, in particular those of the form

Pr("I'm an observer with such and such properties." | "The world is such and such.")[8]

This methodological prescription is intended as an epistemological principle that is independent of any assumptions about us having been generated through some objectively random process. There is no need to assume a time-traveling stork that had an equal probability of dropping you off at any location throughout history where a human child was about to be delivered.

Now to the second kind of arguments for SSA. These are arguments that point to legitimate scientific needs that rely on the services provided by SSA.
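The Incubator arithmetic is simple enough to verify in a few lines; the following sketch merely restates the Bayesian update described above (variable names are ours).

    # Incubator, stage (b): update on Heads after learning you are in cell #1.
    p_heads, p_tails = 0.5, 0.5
    p_cell1_given_heads = 1 / 100    # SSA: 100 observers, each cell equally likely
    p_cell1_given_tails = 1.0        # only cell #1 is occupied if Tails
    norm = p_heads * p_cell1_given_heads + p_tails * p_cell1_given_tails
    print(p_heads * p_cell1_given_heads / norm)   # 1/101, about 0.0099
    print(p_tails * p_cell1_given_tails / norm)   # 100/101, about 0.9901

The same two-hypothesis update, with the likelihoods 1/200 billion and 1/200 trillion, reproduces the Doomsday posterior computed earlier.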
We can most readily see this in cosmology, where the basic idea is as follows. It seems that the cosmos is very big, so big in fact that we have reason to believe that every possible observation is made.[9] How can we ever test theories which say that the cosmos is that big? For any observation we specify, such theories assign a very high probability (a probability of one in the case of typical infinite-cosmos theories) to the hypothesis that that observation is made. So all such theories seem to be perfectly probabilistically compatible with every possible observation; from which it would follow that empirical evidence cannot possibly give us any reason whatever for favoring one such infinite-cosmos theory over another.[10] Even a theory saying that, say, the gravitational constant has a different value than the one we have observed would not be in any way disfavored by our observations, because even on the theory with the deviant value of the gravitational constant, observations like ours would be made, with probability one. This line of reasoning must be faulty, for cosmologists are constantly testing and revising big-cosmos theories in light of new empirical evidence. The way to resolve the conundrum, it seems, is by insisting that when evaluating a theory in light of empirical evidence, we should use the most specific version of the evidence that is known. And in this case, we know more than that some observation b has been made. We know that b has been made by us. The question thus arises, how probable was it on rival theories that we should make that particular observation? This is where SSA comes in. According to SSA, we should reason as if we were random observers. Using this principle, we can then infer that the conditional probability (given theory T) of a specific observer making observation b should be set equal to the expected fraction of all observers who (according to T) make observation b. SSA enables us to take this step from fractions to probabilities. By doing so, SSA rescues us from a methodological embarrassment, and it deserves credit for that.[11]

SSA is thus not an arbitrary or silly assumption pulled from an empty hat. It is a methodological principle supported by two fairly compelling strands of argument. This, in addition to its role in the Doomsday argument, makes it important to learn that SSA comes with a price tag attached: adopting it commits one to certain consequences which one might feel are unacceptable. Being clear about this will help us be more informed if we decide to search for a more affordable substitute for SSA. The three Adam & Eve thought experiments that follow are all variations on the same theme. They put different problematic aspects of SSA into focus.

First experiment: Serpent's Advice

Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery.[12] One day a serpent approached the couple and spoke thus: "Pssst! If you embrace each other, then either Eve will have a child or she won't. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn't become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one.
By Bayes's theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!"

Given SSA and the stated assumptions, it is easy to see that the serpent's argument is sound. Writing R ≤ 2 for the proposition that Adam and Eve are among the first two humans, we have

Pr(R ≤ 2 | ¬pregnant) = 1

and, using SSA,

Pr(R ≤ 2 | pregnant) = 2/N,

where N, the total number of humans there will have been if Eve gets pregnant, is many billions. We can assume that the prior probability of getting pregnant (based on ordinary empirical considerations) after congress is very roughly one half, Pr(pregnant) ≈ 1/2. Thus, according to Bayes's theorem we have

Pr(pregnant | R ≤ 2) = Pr(R ≤ 2 | pregnant) Pr(pregnant) / [Pr(R ≤ 2 | pregnant) Pr(pregnant) + Pr(R ≤ 2 | ¬pregnant) Pr(¬pregnant)] ≈ 2/N,

which is less than one in a billion once N exceeds two billion. Eve has to conclude that the risk of her getting pregnant is negligible. This result is counterintuitive. Most people's intuition, at least at first glance, is that it would be irrational for Eve to think that the risk is that low. It seems foolish of her to act as if she were extremely unlikely to get pregnant – it seems to conflict with empirical data. And we can assume she is fully aware of these data, at least to the extent to which they are about past events. We can assume that she has access to a huge pool of statistics, maybe based on some population of lobotomized human drones (lobotomized so that they don't belong to the reference class, the class from which Eve should consider herself a random sample). Yet all this knowledge, combined with everything there is to know about the human reproductive system, would not change the fact that it would be irrational for Eve to believe that the risk of her getting pregnant is anything other than effectively nil. This is a strange result, but it follows from SSA.[13]

Second experiment: Lazy Adam

The next example effects another turn of the screw, deriving a consequence that has an even greater degree of initial counterintuitiveness:

Assume as before that Adam and Eve were once the only people and that they know for certain that if they have a child they will be driven out of Eden and will have billions of descendants. But this time they have a foolproof way of generating a child, perhaps using advanced in vitro fertilization. Adam is tired of getting up every morning to go hunting. Together with Eve, he devises the following scheme: They form the firm intention that unless a wounded deer limps by their cave, they will have a child. Adam can then put his feet up and rationally expect with near certainty that a wounded deer – an easy target for his spear – will soon stroll by.

One can verify this result the same way as above, choosing appropriate values for the prior probabilities. The prior probability of a wounded deer limping by their cave that morning is one in ten thousand, say. In the first experiment we had an example of what looked like anomalous precognition. Here we also have (more clearly than in the previous case) the appearance of psychokinesis. If the example works, which it does if we assume SSA, it almost seems as if Adam is causing a wounded deer to walk by. For how else could one explain the coincidence? Adam knows that he can repeat the procedure morning after morning and that he should expect a deer to appear each time. Some mornings he may not form the relevant intention, and on those mornings no deer turns up. It seems too good to be mere chance; Adam is tempted to think he has magical powers.

Third experiment: Eve's Card Trick

One morning, Adam shuffles a deck of cards. Later that morning, Eve, having had no contact with the cards, decides to use her willpower to retroactively choose what card lies on top. She decides that it shall have been the queen of spades.
In order to ordain this outcome, Eve and Adam form the firm intention to have a child unless the queen of spades is on top. They can then be virtually certain that when they look at the first card they will indeed find the queen of spades. Here it looks as if the couple is, in one and the same act, performing both psychokinesis and backward causation. No mean feat before breakfast.

These three thought experiments seem to show that SSA has bizarre consequences: strange coincidences, precognition, psychokinesis and backward causation in situations where we would not expect such phenomena. If these consequences are genuine, they must surely count heavily against the unrestricted version of SSA, with ramifications for the Doomsday argument and other forms of anthropic reasoning that rely on that principle. However, we shall now see that such an interpretation misreads the experiments. The truth is more interesting than that. A careful look at the situation reveals that SSA, in subtle ways, wiggles its way out of the worst of the imputed implications.

Analysis of Lazy Adam: predictions and counterfactuals

This section discusses the second experiment, Lazy Adam. I think that the first and the third experiments can be analyzed along similar lines. Adam can repeat the Lazy Adam experiment many mornings.[14] And the experiment seems prima facie to show that, given SSA, there will be a series of remarkable coincidences between Adam's procreational intentions and appearances of wounded deer. It was suggested that such a series of coincidences could be a ground for attributing paranormal causal powers to Adam. The inference from a long series of coincidences to an underlying causal link can be disputed. Whether such an inference is legitimate would depend on how long the series of coincidences is, what the circumstances are, and also on what theory of causation one should hold. If the series were sufficiently long and the coincidences sufficiently remarkable, intuitive pressure would mount to give the phenomenon a causal interpretation; and one can fix the thought experiment so that these conditions are satisfied. For the sake of argument, we may assume the worst case for SSA, namely that if the series of coincidences occurs then Adam has anomalous causal powers. I shall argue that even if we accept SSA, we can still think that neither strange coincidences nor anomalous causal powers would have existed if the experiment had been carried out.

We need to be careful when stating what is implied by the argument given in the thought experiment. All that was shown is that Adam would have reason to believe that his forming the intentions will have the desired outcome. The argument can be extended to show that Adam would have reason to believe that the procedure can be repeated: provided he keeps forming the right intentions, he should think that morning after morning, a wounded deer will turn up. If he doesn't form the intention on some mornings, then on those mornings he should expect deer not to turn up. Adam thus has reason to think that deer turn up on those and only those mornings for which he formed the relevant intention. In other words, Adam has reason to believe there will be a coincidence. However, we cannot jump from this to the conclusion that there will actually be a coincidence. Adam could be mistaken. And he could be mistaken even though he is (as the argument in Lazy Adam showed, assuming SSA) perfectly rational.
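To see concretely how strong Adam's rational (but, as we shall argue, mistaken) expectation is, here is the Lazy Adam computation with the one-in-ten-thousand prior stipulated above. The figure of one billion descendants is our illustrative placeholder; the text only says "billions".

    # Lazy Adam: Adam's credence that a wounded deer turns up, given his intention.
    p_deer = 1e-4                    # prior probability of a deer limping by
    n_if_no_deer = 1e9               # total humans if no deer turns up (child is born)
    n_if_deer = 2                    # total humans if a deer turns up (no child)
    like_deer = 2 / n_if_deer        # Pr("we are among the first two" | deer) = 1
    like_no_deer = 2 / n_if_no_deer  # Pr("we are among the first two" | no deer), via SSA
    norm = p_deer * like_deer + (1 - p_deer) * like_no_deer
    print(p_deer * like_deer / norm) # about 0.99998: near-certainty of a deer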
Imagine for a moment that you are looking at the situation from an external point of view. That is, suppose (per impossibile?) that you are an intelligent observer who is not a member of the reference class. Suppose you know the same non-indexical facts as Adam; that is, you know the same things as he does except such things as that "I am Adam" or "I am among the first two humans", etc. Then the probability you should assign to the proposition that a deer will limp by Adam's cave one specific morning, conditional on Adam having formed the relevant intention earlier that morning, is the same as what we called Adam's prior probability of deer walking by – one in ten thousand. As an external observer you would consequently not have reason to believe that there was going to be a coincidence.[15]

Adam and the external observer, both being rational but having different information, make different predictions. At least one of them must be mistaken (although both are "right" in the sense of doing the best they can with the evidence available to them). In order to determine who was in fact mistaken, we would have to decide whether there would be a coincidence or not. Nothing said so far settles this question. There are possible worlds where a deer does turn up on precisely those mornings when Adam forms the intention, and there are other possible worlds with no such coincidence. The description of the thought experiment does not specify which of these two kinds of possible worlds we are referring to; it is underdetermined in this respect.

So far so good, but we want to be able to say something stronger. Let's pretend there actually once existed these two first people, Eve and Adam, and that they had the reproductive capacities described in the experiment. We would want to say that if the experiment had actually been done (i.e. if Adam had formed the relevant intentions on certain mornings) then almost certainly he would have found no coincidence. Almost certainly, no wounded deer would have turned up. That much seems common sense. If SSA forced us to relinquish that conviction, it would count quite strongly as a reason for rejecting SSA. We therefore have to evaluate the counterfactual: If Adam had formed the relevant intentions, would there have been a coincidence?

To answer this, we need a theory of conditionals. I will use a simplified version of David Lewis's theory,[16] but I think what I will say generalizes to other accounts of conditionals. Let w denote the actual world. (We are pretending that Adam and Eve actually existed and that they had the appropriate reproductive abilities, etc.) To determine what would have happened had Adam formed the relevant intentions, we look at the closest[17] possible world w' where he did do the experiment. Let t be the time when Adam would have formed the intentions. When comparing worlds for closeness to w, we are to disregard features of them that exclusively concern what happens after t. Thus we seek a world in which Adam forms the intentions and which is maximally similar to w in two respects: first, in its history up to t; and, second, in its laws. Is the closest such world (w') one where deer turn up in accordance with Adam's intentions, or is it one where there is no Adam–deer correlation? The answer is quite clearly that there is no Adam–deer correlation in w'. For such a w' can be more similar to w on both accounts than can any world containing the correlation.
Regarding the first account, whether there is a coincidence or not in a world presumably makes little difference as to how similar it can be to w with respect to its history up to t. But what difference it makes is in favor of no coincidence. This is so because in the absence of a correlation the positions and states of the deer in the neighborhood, at or shortly before t, could be exactly as in w (where none happened to stroll past Adam's cave on the mornings when he did the experiment). The presence of a correlation, on the other hand, would entail a world that would be somewhat different regarding the initial states of the deer. Perhaps more decisively, a world with no Adam–deer correlation would tend to win out on the second account as well. w doesn't (as far as we know) contain any instances of anomalous causation; the laws of w do not support anomalous causation. The laws of any world containing an Adam–deer correlation – at least if the correlation were of the sort that would prompt us to ascribe it to an underlying causal connection – include laws supporting anomalous causation. By contrast, a world lacking the Adam–deer correlation could easily have laws exactly as in w. Similarity of laws would therefore also favor a w' with no correlation. Since there is no correlation in w', the following statement is true: "If Adam had formed the intentions, he would have found no correlation." Although Adam would have reason to think that there would be a coincidence, he would find he was mistaken.

One might wonder: if we know all this, why can't Adam reason in the same way? Couldn't he too figure out that there will be no coincidence? He couldn't, and the reason is that he is lacking some knowledge you and I have. Adam has no knowledge of the future that will show that his creative hunting technique will fail. If he does his experiment and deer do turn up on precisely those mornings he forms the intention, then it could (especially if the experiment were successfully repeated many times) be the case that the effect should be ascribed to a genuine psychokinetic capacity. If he does the experiment and no deer turns up, then of course he has no such capacity. But he has no means of knowing that no deer turns up. The evidence available to him strongly favors the hypothesis that there will be a coincidence. So although Adam may understand the line of reasoning that we have been pursuing here, it will not lead him to the conclusion we arrived at, because he lacks a crucial premiss.

There is a puzzling point here that needs to be addressed. Adam knows that if he forms the intentions then he will very likely witness a coincidence. But he also knows that if he doesn't form the intentions then it will be the case that he lives in a world like w, where it is true that had he done the experiment he would most likely not have witnessed a coincidence. That looks paradoxical. Adam's forming (or not forming) the conditional procreational intentions gives him relevant information. Yet the only information he gets is about what choice he made. If that information makes a difference as to whether he should expect to see a coincidence, isn't that just to say that his choice affects whether there will be a coincidence or not? If so, it would seem he has got paranormal powers after all. A more careful analysis reveals that this conclusion doesn't follow. True, the information Adam gets when he forms the intentions is about what choice he made.
This information has a bearing on whether to expect a coincidence or not, but that doesn't mean that the choice is a cause of the coincidence. It is simply an indication of a coincidence. Some things are good indicators of other things without causing them. Take the stock example: the barometer's falling may be a good indicator of impending rain, if you know something about how barometers work, but it is certainly not a cause of the rain. Similarly, there is no need to think of Adam's decision to procreate if and only if no deer walks by as a cause of that event, although it will lead Adam to rationally believe that that event will happen.

One may feel that an air of mystery lingers on. Maybe we can put it into words as follows: Let E be the proposition that Adam forms the reproductive intention at time t = 1, and let C stand for the proposition that there is a coincidence at time t = 2 (i.e. that a deer turns up). It would seem that the above discussion commits one to the view that at t = 0 Adam knows (probabilistically) the following:

(1) If E then C.
(2) If ¬E then ¬C.
(3) If ¬E then "if E then it would have been the case that ¬C".

And there seems to be a conflict between (1) and (3). I suggest that the appearance of a conflict is due to an equivocation in (3). To bring some light into this, we can paraphrase (1) and (2) as:

(1') Pr_Adam(C | E) ≈ 1
(2') Pr_Adam(¬C | ¬E) ≈ 1

But we cannot paraphrase (3) as:

(3') Pr_Adam(¬C | E) ≈ 1

When I said earlier, "If Adam had formed the intentions, he would have found no correlation", I was asserting this on the basis of background information that is available to us but not to Adam. Our set of background knowledge differs from Adam's in respect to both non-indexical facts (we have observed the absence of any subsequent correlation between people's intentions and the behavior of deer) and indexical facts (we know that we are not among the first two people). Therefore, if (3) is to have any support in the preceding discussion, it should be explicated as:

(3'') Pr_We(¬C | E) ≈ 1

This is not in conflict with (1'). I also asserted that Adam could know this. This gives:

(4) Pr_Adam("Pr_We(¬C | E) ≈ 1") ≈ 1

At first sight, it might seem as if there is a conflict between (4) and (1). However, appearances in this instance are deceptive. Let's first see why it could appear as if there is a conflict. It has to do with the relationship between Pr_Adam and Pr_We. We have assumed that Pr_Adam is a rational probability assignment (in the sense: not just coherent but "reasonable, plausible, intelligent" as well) relative to the set of background knowledge that Adam has at t = 0. And Pr_We is a rational probability assignment relative to the set of background knowledge that we have, say at t = 3. (And of course we pretend that we know that there actually was this fellow Adam at t = 0 and that he had the appropriate reproductive abilities, etc.) But now, if we know everything Adam knew, and if in addition we have some extra knowledge, and if Adam knows that, then it is irrational of him to persist in believing what he believes. Instead he ought to adopt our beliefs, which he knows are based on more information. At least this follows if we assume, as we may in this context, that our a priori probability function is identical to Adam's, that we haven't made any computational error, and that Adam knows all this. That would then imply (3') after all, which contradicts (1').
The fallacy in this argument is that it assumes that Adam knows that we know everything he knows. Adam doesn't know that, because he doesn't know that we exist. He may well know that if we exist then we will know everything (at least every objective – non-indexical – piece of information) that he knows and then some. But as far as he is concerned, we are just hypothetical beings.[18] So all that Adam knows is that there is some probability function, the one we denoted Pr_We, that gives a high conditional probability of ¬C given E. That gets him nowhere. There are infinitely many probability functions, and not knowing that we will actually exist, he has no more reason to tune his own credence to our probability function than to any other.

To summarize the results so far, what we have shown is the following: Granting SSA, we should think that if Adam and Eve had carried out the experiment, there would almost certainly not have been any strange coincidences. There is thus no reason to ascribe anomalous causal powers to Adam. Eve and Adam would rationally think otherwise, but they would simply be mistaken. Although they can recognize the line of reasoning we have been pursuing, they won't be moved by its conclusion, because it hinges on a premiss that we – but not they – know is true. Good news for SSA.

One more point needs to be addressed in relation to Lazy Adam. We have seen that what the thought experiments demonstrate is not strange coincidences or anomalous causation but simply that Adam and Eve would be misled. Now, there might be a temptation to see this by itself as a ground for rejecting SSA – if a principle misleads people, it is not reliable and should not be adopted. However, this temptation is to be resisted. There is a good answer available to the SSA-proponent, as follows: It is in the nature of probabilistic reasoning that some people using it, if they are in unusual circumstances, will be misled. Eve and Adam were in highly unusual circumstances – they were the first two humans – so we shouldn't be too impressed by the fact that the reasoning based on SSA didn't work for them. For a fair assessment of the reliability of SSA we have to look at how it performs not only in exceptional cases but in more normal cases as well.

Compare the situation to the Dungeon gedanken. There, remember, one hundred people were placed in different cells and were asked to guess the color of the outside of their own cell. Ninety cells were blue and ten red. SSA recommended that a prisoner think that with 90% probability he is in a blue cell. If all prisoners bet accordingly, 90% of them will win their bets. The unfortunate 10% who happen to be in red cells lose their bets, but it would be unfair to blame SSA for that. They were simply unlucky. Overall, SSA leads 90% to win, compared to merely 50% if SSA is rejected and people bet at random. This consideration works in favor of SSA.

What about the "overall effect" of everybody adopting SSA in the three experiments pondered above? Here the situation is more complicated, because Adam and Eve have much more information than the people in the cells. Another complication is that these are stories in which there are two competing hypotheses about the total number of observers. In both these respects the thought experiments are similar to the Doomsday argument and presumably no easier to settle.
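The Dungeon betting claim is easy to check by simulation. The sketch below assumes, purely for modelling purposes, that an observer's cell is drawn uniformly at random; SSA itself only says to reason as if this were so.

    import random

    # Dungeon: long-run fraction of runs in which a randomly placed observer
    # who always bets "blue" wins the bet.
    def simulate(runs=100_000, n_blue=90, n_red=10, seed=0):
        rng = random.Random(seed)
        cells = ["blue"] * n_blue + ["red"] * n_red
        wins = sum(rng.choice(cells) == "blue" for _ in range(runs))
        return wins / runs

    print(simulate())  # about 0.90, versus 0.50 for betting at random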
What we are interested in here is finding out whether there are some other problematic consequences of SSA which are not salient in the Doomsday argument – such as strange coincidences and anomalous causation.

The UN++ gedanken: reasons, abilities, and decision theory

We shall now discuss a thought experiment which is similar to the Adam & Eve experiments but differs in that it is one that we might actually one day be able to carry out.

It is the year 2100 A.D. and technological advances have enabled the formation of an all-powerful and extremely stable world government, UN++. Any decision about human action taken by UN++ will certainly be implemented. However, the world government does not have complete control over natural phenomena. In particular, there are signs that a series of n violent gamma ray bursts is about to take place at uncomfortably close quarters in the near future, threatening to damage (but not completely destroy) human settlements. For each hypothetical gamma ray burst in this series, astronomical observations give a 90% chance of it coming about. However, UN++ rises to the occasion and passes the following resolution: It will create a list of hypothetical gamma ray bursts, and for each entry on this list it decides that if the burst happens, it will build more space colonies so as to increase the total number of humans that will ever have lived by a factor of m. By arguments analogous to those in the earlier thought experiments, UN++ can then be confident that the gamma ray bursts will not happen, provided m is sufficiently great compared to n.

The UN++ experiment introduces a new difficulty. For although creating UN++ and persuading it to adopt the plan would no doubt be a daunting undertaking, it is the sort of project that we could quite conceivably carry out by non-magical means. The UN++ experiment places us in more or less the same situation as Adam and Eve in the other three experiments. This twist compels us to carry the investigation one step further.

Let us suppose that if there is a long series of coincidences ("C") between items on the UN++ target list and failed gamma ray bursts, then there is anomalous causation ("AC"). This supposition is more problematic than the corresponding assumption when we were discussing Adam and Eve. For the point of the UN++ experiment is that it claims some degree of practical possibility, and it is not clear that this supposition could be satisfied in the real world. It depends on the details and on what view of causation one holds, but it could well be that the list of coincidences would have to be quite long before one would be inclined to regard it as a manifestation of an underlying causal link. And since the number of people that UN++ would have to create in case of failure increases rapidly as the list grows longer, it is not clear that such a plan is feasible. But let's shove this scruple to one side in order to give the objector to SSA as good a shot as he can hope to have.

A first point is that even if we accept SSA, it doesn't follow that we have reason to believe that C will happen. For we might think it unlikely both that UN++ will ever be formed and that, if formed, it will adopt and carry out the relevant sort of plan. Without UN++ being set up to execute the plan, there is of course no reason to expect C (and consequently no reason to believe that there will be AC). But there is a more subtle way of attempting to turn this experiment into an objection against SSA.
One could argue that we know that we now have the causal powers to create UN++ and make it adopt the plan; and we have good reason (given SSA) to think that if we do this then there will be C and hence AC. But if we now have the ability to bring about AC then we now, ipso facto, have AC. Since this is absurd, we should reject SSA.

This reasoning is fallacious. Our forming UN++ and making it adopt the plan would be an indication to us that there is a correlation between the list and gamma ray bursts.[19] But it would not cause there to be a correlation unless we do in fact have AC. If we don't have AC then forming UN++ and making it adopt the plan (call this event "A") has no influence whatever on astronomical phenomena, although it misleads us into thinking it has. If we do have AC of the relevant sort, then of course the same actions would influence astronomical phenomena and cause a correlation. But the point is this: the fact that we have the ability to do A does not in any way determine whether we have AC. It doesn't even imply that we have reason to think that we have AC.

In order to be perfectly clear about this point, let me explicitly write down the inference I am rejecting. I'm claiming that from the following two premises:

(5) We have strong reasons to think that if we do A then we will have brought about C.
(6) We have strong reasons to think that we have the power to do A.

one cannot legitimately infer:

(7) We have strong reasons to think that we have the power to bring about C.

My reason for rejecting this inference is that one can consistently hold the conjunction of (5) and (6) together with the following:

(8) If we don't do A then the counterfactual "Had we done A then C would have occurred" is false.

There might be a temptation to think that the counterfactual in (8) would have been true even if we don't do A. I suggest that this is due to the fact that (granting SSA) our conditional probability of C given that we do A is large. Let's abbreviate this conditional probability "Pr(C|A)". If Pr(C|A) is large, doesn't that mean that C would (probably) have happened if we had done A? Not so. One must not confuse the conditional probability Pr(C|A) with the counterfactual "C would have happened if A had happened". For one thing, the reason why your conditional probability Pr(C|A) is large is that you have included indexical information (about your birth rank) in the background information. Yet one may well choose to exclude indexical information from the set of facts upon which counterfactuals are to supervene. (Especially so if one intends to use counterfactuals to define causality, which should presumably be an objective notion and therefore independent of indexical facts – see the next section for some further thoughts on this.) So, to reiterate, even though Pr(C|A) is large (as stated in (5)) and even though we can do A (as stated in (6)), we still know that, given that we don't do A, C almost certainly does not happen and would not have happened even if we had done A. As a matter of fact, we have excellent grounds for thinking that we won't do A.

The UN++ experiment, therefore, does not show that we have reason to think that there is AC. Good news for SSA, again.

Finally, although it may not be directly relevant to assessing whether SSA is true, it is interesting to ask: Would it be rational (given SSA) for UN++ to adopt the plan?[20] UN++ should decrease its credence in the proposition that a gamma ray burst will occur if it decides to adopt the plan.
Its conditional credence Pr(Gamma ray burst | A) is smaller than Pr(Gamma ray burst); that is what the thought experiment showed. Provided a gamma ray burst has a sufficiently great negative utility, non-causal decision theories would recommend that we adopt the plan if we can. What about causal decision theories? If our theory of causation is one on which no AC would be involved even if C happens, then obviously causal decision theories would say that the plan is misguided and shouldn't be adopted. The case is more complicated on a theory of causation that says that there is AC if C happens. UN++ should then believe the following: If it adopts the plan, it will have caused the outcome of averting the gamma ray bursts; if it doesn't adopt the plan, then it is not the case that had it adopted the plan it would have averted the gamma ray bursts. (This essentially just repeats (5) and (8).) The question is whether causal decision theories would under these circumstances recommend that UN++ adopt the plan. The decision that UN++ makes gives it information about whether it has AC or not. Yet, when UN++ deliberates on the decision, it can only take into account information available to it prior to the decision, and this information doesn't suffice to determine whether it has AC. UN++ therefore has to make its decision under uncertainty. Since on a causal decision theory UN++ should do A only if it has AC, UN++ would have to act on some preliminary guess about how likely it is that it has AC; and since AC is strongly correlated with what decision UN++ makes, it would also base its decision, implicitly at least, on a guess about what its decision will be. If it thinks it will eventually choose to do A, it has reason to think it has AC, and thus it should do A. If it thinks it will eventually choose not to do A, it has reason to think that it hasn't got AC, and thus should not do A. UN++ is therefore faced with a somewhat degenerate decision problem in which it should choose whatever it initially guesses it will come to choose. More could no doubt be said about the decision-theoretic aspects of this scenario,[21] but we will leave it at that.

Quantum Joe: SSA and the Principal Principle

Our final thought experiment probes the connection between SSA and objective chance:

Quantum Joe
Joe, the amateur scientist, has discovered that he is alone in the cosmos so far. He builds a quantum device which according to quantum physics has a one-in-ten chance of outputting any given single-digit integer. He also builds a reproduction device which when activated will create ten thousand clones of Joe. He then hooks up the two so that the reproduction device will kick into action unless the quantum device outputs a zero; but if the output is a zero, then the reproduction machine will be destroyed. There are not enough materials left for Joe to reproduce in some other way, so he will then have been the only observer.

We can assume that quantum physics correctly describes the objective chances associated with the quantum device, that Everett-type interpretations (including the many-worlds and the many-minds interpretations) are false, and that Joe knows this. Using the same kinds of argument as before, we can show that Joe should expect a zero to come up, even though the objective (physical) chance is a mere 10%. (The arithmetic is sketched below.) Our reflections on the Adam & Eve and UN++ experiments apply to this gedanken also. But here we shall focus on another problem: the apparent conflict between SSA and David Lewis's Principal Principle.
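Here is our sketch of the SSA update behind Joe's expectation; treating "being the first (and so far only) observer" as Joe's indexical evidence follows the pattern of the earlier experiments.

    # Quantum Joe: credence in "output is zero", given that Joe is observer #1.
    p_zero = 0.1                    # objective chance of a zero
    n_if_zero = 1                   # Joe remains the only observer
    n_if_other = 10_001             # Joe plus ten thousand clones
    like_zero = 1 / n_if_zero       # Pr("I am observer #1" | zero), via SSA
    like_other = 1 / n_if_other     # Pr("I am observer #1" | non-zero)
    norm = p_zero * like_zero + (1 - p_zero) * like_other
    print(p_zero * like_zero / norm)  # about 0.999, far above the 10% chance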
The Principal Principle requires, roughly, that one proportion one's credence in a proposition B in accordance with one's estimate of the objective chance that B will come true (Lewis 1980; Mellor 1971). For example, if you know that the objective chance of B is x%, then your subjective credence in B should be x%, provided you don't have "inadmissible" information. An early formalization of this idea turned out to be inconsistent when applied to so-called "undermining" futures, but this problem has recently been solved through the introduction of the "new Principal Principle", which states that:

Pr(B | HT) = Ch(B | T)

Here H is a proposition giving a complete specification of the history of the world up to time t, T is the complete theory of chance for the world (giving all the probabilistic laws), Pr is a rational credence function, and Ch is the chance function specifying the world's objective probabilities at time t. (For an explanation of the modus operandi of this principle and of how it can constitute the centerpiece of an account of objective chance, see Lewis 1994; Thau 1994; Hall 1994.)

Now, Quantum Joe knows all the relevant aspects of the history of the world up to the time when he is about to activate the quantum device. He also has complete knowledge of quantum physics, the correct theory of chance for the world in which he is living. If we let B be the proposition that the quantum device outputs a zero, the new Principal Principle thus seems to recommend that he should set his credence in B equal to Ch(B | T) = 1/10. Yet the SSA-based argument shows that his credence should be ≈ 1. Does SSA therefore require that we give up the Principal Principle?

I think this can be answered in the negative, as follows. True, Joe's credence in getting a zero should diverge from the objective chance of that outcome, even though he knows what that chance is. But that is because he is basing his estimation on inadmissible information. That being so, the new Principal Principle does not apply to Joe's situation. The inadmissible information is indexical information about Joe's own position in the human species. Normally, indexical information does not affect one's subjective credence in propositions whose objective chances are known. But in certain kinds of cases, such as the one we are dealing with here, indexical information turns out to be relevant and must be factored in. It is not really surprising that the Principal Principle, which expresses the connection between objective chance and rational subjective credence, is trumped by other considerations in cases like these. For objective chances can be seen as concise, informative summaries of patterns of local facts about the world. (That is certainly how they are seen in Lewis's analysis.) But the facts that form the supervenience base for chances are rightly taken not to include indexical facts, for chances are meant to be objective. Since indexical information is not baked into chances, it is only to be expected that your subjective credence may have to diverge from known objective chances if you have additional information of an indexical character that needs to be taken into account. So Quantum Joe can coherently believe that the objective chance (as given by quantum physics) of getting a zero is 10% and yet set his credence in that outcome close to one; he can accept both the Principal Principle and SSA.

SSA is a central premiss in the Doomsday argument.
We have considered two strands of argument that support SSA: one based on thought experiments where many people have intuitions that lead to conclusions parallel to that of the Doomsday argument; the other based on the scientific need for a methodological principle that can establish a link between big-world cosmologies and observational consequences – a role that SSA is able to fill. These arguments establish at least that SSA deserves serious attention. It behooves anybody who would reject SSA to show why these arguments fail, and to propose a better principle in its stead.

We then turned to consider some challenges to SSA. In Lazy Adam, it looked as though on the basis of SSA we should think that Adam had the power to produce anomalous coincidences by will, exerting a psychokinetic influence on the nearby deer population. On closer inspection, it turned out that SSA implies no such thing. It gives us no reason to think that there would have been coincidences or psychic causation if Adam had carried out the experiment. SSA does lead Adam to think otherwise, but he would simply have been mistaken. We argued that the fact that SSA would have misled Adam is no good argument against SSA. For it is in the nature of probabilistic reasoning that exceptional users will be misled, and Adam is such a user. To assess the reliability of SSA-based reasoning one has to look not only at the special cases where it fails but also at the normal cases where it succeeds. We noted that in the Dungeon experiment, SSA maximizes the fraction of observers who are right.

With the UN++ gedanken, the scene was changed to one where we ourselves might actually have the ability to step into the role of Adam. We found that SSA does not give us reason to think that there will be strange coincidences or that we (or UN++) have anomalous causal powers. However, there are some hypothetical (empirically implausible) circumstances under which SSA would entail that we had reason to believe these things. If we knew for certain that UN++ existed, had the power to create observers in the requisite numbers, and possessed sufficient stability to certainly follow through on its original plan, and that the other presuppositions behind the thought experiment were also satisfied – no extraterrestrials, all observers created are in the reference class, etc. – then SSA implies that we should expect to see strange coincidences, namely that the gamma ray bursts on the UN++ target list would fizzle. (Intuitively: because this would make it enormously less remarkable that we should have the birth ranks we have.) But we should think it extremely unlikely that this situation will arise.[22]

Finally, in Quantum Joe we examined an ostensible conflict between SSA and the Principal Principle. It was argued that this conflict is merely apparent, because the SSA line of reasoning relies on indexical information that should properly be regarded as "inadmissible" and thus outside the scope of the Principal Principle.

These triumphs notwithstanding, it is fair to characterize the SSA-based advice to Eve, that she need not worry about pregnancy, its recommendation to Adam, that he should expect a deer to walk by given that the appropriate reproductive intentions are formed, and Quantum Joe's second-guessing of quantum physics, as deeply counterintuitive results. We are forced to espouse these implications if we accept the version of SSA discussed in this paper.
Maybe the lesson is that we should search for a version of SSA that avoids these consequences.[23] Thus modifying SSA may pull the rug from under the Doomsday argument.

Acknowledgments. I'm grateful for interesting discussions with Craig Callender, Milan M. Ćirković, Dennis Dieks, William Eckhardt, Adam Elga, Paul Franceschi, Mark Greenberg, Colin Howson, John Leslie, Peter Milne, Ken Olum, Elliott Sober, and Roger White, for helpful comments by three anonymous referees, and for audience comments on an earlier version of the paper presented at a conference by the London School of Advanced Study on the Doomsday argument (London, Nov. 6, 1998). I gratefully acknowledge a research grant from the John Templeton Foundation.

References

Bartha, P. and Hitchcock, C. (1999). "No One Knows the Date or the Hour: An Unorthodox Application of Rev. Bayes's Theorem." Philosophy of Science (Proceedings) 66: S229-S353.
Bartha, P. and Hitchcock, C. (2000). "The Shooting-Room Paradox and Conditionalizing on Measurably Challenged Sets." Synthese 108 (3): 403-437.
Bostrom, N. (1999). "The Doomsday Argument is Alive and Kicking." Mind 108 (431): 539-550.
Bostrom, N. (2000a). "Observer-relative chances in anthropic reasoning?" Erkenntnis 52: 93-108.
Bostrom, N. (2000b). "Observational Selection Effects and Probability." Doctoral dissertation, Department of Philosophy, London School of Economics, London. Available at http://
Bostrom, N. (2001a). "Are Cosmological Theories Compatible with All Possible Evidence? A Missing Methodological Link." In preparation. Preprint at http://www.anthropic-principle.com/
Bostrom, N. (2001b). "The Meta-Newcomb Problem." Analysis. In press.
Carter, B. (1983). "The anthropic principle and its implications for biological evolution." Phil. Trans. R. Soc. A 310: 347-363.
Carter, B. (1989). "The anthropic selection principle and the ultra-Darwinian synthesis." In The Anthropic Principle, eds. F. Bertola and U. Curi. Cambridge University Press, Cambridge.
Coles, P. and Ellis, G. (1994). "The Case for an Open Universe." Nature 370 (6491): 609-615.
Dieks, D. (1992). "Doomsday – Or: the Dangers of Statistics." Philosophical Quarterly 42 (166): 78-84.
Elga, A. (2000). "Self-locating Belief and the Sleeping Beauty problem." Analysis 60 (2): 143-147.
Freedman, W. L. (2000). "The Hubble constant and the expansion age of the Universe." Physics Reports 333 (1-6): 13-31.
Gott, J. R. (1993). "Implications of the Copernican principle for our future prospects." Nature 363 (27 May): 315-319.
Gott, J. R. (1994). "Future prospects discussed." Nature 368 (10 March): 108.
Gott, J. R. (1996). "Clusters, Lensing, and the Future of the Universe." Astronomical Society of the Pacific Conference Series, Vol. 88, San Francisco, eds. V. Trimble and A. Reisenegger.
Grove, A. J. (1997). "On the Expected Value of Games with Absentmindedness." Games and Economic Behaviour 20: 51-65.
Hall, N. (1994). "Correcting the Guide to Objective Chance." Mind 103 (412): 505-517.
Hawking, S. and Israel, W., eds. (1979). General Relativity: An Einstein Centenary Survey. Cambridge, Cambridge University Press.
Kopf, T., Krtous, P., et al. (1994). "Too soon for doom gloom." Preprint at http://xxx.lanl.gov/abs/gr-qc/9407002.
Lachièze-Rey, M. and Luminet, J.-P. (1995). "Cosmic Topology." Physics Reports 254 (3): 135-214.
Leslie, J. (1989). Universes. London, Routledge.
Leslie, J. (1992). "Doomsday Revisited." Philosophical Quarterly 42 (166): 85-87.
Leslie, J. (1993). "Doom and Probabilities." Mind 102 (407): 489-491.
Leslie, J. (1996). The End of the World: The Science and Ethics of Human Extinction. London, Routledge.
Lewis, D. (1980). "A Subjectivist Guide to Objective Chance." In Richard C. Jeffrey, ed., Studies in Inductive Logic and Probability, Vol. II. Berkeley, University of California Press. Reprinted with postscripts in Lewis 1986, pp. 83-132.
Lewis, D. (1986). Philosophical Papers. New York, Oxford University Press.
Lewis, D. (1994). "Humean Supervenience Debugged." Mind 103 (412): 473-490.
Linde, A. (1990). Inflation and Quantum Cosmology. San Diego, Academic Press.
Martin, J. L. (1995). General Relativity. 3rd edition, London, Prentice Hall.
Mellor, D. H. (1971). The Matter of Chance. Cambridge, Cambridge University Press.
Bahcall, N. A., et al. (1999). "The Cosmic Triangle: Revealing the State of the Universe." Science 284 (5419): 1481-1488.
Nielsen, H. B. (1989). "Random dynamics and relations between the number of fermion generations and the fine structure constants." Acta Physica Polonica B20: 427-468.
Olum, K. (2000). "The doomsday argument and the number of possible observers." Preprint at http://xxx.lanl.gov/abs/gr-qc/0009081.
Perlmutter, S., et al. (1999). "Measurements of Omega and Lambda from 42 high-redshift supernovae." Astrophysical Journal 517: 565-586.
Perry, J. (1977). "Frege on Demonstratives." Philosophical Review 86: 474-497.
Piccione, M. and Rubinstein, A. (1997). "On the Interpretation of Decision Problems with Imperfect Recall." Games and Economic Behaviour 20: 3-24.
Riess, A. (2000). "The Case for an Accelerating Universe from Supernovae." Publications of the Astronomical Society of the Pacific 122: 1284-1299.
Thau, M. (1994). "Undermining and Admissibility." Mind 103 (412): 491-503.
Zehavi, I. and Dekel, A. (1999). "Evidence for a positive cosmological constant from flows of galaxies and distant supernovae." Nature 401 (6750): 252-254.

The Presumptuous Philosopher

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show you that T2 is about a trillion times more likely to be true than T1!" (Whereupon the philosopher runs the Incubator thought experiment and appeals to SIA, the Self-Indication Assumption.)
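For concreteness, the bookkeeping behind the philosopher's claim can be sketched as follows. SIA, on its standard reading, weights hypotheses by the number of observers they posit, so with indifferent priors the posterior odds equal the population ratio; the exponents below are just the "trillion trillion" figures from the story.

    # Presumptuous Philosopher: SIA posterior odds for T2 over T1,
    # starting from indifferent (50/50) priors.
    n_t1 = 1e24        # a trillion trillion observers
    n_t2 = 1e36        # a trillion trillion trillion observers
    odds_t2_over_t1 = (0.5 * n_t2) / (0.5 * n_t1)
    print(odds_t2_over_t1)  # 1e12: T2 comes out "a trillion times more likely"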
{"url":"http://www.anthropic-principle.com/preprints/cau/paradoxes.html","timestamp":"2014-04-20T15:51:06Z","content_type":null,"content_length":"117599","record_id":"<urn:uuid:87ba3354-a9e3-40a5-9ecc-23d83037ba2a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Article, Statistics and Computing, 1997

ADE-4: a multivariate analysis and graphical display software

Jean Thioulouse (1), Daniel Chessel (2), Sylvain Dolédec (2) & Jean-Michel Olivier (2)

(1) Laboratoire de Biométrie, Génétique et Biologie des Populations, UMR CNRS 5558, Université Lyon 1, 69622 Villeurbanne Cedex, France.
(2) Laboratoire d'Ecologie des Eaux Douces et des Grands Fleuves, URA CNRS 1974, Université Lyon 1, 69622 Villeurbanne Cedex, France.

As of December 2006, the article totals more than 500 citations.

Reference: Thioulouse J., Chessel D., Dolédec S., & Olivier J.M. (1997) ADE-4: a multivariate analysis and graphical display software. Statistics and Computing, 7, 1, 75-83.

Contents
1. Introduction
2. The user interface
2.1 Computational modules
2.2 Graphical modules
2.3 HyperCard and WinPlus interface
3. Data analysis methods
3.1 One table methods
3.2 One table with spatial structures
3.3 One table with groups of rows
3.4 Linear regression
3.5 Two-tables coupling methods
3.6 Coinertia analysis method
3.7 K-table analysis methods
4. Graphical representations
4.1 One dimensional graphics
4.2 Curves
4.3 Scatters
4.4 Cartography modules
5. Conclusion

Abstract

We present ADE-4, a multivariate analysis and graphical display software. Multivariate analysis methods available in ADE-4 include usual one-table methods like principal component analysis and correspondence analysis, spatial data analysis methods (using a total variance decomposition into local and global components, analogous to Moran and Geary indices), discriminant analysis and within/between groups analyses, many linear regression methods including lowess and polynomial regression, multiple and PLS (partial least squares) regression and orthogonal regression (principal component regression), projection methods like principal component analysis on instrumental variables, canonical correspondence analysis and many other variants, coinertia analysis and the RLQ method, and several three-way table (k-table) analysis methods. Graphical display techniques include an automatic collection of elementary graphics corresponding to groups of rows or to columns in the data table, thus providing a very efficient way of producing automatic k-table graphics and geographical mapping options. A dynamic graphic module allows interactive operations like searching, zooming, selection of points, and display of data values on factor maps. The user interface is simple and homogeneous among all the programs; this contributes to making ADE-4 very easy to use for non-specialists in statistics, data analysis or computer science.

Key words: Multivariate analysis, principal component analysis, correspondence analysis, instrumental variables, canonical correspondence analysis, partial least squares regression, coinertia analysis, graphics, multivariate graphics, interactive graphics, Macintosh, HyperCard, Windows 95.

Corresponding author: Jean Thioulouse, Laboratoire de Biométrie, Université Lyon 1, 69622 Villeurbanne Cedex, France.

1. Introduction

ADE-4 is a multivariate analysis and graphical display software package for Apple Macintosh and Windows 95 microcomputers. It is made of several stand-alone applications, called modules, that feature a wide range of multivariate analysis methods, from simple one-table analysis to three-way table analysis and two-table coupling methods. It also provides many possibilities for helpful graphical display in the process of analyzing multivariate data sets.
It has been developed in the context of environmental data analysis, but can be used in other scientific disciplines (e.g., sociology, chemometrics, geosciences, etc.), where data analysis is frequently used. It is freely available on the Internet. Here, we wish to present the main characteristics of ADE-4, from three points of view: (1) user interface, (2) data analysis methods, and (3) graphical display capabilities. 2. The user interface ADE-4 is made of a series of small independent modules that can be used independently from each other or launched through a HyperCard interface. There are two categories of modules: computational modules and graphical ones, with a slightly different user interface. 2.1 Computational modules Computational modules present an "Options" menu that enables the user to choose between the possibilities available in the module. For example, in the PCA (principal component analysis) module, it is possible to choose between PCA on the correlation or on the covariance matrix (Figure 1A). Fig. 1. Elements of the user interface of computational modules. A: the Options menu serves to choose the desired method. B: the main dialog window allows the user to type in the parameters of the analysis (data file name, weighting options, etc.). When the user clicks on the hand icon buttons, special dialog windows (C) make the selection of these parameters easier. During time-consuming operations (e.g., computation of the eigenvalues and eigenvectors of large matrices), a progress window (D) shows the state of the program. At the end of computations, a text report is generated (E) displaying the results of the analysis. According to the option selected by the user, a dialog window is displayed, showing the parameters required for the execution of the analysis (Figure 1B). The user can click on the buttons with a hand icon to set the values of these parameters through standard dialog windows (Figure 1C). The OK button starts the computations, and the Quit button quits the module. A progress window shows the computation steps while the analysis is being performed (Figure 1D), and a text report that contains a description of input and output files and of analysis results is created (Figure 1E). 2.2 Graphical modules Graphical modules also present an "Options" menu to choose the type of graphic. They have an additional "Windows" menu (Figure 2A). This menu can be used to choose one of the three parameter windows that allow an interactive definition of the graphical parameters. The user can thus freely modify the values of all the parameters, and the resulting graphic is displayed in the "Graphics" window. Fig. 2. Elements of the user interface of graphical modules. A: the Options menu serves to choose the type of graphic, and the Windows menu can be used to choose one of the three parameter windows (B, C, D and E). The main dialog window (B) works in the same way as in computational modules. The "Min/Max" window (C) allows the user to set the values of numerous graphic parameters, particularly the minimum and maximum of abscissas and ordinates, the number of horizontal and vertical graphics (for graphic collections), the graphical window width and height, legend scale options, etc. The "Row & Col. selection" window (D and E) has two states (File and Keyboard), corresponding to the way of entering the selection of rows making up a collection of graphics. 
In the File state (D), the user can choose a file containing the qualitative variable whose categories define the groups of rows, and in the Keyboard state (E), the user must type in the numbers of the rows belonging to each elementary graphic. The main dialog window (Figure 2B) allows the user to choose the input file and related parameters. The Copy, Save and Print buttons perform the corresponding actions on the graphic currently displayed in the "Graphics" window. The Draw button triggers the drawing, and the Quit button quits the module. In the "Min/Max" window (Figure 2C), the user can set the values of the minimum and maximum of abscissas and ordinates, the number of horizontal and vertical graphics (for graphic collections), the graphical window width and height, legend scale options, etc. In the "Row & Col. selection" window (Figure 2D and 2E), the user can choose the columns and the rows of the data set that will be used to make each elementary graphic of a collection. The rows can be chosen either through a selection file containing a qualitative variable whose categories define the groups corresponding to each graphic (Figure 2D), or by typing the numbers of the rows belonging to each group (Figure 2E). 2.3 HyperCard and WinPlus interface A HyperCard (Macintosh version) or WinPlus (Windows 95 version) stack (ADE*Base) can be used to launch the modules. This stack also displays the files that are in the current data folder, and provides a way to navigate through two other stacks: ADE*Data and ADE*Biblio. ADE*Data is a library of c. 150 example data sets of varying size that can be used for trial runs of data analysis methods. Most of these data sets come from environmental studies. ADE*Biblio is a bibliography stack with more than 800 bibliographic references on the statistical methods and data sets available in ADE-4. 3. Data analysis methods The data analysis methods available in ADE-4 will not be presented in detail here, due to lack of space. They are based on the duality diagram (Cailliez and Pagès, 1976; Escoufier, 1987). In many modules, Monte-Carlo tests (Good, 1993, chapter 13) are available to study the significance of observed structures. 3.1 One table methods Three basic multivariate analysis methods can be applied to one-table data sets (Dolédec and Chessel, 1991). The corresponding modules are the PCA (principal component analysis) module for quantitative variables, the COA (correspondence analysis) module for contingency tables (Greenacre, 1984), and the MCA (multiple correspondence analysis) module for qualitative (discrete) variables (Nishisato, 1980; Tenenhaus and Young, 1985). A fourth module, HTA (homogeneous table analysis), is intended for homogeneous tables, i.e., tables in which all the values come from the same variable (for example, a toxicity table containing the toxicity of some chemical compounds toward several animal species; see Devillers and Chessel, 1995). The DDUtil (duality diagram utilities) module provides several interpretation aids that can be used with any of the methods available in the first four modules, namely: biplot representation (Gabriel, 1971, 1981), inertia analysis for rows and columns (particularly for COA, see Greenacre, 1984), supplementary rows and/or columns (Lebart et al., 1984), and data reconstitution (Lebart et al., 1984). 
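To make these one-table methods concrete, the core computation behind the PCA module (and the biplot coordinates provided by DDUtil) can be sketched in a few lines. This is not ADE-4 code (ADE-4 is a compiled stand-alone application); it is an illustrative Python/numpy sketch in which all function and variable names are our own:

```python
import numpy as np

def pca(table, standardize=True):
    """Correlation-matrix PCA if standardize=True, covariance-matrix PCA otherwise.

    Returns (eigenvalues, row_scores, column_loadings), i.e. the two sets of
    coordinates that a biplot superimposes.
    """
    X = np.asarray(table, dtype=float)
    X = X - X.mean(axis=0)            # center each column (variable)
    if standardize:
        X = X / X.std(axis=0)         # unit variance -> correlation-matrix PCA
    n = X.shape[0]
    # The SVD of the scaled table yields the same axes as an eigen-decomposition
    # of the correlation/covariance matrix, but is numerically more stable.
    U, s, Vt = np.linalg.svd(X / np.sqrt(n), full_matrices=False)
    eigenvalues = s ** 2              # variance carried by each principal axis
    row_scores = np.sqrt(n) * U * s   # coordinates of the rows (samples)
    column_loadings = Vt.T * s        # coordinates of the columns (variables)
    return eigenvalues, row_scores, column_loadings

# Toy table: 5 samples x 3 variables.
eigvals, rows, cols = pca([[1.0, 2.0, 0.5], [2.0, 1.5, 0.7], [3.0, 0.9, 1.1],
                           [4.0, 0.4, 1.9], [5.0, 0.1, 2.5]])
print(eigvals.sum())   # ~3.0: in correlation PCA the eigenvalues sum to the number of variables
```

In a correlation-matrix PCA the eigenvalues sum to the number of variables, which provides a quick sanity check on any implementation.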
The PCA module offers several options, corresponding to different duality diagrams: correlation matrix PCA, covariance matrix PCA, non-centered PCA (Noy-Meir, 1973), decentered PCA, partially standardized PCA (Bouroche, 1975), and within-groups standardized PCA (Dolédec and Chessel, 1987). See Okamoto (1972) for a discussion of different types of PCA. The COA module offers six options for correspondence analysis (CA): classical CA, reciprocal scaling (Thioulouse and Chessel, 1992), row weighted CA, internal CA (Cazes et al., 1988), decentered CA (Dolédec et al., 1995). The MCA module offers two options for the analysis of tables made of qualitative variables: Multiple Correspondence Analysis (Tenenhaus and Young, 1985) and Fuzzy Correspondence Analysis (Chevenet et al., 1994; Castella and Speight, 1996). 3.2 One table with spatial structures Environmental data very often include spatial information (e.g., the spatial location of sampling sites), and this information is difficult to introduce in classical multivariate analysis methods. The Distances module provides a way to achieve this, by using a neighboring relationship between sites. See Lebart (1969) for a presentation of this approach and Thioulouse et al. (1995) for a general framework based on variance decomposition formulas. Also available in this module are the Mantel test (Mantel, 1967), principal coordinate analysis (Manly, 1994), and the minimum spanning tree (Kevin and Whitney, 1972). 3.3 One table with groups of rows When a priori groups of individuals exist in the data table, the Discrimin module can be used to perform a discriminant analysis (DA, also called canonical variate analysis), and between-groups or within-groups analyses (Dolédec and Chessel, 1989). These three methods can be performed after a PCA, a COA, or an MCA, leading to a great variety of analyses. For example, in the case of DA, we can obtain, after a PCA, the classical DA (Mahalanobis, 1936; Tomassone et al., 1988); after a COA, the correspondence DA; and after an MCA, the DA on qualitative variables (Saporta, 1975; Persat et al., 1985). Monte-Carlo tests are available to test the significance of the between-groups structure. 3.4 Linear regression Three modules provide several linear regression methods. These modules are UniVarReg (for univariate regression), OrthoReg (for orthogonal regression), and LinearReg (for linear regression). Here also, Monte-Carlo tests are available to test the results of these methods. The UniVarReg module deals with two regression models: polynomial regression and the lowess method (locally weighted regression and smoothing scatterplots; Cleveland, 1979; Cleveland and Devlin, 1988). The OrthoReg module performs multiple linear regression in the particular case of orthogonal explanatory variables. This is useful for example in PCR (principal component regression; Næs, 1984), or in the case of projection onto the subspace spanned by a series of eigenvectors (Thioulouse et al., 1995). The LinearReg module performs the usual multiple linear regression (MLR), and first generation PLS (partial least squares) regression (Lindgren, 1994). See also Geladi and Kowalski (1986) and Höskuldsson (1988) for more details on PLS regression. 3.5 Two-tables coupling methods One module is dedicated to two-tables coupling methods based on projection onto vector subspaces (Takeuchi et al., 1982). It has eleven options that perform complex operations. The first six options allow the user to build orthonormal bases onto which the projections can be made. 
The last five options provide several two-tables coupling methods, mainly PCAIV (PCA on Instrumental Variables) methods. The "PCA on Instrumental Variables" option can be used with any statistical triplet from the PCA, COA and MCA modules, which corresponds for example to methods like CAIV (correspondence analysis on instrumental variables; Lebreton et al., 1988a, 1988b, 1991) or CCA (canonical correspondence analysis; ter Braak, 1987a, 1987b). 3.6 Coinertia analysis method There are two modules for coinertia analysis: the Coinertia module, which performs the usual coinertia analysis (Chessel and Mercier, 1993; Dolédec and Chessel, 1994; Thioulouse and Lobry, 1995; Cadet et al., 1994), and the RLQ module, which performs a three-table generalization of coinertia analysis (Dolédec et al., 1996). 3.7 K-table analysis methods Collections of tables (three-way tables, or k-tables) can be analyzed with the STATIS module, which features three distinct methods: STATIS (Escoufier, 1980; Lavit, 1988; Lavit et al., 1994), partial triadic analysis (Thioulouse and Chessel, 1987), and the analysis of a series of contingency tables (Foucart, 1978). The KTabUtil module provides a series of three-way table manipulation utilities: k-table transposition, sorting, centering, standardization, etc. Two generalisations of k-table coinertia analysis are also available. 4. Graphical representations ADE-4 features 14 graphical modules that fall broadly into four categories: one dimensional graphics, curves, scatters, and geographical maps. Most modules can automatically draw collections of graphics, corresponding to the columns of the data file (one graphic for each variable), to groups of rows (one graphic for each group), or to both (one graphic for each group and for each variable). This feature is particularly useful in multivariate analysis, where one always deals with many variables and/or groups of samples. Moreover, several modules have two versions, according to the way they treat the collections: elementary graphics can be either simply put side by side, or superimposed. Superimposition is done in the modules whose name ends with the "Class" suffix. 4.1 One dimensional graphics The Graph1D module is intended for one dimensional data representation, such as the values of one factor score. It has two options: histograms and labels. The Histograms option computes the distribution of the values into classes and draws the corresponding histogram, optionally with the fitted Gauss curve superimposed over it. The Labels option simply draws regularly spaced labels (that can be chosen by the user) vertically or horizontally along an axis; these labels are connected by lines to the corresponding coordinates on the axis. The columns and groups of rows corresponding to each elementary graphic of a collection can be chosen by the user. The Graph1DClass module is also intended for representing one dimensional data, but, as the "Class" suffix indicates, when there are groups of rows and when the corresponding graphics must be superimposed instead of placed side by side. Because superimposed histograms and labels would be unreadable, this module is used only for Gauss curves. Figure 3 shows an example of such a graphic: each elementary graphic contains a collection of superimposed Gauss curves. Each curve corresponds to one group of rows in the data table. The successive elementary graphics correspond to several partitions of the set of rows (i.e., to several qualitative variables). 
All these parameters are interactively set by the user. Fig. 3. Example of a graphic drawn with the Graph1DClass module: the eleven elementary graphics (numbered 1 to 11) correspond to eleven qualitative variables. In each elementary graphic, the Gauss curves represent the distribution of the samples belonging to the categories of each qualitative variable. For example, graphic number seven corresponds to the seventh qualitative variable, which has two categories. The two Gauss curves represent the mean and the variance of the samples belonging to each of these two categories. Only one quantitative variable of the data table is represented here. In this module, elementary graphics of a collection corresponding to groups of rows are superimposed, while graphics corresponding to the qualitative variables are placed side by side. Graphics corresponding to other columns of the data table (quantitative variables) would also be placed side by side. 4.2 Curves The Curves module draws curves, i.e., series of values (ordinates) that are plotted along an axis (abscissas). It features four options: Lines, Bars, Steps and Boxes. Here also, columns and groups of rows can correspond to the elementary graphics of a collection. Boxes are simply the classical "box and whiskers" display, showing the median, quartiles, minimum and maximum. The CurveClass module acts in the same way as the Curves module, except that the curves defined by the qualitative variable are superimposed in the same elementary graphic instead of being distributed among several graphics. The CurveModels module allows the user to fit lowess and polynomial models. This module automatically fits a model for each elementary graphic in a collection. 4.3 Scatters The most classical graphic in multivariate analysis is the factor map. The Scatters module is designed to draw such graphics, with several options. For all options, the user can interactively select the columns and the groups of rows that will be used to draw the elementary graphics that make up a collection. The simplest option is Labels. For each point, it only draws a character string (label) on the factor map (Figure 4A). The Trajectories option underlines the fact that the elements are ordered (for example in the case of time series) by linking the points with a line (Figure 4B). The Stars option computes the center of gravity of each group of points and draws lines connecting each element to its center of gravity (Figure 4C). The Ellipses option computes the means, variances and covariance of each group of points on both axes, and draws an ellipse with these parameters: the ellipse is centered on the means, its width and height are given by the variances, and the covariance sets the slope of the main axis of the ellipse (Figure 4D). The Convex hulls option simply draws the convex hull of each set of points (Figure 4E). Ellipses and convex hulls are labeled by the number of the group. The Values option is slightly more complex. For each point on the factor map, it draws a circle or a square whose size is proportional to a series of values (circles are for positive values, and squares for negative ones). These series of values can be chosen as the columns of a separate file (Figure 4F). This technique is particularly useful to represent data values on the factor map. Lastly, the "Match two scatters" option can be used when two sets of scores are available for the same points (this is frequently the case in co-inertia analysis and other two-table coupling methods). 
It draws an arrow starting from the coordinates of the point in the first set and ending at the coordinates in the second set (Figure 4G). Fig. 4. Examples of graphics drawn with the Scatters module: labels (A), trajectories (B), stars (C), ellipses (D), convex hulls (E), circles and squares (F: circles are for positive values, squares for negative ones), and matching of two scatters (G). The coordinates of the points are given by two columns chosen in a table. In this module, elementary graphics of a collection correspond to groups of rows, except for the circles and squares option, in which they can correspond to groups of rows and also to the columns of the file containing the values to which circle and square sizes are proportional (in this case, if there are k groups and p columns, the number of elementary graphics will be equal to k × p). The ScatterClass module incorporates the Labels, Trajectories, Stars, Ellipses and Convex hulls options. It superimposes the elementary graphics corresponding to a collection. Figure 5 shows an example where eleven elementary graphics (corresponding to eleven qualitative variables) are represented. In each graphic, several convex hulls (corresponding to groups of points) are superimposed. The points themselves are not drawn. Fig. 5. Example of a graphic drawn with the ScatterClass module. As in Figure 3, the eleven graphics correspond to eleven qualitative variables. The convex hulls containing the points belonging to the categories of the qualitative variable are drawn in each elementary graphic. The dots corresponding to each point have not been drawn. Like the Scatters module, ScatterClass can also draw labels, trajectories, stars, and ellipses. The last module for scatter diagrams is ADEScatters (Thioulouse, 1996). It is a dynamic graphic module, in which the user can perform several actions that help in interpreting the factor map: searching, zooming, selection of sets of points, and interactive display of data values on the factor map. 4.4 Cartography modules Four cartography modules are available in ADE-4. They can be used to map either the initial (raw or transformed) data, or the factor scores resulting from a multivariate analysis. The Maps module has three options: the Labels option simply draws a label on the map at each sampling point, the Values option draws circles (positive values) and squares (negative values) with sizes proportional to the values of the data file, and the Neighboring graph option draws the edges of a neighboring relationship between the points. Fig. 6. Examples of graphics drawn with the Levels and the Areas modules. The Levels module (A) draws contour curves with gray level patterns that indicate the curve value. The Areas module (B) draws gray level polygons on a geographical map. For both modules, collections correspond to the columns of the table containing the values displayed on the map (gray levels). Here, the seven contour curve maps and the seven gray level area maps correspond to seven columns of the data file. The Levels module draws contour curves on the map (Figure 6A). It can be used with sampling points having any distribution on the map: an interpolated regular grid is computed before drawing. Contour curves are computed by a lowess regression over a number of neighbors chosen by the user (see Thioulouse et al., 1995 for a description and an example of the use of this technique in environmental data analysis). 
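The lowess fit used by the Levels module (and by the CurveModels and UniVarReg modules) can be sketched as follows. This is a simplified, single-pass version in Python/numpy, written for illustration only; the published method (Cleveland, 1979) also includes robustness iterations, which are omitted here:

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Simplified lowess: one pass of locally weighted linear regression
    with tricube weights, evaluated at each observed x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))                 # neighbors used in each local fit
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]               # the k nearest neighbors of x[i]
        dmax = d[idx].max()
        w = (1 - (d[idx] / dmax) ** 3) ** 3 if dmax > 0 else np.ones(k)
        # Weighted least-squares line fitted through the neighborhood.
        A = np.vstack([np.ones(k), x[idx]]).T * np.sqrt(w)[:, None]
        b = y[idx] * np.sqrt(w)
        (a0, a1), *_ = np.linalg.lstsq(A, b, rcond=None)
        fitted[i] = a0 + a1 * x[i]
    return fitted

x = np.linspace(0, 10, 50)
y = np.sin(x) + np.random.default_rng(0).normal(0, 0.2, 50)
smooth = lowess(x, y)                         # smoothed curve, same length as x
```

The Levels module applies the same idea in two dimensions: local fits over the nearest sampling points are used to interpolate the regular grid from which the contour curves are drawn.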
The Areas module draws maps with gray level polygons (Figure 6B), starting from a file containing the coordinates of the vertices of each of the areas making up the map, and a second file corresponding to the gray levels. 5. Conclusion The computing power of today's microcomputers is such that the time needed to perform the computations of multivariate analysis methods is no longer a limiting factor. The time needed to compute all the eigenvalues and eigenvectors of a 100 x 100 matrix is just a few seconds. The limiting factor is rather the amount of field work needed to collect the data. This fact has several consequences for multivariate analysis software packages. Monte-Carlo-like methods (permutation tests) can, and should, be used much more widely (Good, 1993, p. 8). Moreover, it is possible to use an interactive approach to multivariate analysis, trying several methods in just a few minutes. But this implies an easy-to-use graphical user interface, particularly for graphic programs, that allows the user to explore many ways of displaying data structures (data values themselves or factor scores). Trend surface analysis and contour curves are valuable tools for this purpose. We also need graphical software able to display simultaneously all the variables of the data set, and several groups of samples, corresponding for example to the experimental design (e.g., samples coming from several regions). We have tried to address these points in ADE-4. One of the benefits of having a systematic approach to elementary graphic collections in the Graph1D, Graph1DClass, Curves, CurveClass, Scatters, and ScatterClass modules is the possibility of automatically drawing all the graphics of a k-table analysis (i.e., the graphics corresponding to the analyses of all the elementary tables). ADE-4 can be obtained freely by anonymous FTP to biom3.univ-lyon1.fr, in the /pub/mac/ADE/ADE4 directory. Previous versions (up to version 3.7) have already been distributed to many research laboratories in France and other countries. A WWW (world-wide web) documentation and downloading page is available at http://biomserv.univ-lyon1.fr/ADE-4.html, which also provides access to updates and user support through the ADEList mailing list. A subset of ADE-4 can be used on line on the Internet, through a WWW user interface called NetMul (Thioulouse and Chevenet, 1996) at the following address: http://biomserv.univ-lyon1.fr/NetMul.html. ADE-4 development was supported by the "Programme Environnement" of the French National Center for Scientific Research (CNRS), under the "Méthodes, Modèles et Théories" contract. References Bouroche, J.M. (1975). Analyse des données ternaires: la double analyse en composantes principales. Thèse de 3ème cycle, Université de Paris VI. Cadet, P., Thioulouse, J. and Albrecht, A. (1994). Relationships between ferrisol properties and the structure of plant parasitic nematode communities on sugarcane in Martinique (French West Indies). Acta Œcologica, 15, 767-780. Cailliez, F. and Pagès, J.P. (1976). Introduction à l'analyse des données. SMASH: Paris. Castella, E. and Speight, M.C.D. (1996). Knowledge representation using fuzzy coded variables: an example based on the use of Syrphidae (Insecta, Diptera) in the assessment of riverine wetlands. Ecological Modelling, 85, 13-25. Cazes, P., Chessel, D. and Dolédec, S. (1988). L'analyse des correspondances internes d'un tableau partitionné : son usage en hydrobiologie. Revue de Statistique Appliquée 36, 39-54. Chessel, D. and Mercier, P. (1993). 
Couplage de triplets statistiques et liaisons espèces-environnement. In Biométrie et Environnement, Lebreton, J.D. and Asselain, B. (eds), 15-44. Paris: Masson. Chevenet, F., Dolédec, S. and Chessel, D. (1994). A fuzzy coding approach for the analysis of long-term ecological data. Freshwater Biology 31, 295-309. Cleveland, W.S. (1979). Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association 74, 829-836. Cleveland, W.S. and Devlin, S.J. (1988). Locally weighted regression: an approach to regression analysis by local fitting. Journal of the American Statistical Association 83, 596-610. Devillers, J. and Chessel, D. (1995). Can the enucleated rabbit eye test be a suitable alternative for the in vivo eye test? A chemometrical response. Toxicology Modelling, 1, 21-34. Dolédec, S. and Chessel, D. (1987). Rythmes saisonniers et composantes stationnelles en milieu aquatique I- Description d'un plan d'observations complet par projection de variables. Acta Œcologica, Œcologia Generalis 8, 403-426. Dolédec, S. and Chessel, D. (1989). Rythmes saisonniers et composantes stationnelles en milieu aquatique II- Prise en compte et élimination d'effets dans un tableau faunistique. Acta Œcologica, Œcologia Generalis 10, 207-232. Dolédec, S. and Chessel, D. (1991). Recent developments in linear ordination methods for environmental sciences. Advances in Ecology, India 1, 133-155. Dolédec, S. and Chessel, D. (1994). Co-inertia analysis: an alternative method for studying species-environment relationships. Freshwater Biology 31, 277-294. Dolédec, S., Chessel, D., ter Braak, C.J.F. and Champely, S. (1996). Matching species traits to environmental variables: a new three-table ordination method. Environmental and Ecological Statistics 3, 143-166. Dolédec, S., Chessel, D. and Olivier, J.M. (1995). L'analyse des correspondances décentrée: application aux peuplements ichtyologiques du Haut-Rhône. Bulletin Français de Pêche et de Pisciculture 336, 29-40. Escoufier, Y. (1980). L'analyse conjointe de plusieurs matrices de données. In Biométrie et Temps, Jolivet, M. (ed), 59-76. Paris: Société Française de Biométrie. Escoufier, Y. (1987). The duality diagram: a means of better practical applications. In Developments in numerical ecology, Legendre, P. and Legendre, L. (eds), 139-156. NATO Advanced Institute, Series G. Berlin: Springer Verlag. Foucart, T. (1978). Sur les suites de tableaux de contingence indexés par le temps. Statistique et Analyse des données 2, 67-84. Gabriel, K.R. (1971). The biplot graphical display of matrices with application to principal component analysis. Biometrika 58, 453-467. Gabriel, K.R. (1981). Biplot display of multivariate matrices for inspection of data and diagnosis. In Interpreting multivariate data, Barnett, V. (ed), 147-174. New York: John Wiley and Sons. Geladi, P. and Kowalski, B.R. (1986). Partial least-squares regression: a tutorial. Analytica Chimica Acta, 185, 1-17. Good, P. (1993). Permutation tests. New York: Springer-Verlag. Greenacre, M. (1984). Theory and applications of correspondence analysis. London: Academic Press. Höskuldsson, A. (1988). PLS regression methods. Journal of Chemometrics 2, 211-228. Kevin, V. and Whitney, M. (1972). Algorithm 422. Minimal Spanning Tree [H]. Communications of the Association for Computing Machinery 15, 273-274. Lavit, Ch. (1988). 
Analyse conjointe de tableaux quantitatifs. Paris: Masson. Lavit, Ch., Escoufier, Y., Sabatier, R. and Traissac, P. (1994). The ACT (STATIS method). Computational Statistics and Data Analysis 18, 97-119. Lebart, L. (1969). Analyse statistique de la contiguïté. Publication de l'Institut de Statistique de l'Université de Paris 28, 81-112. Lebart, L., Morineau, L. and Warwick, K.M. (1984). Multivariate descriptive analysis: correspondence analysis and related techniques for large matrices. New York: John Wiley and Sons. Lebreton, J.D., Chessel, D., Prodon, R. and Yoccoz, N. (1988a). L'analyse des relations espèces-milieu par l'analyse canonique des correspondances. I. Variables de milieu quantitatives. Acta Œcologica, Œcologia Generalis 9, 53-67. Lebreton, J.D., Richardot-Coulet, M., Chessel, D. and Yoccoz, N. (1988b). L'analyse des relations espèces-milieu par l'analyse canonique des correspondances. II. Variables de milieu qualitatives. Acta Œcologica, Œcologia Generalis 9, 137-151. Lebreton, J.D., Sabatier, R., Banco, G. and Bacou, A.M. (1991). Principal component and correspondence analyses with respect to instrumental variables: an overview of their role in studies of structure-activity and species-environment relationships. In Applied Multivariate Analysis in SAR and Environmental Studies, Devillers, J. and Karcher, W. (eds), 85-114. Dordrecht: Kluwer Academic. Lindgren, F. (1994). Third generation PLS. Some elements and applications. Research Group for Chemometrics, Department of Organic Chemistry, 1-57. Umeå: Umeå University. Mahalanobis, P.C. (1936). On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India 2, 49-55. Manly, B.F. (1994). Multivariate Statistical Methods. A primer. London: Chapman and Hall. Mantel, N. (1967). The detection of disease clustering and a generalized regression approach. Cancer Research 27, 209-220. Næs, T. (1984). Leverage and influence measures for principal component regression. Chemometrics and Intelligent Laboratory Systems, 5, 155-168. Nishisato, S. (1980). Analysis of categorical data: dual scaling and its applications. London: University of Toronto Press. Noy-Meir, I. (1973). Data transformations in ecological ordination. I. Some advantages of non-centering. Journal of Ecology 61, 329-341. Okamoto, M. (1972). Four techniques of principal component analysis. Journal of the Japanese Statistical Society 2, 63-69. Persat, H., Nelva, A. and Chessel, D. (1985). Approche par l'analyse discriminante sur variables qualitatives d'un milieu lotique: le Haut-Rhône français. Acta Œcologica, Œcologia Generalis 6. Saporta, G. (1975). Liaisons entre plusieurs ensembles de variables et codage de données qualitatives. Thèse de 3ème cycle, Université Paris VI. Takeuchi, K., Yanai, H. and Mukherjee, B.N. (1982). The foundations of multivariate analysis. A unified approach by means of projection onto linear subspaces. New York: John Wiley and Sons. Tenenhaus, M. and Young, F.W. (1985). An analysis and synthesis of multiple correspondence analysis, optimal scaling, dual scaling, homogeneity analysis and other methods for quantifying categorical multivariate data. Psychometrika 50, 91-119. ter Braak, C.J.F. (1987a). The analysis of vegetation-environment relationships by canonical correspondence analysis. Vegetatio 69, 69-77. ter Braak, C.J.F. (1987b). Unimodal models to relate species to environment. Wageningen: Agricultural Mathematics Group. Thioulouse, J. and Chessel, D. (1987). Les analyses multi-tableaux en écologie factorielle. I. De la typologie d'état à la typologie de fonctionnement par l'analyse triadique. Acta Œcologica, Œcologia Generalis 8, 463-480. 
Thioulouse, J. and Chessel, D. (1992). A method for reciprocal scaling of species tolerance and sample diversity. Ecology 73, 670-680. Thioulouse, J., Chessel, D. and Champely, S. (1995). Multivariate analysis of spatial patterns: a unified approach to local and global structures. Environmental and Ecological Statistics, 2, 1-14. Thioulouse, J. and Lobry, J.R. (1995). Co-inertia analysis of amino-acid physico-chemical properties and protein composition with the ADE package. Computer Applications in the Biosciences, 11, 3. Thioulouse, J. and Chevenet, F. (1996). NetMul, a World-Wide Web user interface for multivariate analysis software. Computational Statistics and Data Analysis 21, 369-372. Thioulouse, J. (1996). Towards better graphics for multivariate analysis: the interactive factor map. Computational Statistics 11, 11-21. Tomassone, R., Danzard, M., Daudin, J.-J. and Masson, J.P. (1988). Discrimination et classement. Paris: Masson.
{"url":"http://pbil.univ-lyon1.fr/ADE-4/article_statcomp1997.php?lang=eng","timestamp":"2014-04-16T22:30:08Z","content_type":null,"content_length":"45045","record_id":"<urn:uuid:74f92e8e-2576-4591-9d1e-e337e23ef29e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help Use the radius and tangent theorem to find the derivative of: $f(x)=7-\sqrt{2x-x^2}$ Thanks in advance I believe that the "radius and tangent theorem" is the theorem from geometry that a line tangent to a circle is perpendicular to the radius of the circle to the point of tangency. Since the function $f(x)=7-\sqrt{2x-x^2}$ is the lower half of the circle $(x-1)^2+(y-7)^2=1$, the circle of radius 1 and center (1,7), a point (x,f(x)) is the endpoint of a radius whose other endpoint is the center (1,7), and thus has slope $\frac{7-f(x)}{1-x}$. Therefore the line tangent to our circle at that point is perpendicular to that radius, and so has opposite reciprocal slope; the tangent line at (x,f(x)) has slope $-\frac{1-x}{7-f(x)}=\frac{x-1}{7-f(x)}$. But the slope of the tangent line is f'(x), so one just plugs $f(x)=7-\sqrt{2x-x^2}$ into $f'(x)=\frac{x-1}{7-f(x)}$; since $7-f(x)=\sqrt{2x-x^2}$, this gives $f'(x)=\frac{x-1}{\sqrt{2x-x^2}}$. --Kevin C.
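For anyone who wants to double-check the geometric argument against a direct computation, here is a quick symbolic check (an illustration only, assuming the sympy library is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = 7 - sp.sqrt(2*x - x**2)

direct = sp.diff(f, x)           # derivative computed directly by the chain rule
geometric = (x - 1) / (7 - f)    # slope predicted by the radius-tangent argument

# Should print 0, confirming the two expressions agree on the domain 0 < x < 2.
print(sp.simplify(direct - geometric))
```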
{"url":"http://mathhelpforum.com/calculus/121127-derivative.html","timestamp":"2014-04-18T04:00:28Z","content_type":null,"content_length":"37502","record_id":"<urn:uuid:5e8a0249-1fea-4c08-8cb8-ba91c619908f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
You can find some really good resources for math test prep in the used bookstores in a college town. Some examples that I like are: (1) the Humongous Book of ______________ Problems (fill in the blank with your math topic); (2) the REA Problem Solvers series; and (3) the Schaum's Outlines. If you don't live near a college town it might be worth a Saturday trip just to buy books. Alternatively, all of these are available (used) through the Amazon Marketplace sellers at really low prices. You should preview each title of these book series that you might be considering to be sure you like the author's style. Each one is different. You may like one series' treatment of Pre-Calc but prefer a different series for Calculus. So how do you use these books? They are an alternative resource for explanations of basic concepts and problem solving techniques. You should use them as 'hint mills' and sources of problems to...
{"url":"http://www.wyzant.com/resources/users/78494910/Blogs","timestamp":"2014-04-20T07:38:24Z","content_type":null,"content_length":"33504","record_id":"<urn:uuid:33fb4ef0-e94e-4d22-828c-6da5b39b52a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Inside Rensselaer, December 14, 2012: Joyce McLaughlin Named Inaugural Fellow of American Mathematical Society Joyce McLaughlin Named Inaugural Fellow of American Mathematical Society Nonlinear analysis expert Joyce McLaughlin, Ford Foundation Professor in the Department of Mathematical Sciences, has been named an inaugural fellow of the American Mathematical Society (AMS). “In developing mathematical models that improve biomedical imaging of tissues, Joyce has made a direct contribution to our society, and she is richly deserving of this recognition by her peers.”—Laurie Leshin “We congratulate Dr. McLaughlin on being chosen as an inaugural fellow of the American Mathematical Society. In developing mathematical models that improve biomedical imaging of tissues, Joyce has made a direct contribution to our society, and she is richly deserving of this recognition by her peers,” said Laurie Leshin, dean of the School of Science at Rensselaer. “We are honored to count her among our growing list of society fellows in the School of Science.” The AMS recognized McLaughlin for her distinguished contributions to the creation, exposition, advancement, communication, and utilization of mathematics. As part of the honor, McLaughlin will be part of the first group of fellows of the AMS to be officially inducted during a ceremony as part of the Joint Mathematics Meeting in San Diego on Jan. 11, 2013. McLaughlin’s main research area is in nonlinear analysis as applied to parameter identification in inverse problems. “These are problems where the data one has is very indirectly related to the physical or biological property that is to be determined, and usually is imaged. So it is essential to utilize the mathematical model of the physical process that creates the data in order to create an image,” McLaughlin said. “These problems are ill-posed; that is, small changes in the data can produce large changes in the image, so careful mathematical analysis is needed in order to create an accurate image.” McLaughlin first became known for her work in inverse spectral theory, in which natural frequencies and/or subsets of mode shapes, such as nodal sets, of a vibrating system are used to identify physical properties. Her work in this area was presented at the International Congress of Mathematicians in Zurich in 1994. More recently she has become known for her work in biomechanical imaging of tissue. The physical process that produces the data is the dynamic movement of tissue at a low amplitude of displacement (on the order of microns), and the model for that process is used to create images of biomechanical tissue properties. These images are being utilized, together with ultrasound or MRI images, as a new medical diagnostic tool. McLaughlin is an inaugural fellow of the Society for Industrial and Applied Mathematics, a member of the Scientific Board for the American Institute of Mathematics, and a winner of the AWM/SIAM 2004 Kovalevsky Lecture and Prize. McLaughlin received her bachelor’s degree from Kansas State, her master’s degree from the University of Maryland, College Park, and her doctoral degree in applied mathematics from the University of California, Riverside. For additional information on McLaughlin, see: http://www.math.rpi.edu/ms_faculty/profile/mclaughlin_j.html. For more information on the AMS fellowship, see: http://www.ams.org/profession/ams-fellows
{"url":"http://www.rpi.edu/about/inside/issue/v6n19/math.html","timestamp":"2014-04-19T11:58:27Z","content_type":null,"content_length":"12304","record_id":"<urn:uuid:9bf8cb80-bea1-4bb3-bffe-d7d3f19ac0c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Constraint validity checking - Patent 6,099,575
Inventors: Hardin; Ronald H. (Pataskala, OH); Kurshan; Robert Paul (New York, NY)
Assignee: Lucent Technologies Inc. (Murray Hill, NJ)
Application: 09/102,850; Filed: June 23, 1998; Date Issued: August 8, 2000
Primary Examiner: Teska; Kevin J.
Assistant Examiner: Jones; Hugh
U.S. Class: 703/13; 703/17; 703/22
Field of Search: 395/500.38; 395/500.34; 395/500.43; 703/13; 703/17; 703/22
International Class: G06F 17/50
U.S. Patent Documents: 5163016; 5615137; 5768498; 5901073
Other References: R.P. Kurshan, Computer Aided Verification of Coordinating Processes, Princeton University Press, 1994.
Abstract: A method and apparatus for efficiently determining whether a set of constraints input to a verification tool are mutually contradictory or overconstraining. A set of constraints are mutually contradictory or overconstraining when they define values for system-model variables and/or inputs that are inconsistent with each other at a given state or group of states of a system-model state machine. It has been found that when a set of constraints assign inconsistent values at a given state or group of states of the system-model state space, the verification tool will treat the given state or group of states as a so-called non-returnable state. That is, the verification tool will not recognize any paths from the given state or group of states to a set of reset states. As a result, instead of having to analyze the values defined by every constraint input to the verification tool, it can be determined whether a set of constraints input to the verification tool are mutually contradictory or overconstraining by analyzing only the constraints enabled at the non-returnable states. Claim: We claim: 1. A method carried out on a computer of verifying properties of a system, the method comprising the steps of: applying to said system input signals that are subject to imposed constraints, using a verification tool, identifying states of said system that are non-returnable states, and when said step of using a verification tool determines that there exist non-returnable states in said system, outputting an alert signal that informs an operator that said imposed constraints are mutually inconsistent or that there is a design error in said system, and conversely, when said step of using said verification tool determines that said imposed constraints are mutually consistent and no error determinations are made in the course of said applying said input signals, outputting a signal to said operator that indicates consistency of said properties. 2. 
A method carried out in a computer, for verifying properties of a system comprising the steps of: using a verification tool imposing a set of constraints on said system, that constrain input signals applied to said system and/or internal signals of said system, and performing a forward search beginning from a designated set of reset states to identify a set of reachable states, said set of forward search states being states that said system reaches by inputting at each state of said system a complete set of inputs, subject to said constraints; using said verification tool, imposing said set of constraints on said system and performing a reverse search beginning from a designated set of reset states, by identifying states of said system from which said verification tool can reach a set of reset states by inputting at each state said complete constrained set of inputs, subject to said constraints, thereby identifying a set of reverse search states; identifying a set of non-returnable reachable states of said system, said set of non-returnable reachable states being the difference between the set of forward search states and the set of reverse search states; and analyzing said non-returnable reachable states to identify mutually inconsistent constraints or over-constraining constraints in said set of constraints, to identify constraint restrictions that, when modified, reduce or eliminate said set of non-returnable search states. 3. The method of claim 2 further comprising the step of identifying constraints enabled in a non-returnable state, that reflect assumptions that are inconsistent with each other, and changing those assumptions to eliminate inconsistencies, thereby converting said mutually inconsistent constraints into mutually consistent constraints. 4. The method of claim 1 wherein said system is embodied in a software-controlled processor. 5. A method carried out in a computer comprising the steps of: applying input signals to a system, to identify a set of non-returnable reachable states of said system, said set of non-returnable reachable states being states in which a verification tool, programmed with a set of constraints, recognizes no path to a set of reset states, said set of reset states being states in the system-model state space from which the system model is designed to begin operation and return to when the system is reset; and analyzing the constraints enabled in said set of non-returnable states to determine whether said set of constraints programmed in said verification tool are inconsistent with each other. 6. The method of claim 5 wherein said step of identifying comprises the steps of: using said verification tool programmed with said constraints, performing a forward search of said system-model state space to identify a set of forward search states, said set of forward search states being the states that said verification tool reaches from a designated set of reset states by inputting a complete set of inputs at each state of said system; and using said verification tool programmed with said constraints, performing a reverse search of the system-model state space from a designated set of reset states to identify a set of reverse search states, said set of reverse search states being the states from which said verification tool reaches the set of reset states by inputting said complete set of inputs at each state; said non-returnable states being the difference between said set of forward search states and said set of reverse search states. 7. 
The method of claim 5 wherein said step of analyzing comprises the step of comparing values allowed by constraints enabled in said non-returnable states to determine whether the values are consistent with each other. 8. The method of claim 7 further comprising the step of: identifying a smallest set of constraints that require inconsistent values in a non-returnable state as a set of inconsistent constraints. 9. The method of claim 8 further comprising the step of: adjusting values allowed by said set of inconsistent constraints so that said set of constraints are mutually consistent. 10. An apparatus comprising: a means for developing a set of mutually consistent constraints by analyzing assumptions defined by constraints enabled in a non-returnable state of a state space representing the behavior of the system model, said non-returnable state being a state from which there is no path through the state space to a set of reset states; and a verification tool programmed with said set of mutually consistent constraints. 11. The apparatus of claim 10 wherein said verification tool comprises a processor coupled to a search engine, a memory, a constraint-value analyzer and a value adjuster. 12. The apparatus of claim 11 further comprising a user interface coupled to said verification tool, said user interface including a computer keyboard and a display which enable a programmer to control an operation of the verification tool. 13. The apparatus of claim 12 further comprising peripheral devices including a printer and a modem, said printer and said modem being connected to said verification tool. 14. The apparatus of claim 13 wherein said verification tool is operable to perform a forward search of said system-model state space to identify a set of forward search states, said set of forward search states being the states that said verification tool reaches by inputting a complete set of inputs at each state. 15. The apparatus of claim 14 wherein said verification tool is further operable to perform a reverse search of said system-model state space to identify a set of reverse search states, said set of reverse search states being the states from which said verification tool reaches the set of reset states by inputting a complete set of inputs at each state, said non-returnable states thereby being the difference between the set of forward search states and the set of reverse search states. 16. The apparatus of claim 15 wherein said constraint-value analyzer comprises means for determining whether constraints enabled in a non-returnable state define values that are inconsistent with each other, said constraints that define inconsistent values being inconsistent constraints. 17. The apparatus of claim 16 wherein said value adjuster comprises means for adjusting values assigned by said inconsistent constraints to obtain the set of mutually consistent constraints. 18. 
A method for determining whether constraints input to a verification tool are mutually contradictory, the method comprising the step of: determining whether, during a search performed by the verification tool, a set of values assigned by a set of constraints enabled in some state of a system-model state space results in a non-null set of non-returnable states of said system-model, said non-returnable state being a state in which the verification tool recognizes no path to a set of reset states, said set of reset states being states in the system-model state space from which the system model is designed to begin operation and to which the system model is designed to return when the system is reset; and concluding that said constraints are mutually contradictory when said set of non-returnable states of said system-model is non-null. 19. The method of claim 18 wherein said step of determining comprises the steps of: using said verification tool programmed with said constraints, performing a forward search of said system-model state space to identify a set of forward search states, said set of forward search states being the states that said verification tool reaches by inputting a complete set of inputs at each state; and using said verification tool programmed with said constraints, performing a reverse search of the system-model state space to identify a set of reverse search states, said set of reverse search states being the states from which said verification tool reaches the set of reset states by inputting said complete set of inputs at each state; said non-returnable states being the difference between said set of forward search states and said set of reverse search states. 20. The method of claim 19 further comprising the step of: identifying constraints that assign inconsistent values in a non-returnable state as a set of mutually contradictory constraints. 21. The method of claim 20 further comprising the step of: adjusting said inconsistent values to convert said mutually contradictory constraints to mutually consistent constraints. 22. A method of verifying properties of a system, the method comprising the step of employing a verification tool that constrains applied signals to said system and/or internal signals of said system pursuant to specified constraints, which constraints are developed by redefining assumptions defined by test constraints enabled in a non-returnable state of said system, said non-returnable state being a state from which there is no path through to a set of reset states when a verification tool programmed with said test constraints searches said non-returnable state. 23. The method of claim 8 wherein said identifying the smallest set of constraints is performed with a binary search. Description: FIELD OF THE INVENTION The present invention relates to the testing of system designs, and more particularly to an apparatus and method for checking the validity of constraints used during formal verification. BACKGROUND OF THE INVENTION An ongoing problem in the design of large systems is verifying that the system will behave in the manner intended by its designers. One approach has been to simply try out the system, either by building and testing the system itself or by building and testing a model of the system. Since there is no guarantee that an untested system will work as expected, building the system itself can be an expensive proposition. Thus, those skilled in the art have migrated toward building and testing a model of the system, or system model, through software. 
A system model can be said to be a computer program or block of code that, when executed, simulates the intended properties, or functions, of the system. Basically, the system model is designed to accept inputs, perform functions and generate outputs in the same manner as would the actual system. To do this, the system model uses variables, called system-model variables, that are programmed within the code to take on certain values depending on the values of the inputs to the system model. That is, as different values are fed to the system-model inputs, the system-model variables are assigned different values that indicate how the system model functions or behaves in response to the inputs. Thus, by controlling the values of the inputs to the system model and monitoring the values of the system-model variables, a system designer can test or observe the behavior of the system model in response to different sets of inputs, and determine whether the system model exhibits, or performs, the intended behaviors or properties of the system. One method of testing a system model in such a manner is called formal verification. In formal verification, a verification tool is used to convert the system model into a finite state machine. A finite state machine is a set of states and state transitions which mimic the operation of the system model in response to given sets of inputs, or input vectors. In general, each state of a finite state machine represents a specific assignment of values to a set of system-model variables and/or inputs, and thus represents a specific behavior or property of the system model. Each state transition defines the values that a set of system-model variables and/or inputs must take on for the state machine to transition from one state to another state. The state machine thereby provides a roadmap of how a system model will behave (i.e., the states the system model will enter) in response to the values input to the system-model inputs. As a result, once the verification tool converts the system model into such a finite state machine, the tool can test the properties or behaviors of the system model by checking which states the system-model state machine enters in response to a given set of inputs. To illustrate, conventional verification tools, such as the verification tool described by R. P. Kurshan in Computer Aided Verification of Coordinating Processes, Princeton University Press 1994, are designed to test all the properties or behaviors of a system model by performing a so-called full search of the system-model state space (i.e., the set of states and state transitions that form the system-model state machine). Conventionally, the verification tool begins the full search by inputting a "complete set of inputs" at a set of initial states, or "set of reset states," of the system-model state machine. The term "complete set of inputs" as used herein refers to every possible set of values that the system-model inputs can possibly assume when the system model is in operation, in every possible sequence. The term "set of reset states" as used herein refers to those states from which the system model is designed to begin operation and/or return to after an operation is completed. As the complete set of inputs is fed to the state machine at the set of reset states, the verification tool identifies the values that the system-model variables and/or inputs take on, and identifies the state transitions that are enabled by those values. 
Once the enabled state transitions are identified, the verification tool identifies the states, called "next states," to which the state machine can transition as a result of the inputs. Once the verification tool identifies all the next states that are reached as a result of the inputs, it continues the search by inputting the complete set of inputs at each of the next states and by identifying the new set of next states to which the state machine transitions as a result of the inputs. This process is repeated until the set of next states identified by the verification tool are all states that have already been reached (i.e., identified as next states) during the search. The set of states that are reached during the search are hereinafter referred to as the set of reachable states. By inputting the complete set of inputs at each state of the state machine, the set of reachable states is guaranteed to include every state of the system-model state space. As a result, the verification tool performing a full search is guaranteed to check every property or behavior of the system model. In some instances, however, a system designer may wish to check only a portion of the properties or behaviors of a system model. For example, in some instances, the system designer may wish to check the system-model properties that conform with a subset of the inputs that may be possible during the operation of the system. In such instances, it is not necessary to search or reach all of the states of the system-model state machine, and thus it is not necessary to check the behavior of the system-model state machine in response to the complete set of inputs. Instead, the verification tool only needs to search or reach those states that represent the properties or behaviors the designer wishes to check, and thus only needs to check the behavior of the system-model state machine in response to a fraction of the complete set of inputs. One method for directing the verification tool to check only a portion of the properties or behaviors of the system model is to program the verification tool with so-called constraints. A constraint is a logical expression composed of an enabling condition and an assumption. The enabling condition defines the condition that must be true for the assumption to be invoked. The assumption defines the value or values that the verification tool shall assign to certain system-model variables and/or inputs when the enabling condition is true. An enabled constraint can therefore direct the verification tool to assign certain values to certain system-model variables and/or inputs, regardless of the values they are programmed to take on within the system model. When programmed with such a constraint, the verification tool will be directed to recognize only those state transitions that require the system-model variables and/or inputs to take on values that are consistent with the values defined by the constraints. As a result, the verification tool will only reach those states that represent an assignment of values (i.e., values of the system-model variables and inputs) that are consistent with the values defined by the constraint. Thus, by carefully choosing the values which the constraint assigns to certain system-model variables and inputs, the verification tool can be directed to reach or search only those states that represent the properties or behaviors which the system designer wishes to check. 
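The mechanics just described, a forward search that only follows transitions consistent with the enabled constraints, can be paired (as in the abstract and claims above) with a reverse search; the difference between the two reachable sets yields the non-returnable states. The following sketch illustrates this over an explicitly enumerated toy state machine. It is an illustration only, not the patent's implementation; the toy transition relation and all names are invented:

```python
from collections import deque

# Toy state machine: transitions[s] lists (input, next_state) pairs. A
# constraint is an (enable, assume) pair of predicates; here they range over
# state labels, standing in for predicates over system-model variables/inputs.
transitions = {
    "reset": [("a", "s1"), ("b", "s2")],
    "s1":    [("a", "reset"), ("b", "s3")],
    "s2":    [("a", "s2")],        # once in s2, the machine can never leave
    "s3":    [("a", "reset")],
}
constraints = [
    (lambda s: True, lambda s, t: True),   # placeholder: always satisfiable
]

def allowed(src, dst):
    """A transition is recognized only if every enabled constraint permits it."""
    return all(assume(src, dst) for enable, assume in constraints if enable(src))

def reachable(start, neighbors):
    """Plain breadth-first search from `start` over `neighbors(state)`."""
    seen, todo = {start}, deque([start])
    while todo:
        s = todo.popleft()
        for t in neighbors(s):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

# Forward search: every state reachable from the reset state under constraints.
forward = reachable("reset",
                    lambda s: [t for _, t in transitions.get(s, []) if allowed(s, t)])

# Reverse search: every state from which the reset state can be reached,
# i.e. the same search over the inverted (constrained) transition relation.
parents = {}
for s, outs in transitions.items():
    for _, t in outs:
        if allowed(s, t):
            parents.setdefault(t, []).append(s)
returnable = reachable("reset", lambda s: parents.get(s, []))

print(forward - returnable)   # {'s2'}: reachable, but with no path back to reset
```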
As an example of a constraint, "IF(power_on)ASSUME(x=y)" will direct a verification tool to assume that the value of system-model variable x equals the value of system-model variable y when the enabling condition, power_on, is true. Once enabled, the constraint will direct the verification tool to search or reach only those states or state transitions where the inputs cause the value of system-model variable x to be equal to the value of system-model variable y. The constraint therefore defines a specific set of reachable states for the verification tool. Thus, if the specific set of reachable states includes all the states that represent the system-model property being checked, the constraint is said to properly limit the search.

Oftentimes, the system designer is required to program the verification tool with a plurality of constraints in order to properly limit the search. When programmed with a plurality of constraints, the verification tool will only search or reach the states that are consistent with every value defined by every constraint enabled in that state. Thus, it is important that the constraints enabled in each state assign values that properly define the set of reachable states.

A problem occurs, however, when the constraints enabled in a given state assign values that are inconsistent with each other. Such a set of enabled constraints is said to be mutually contradictory at the given state. For example, the set of constraints:

Constraint1=IF(power_on)ASSUME(x=y);
Constraint2=IF(temperature>30)ASSUME(y=z); and
Constraint3=IF(time>1)ASSUME(x≠z);

are mutually contradictory at any state in which all the enabling conditions are true at the same time. To illustrate, at a state wherein power_on, temperature>30, and time>1 are all true, Constraints 1-3 will direct the verification tool to assume x=y=z≠x, or x≠x, which is inconsistent, and thus mutually contradictory.

When a set of constraints is mutually contradictory (i.e. assigns inconsistent values) at a given state of the search, the system-model variables and/or inputs cannot take on a set of values that would be consistent with the values assigned by every enabled constraint at the same time. As a result, the verification tool performing the search will not recognize any state transition from that given state to any other state, no matter what inputs are fed to the state machine. This means that the verification tool will not reach or search any states beyond the given state in which the mutually contradictory constraints are enabled. As a result, the verification tool may fail to reach all the states associated with the system-model property or behavior being checked. When this happens the verification tool may not be able to identify all the errors associated with the property being checked. Thus, in order to check whether a verification tool's report of "no error" is accurate, the system designer should always check whether the set of constraints input to the verification tool is mutually contradictory.

A related and more subtle version of this problem occurs when the constraints, although not mutually contradictory at a single state, are inconsistent with each other in a group of states. A set of constraints is inconsistent in a group of states when the values assigned by the constraints enabled in the group of states direct the verification tool to perpetually search only the states in the group.
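Returning to Constraints 1-3 above: over a small finite domain, mutual contradiction at a state can be checked by brute force, asking whether any assignment satisfies every enabled assumption at once. A sketch (the encoding is ours; practical verification tools work symbolically rather than by enumeration):

from itertools import product

def assumptions_consistent(assumptions, domain=(0, 1)):
    """Brute-force check over a small finite domain: is there some
    assignment to x, y, z satisfying every enabled assumption?"""
    for x, y, z in product(domain, repeat=3):
        if all(a(x, y, z) for a in assumptions):
            return True
    return False

# Assumptions of Constraints 1-3, in a state where power_on,
# temperature>30, and time>1 are all true:
enabled = [
    lambda x, y, z: x == y,   # Constraint1
    lambda x, y, z: y == z,   # Constraint2
    lambda x, y, z: x != z,   # Constraint3
]
print(assumptions_consistent(enabled))  # False: mutually contradictory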
When a set of constraints is inconsistent in a group of states in this way, the verification tool performing the search will not recognize any state transition from the group of states to any other state or group of states, no matter what inputs are fed to the state machine. This means that the verification tool will not reach or search any states beyond the given group of states in which the inconsistent constraints are enabled. As a result, the verification tool may fail to reach all the states associated with the system-model property or behavior being checked, and thus the verification tool may not be able to identify all the errors associated with the property being checked. When this happens, the constraints are said to "overconstrain" the search of the system-model state space. Thus, in addition to checking whether a set of constraints is mutually contradictory in a single state to determine whether a verification tool's report of "no error" is accurate, the system designer should also check whether the set of constraints assigns inconsistent values in a group of states.

The conventional method for determining whether a set of constraints input to a verification tool is mutually contradictory at a single state or inconsistent in a group of states is to analyze all the assumptions (i.e. value assignments) defined by all of the constraints. The object of such an analysis is to determine whether it is possible for the values assigned by one constraint to be inconsistent with the values assigned by another constraint at each state and each group of states. This requires that the values assigned by each constraint be analyzed for consistency with the values assigned by every other constraint at each state and each group of states of the state machine. Since some constraints may assign values as a function of other variables and/or inputs in each state, such an analysis requires a detailed understanding and/or determination of the mathematical relationship between the system-model variables and/or inputs, and an understanding or determination of the set of values that the system-model variables and/or inputs can assume in response to any given set of inputs at each state of the system-model state space. For systems having a large number of states and/or a large number of variables and inputs, such a task can be very time-consuming and can require a large amount of computational resources.

SUMMARY OF THE INVENTION

We have found an efficient method for determining whether a set of constraints is mutually contradictory. Instead of having to analyze the values assigned by every constraint input to the verification tool, we have found that only the set of constraints that are enabled in a so-called set of non-returnable states of the system-model state space need be analyzed. The term "set of non-returnable states" as used herein refers to a single state or a group of states in the system-model state space from which there is no path to the set of reset states when the system-model variables and/or inputs are assigned values in accordance with those defined by the constraints input to the verification tool.

Our finding is based on the realization that many systems are non-terminating. That is, many systems do not terminate their operation after performing a given function, but rather can return to an initial state or set of reset states from which they can then perform the same or another function. For example, after making a calculation, a calculator can return to zero (i.e.
the reset state) when the "clear" button is pressed, and thereafter perform the same calculation or any other function of the calculator. In view of this, we realized that when a set of mutually contradictory or inconsistent constraints enabled in a given state or group of states directs a verification tool to ignore all state transitions from that given state or group of states, the set of mutually contradictory or inconsistent constraints will have caused the verification tool to treat the given state or group of states as a set of non-returnable states. As a result, we realized that by identifying a state or group of states which a verification tool, programmed with a set of constraints, treats as non-returnable, one will have identified a state or group of states in which the set of constraints may have assigned inconsistent values. Thus, we have found that by analyzing the values assigned by the constraints enabled in an identified non-returnable state or non-returnable group of states, it can be determined whether the set of constraints input to the verification tool is mutually contradictory or overconstraining.

In particular embodiments, the set of non-returnable states or groups of states is identified by first performing a so-called forward search wherein the verification tool programmed with the set of constraints starts at the set of reset states of the system-model state machine and identifies all the states and groups of states that it can possibly reach by inputting the complete set of inputs at each state. Each state and each group of states identified during the forward search are referred to herein as the forward search states. Once the forward search states are identified, the verification tool then performs a so-called reverse search wherein it identifies all the states and groups of states from which it can reach the set of reset states by inputting the complete set of inputs at each state. Each state and each group of states identified during the reverse search are referred to herein as the reverse search states. Each state included in the set of forward search states that is not included in the set of reverse search states is then identified as a non-returnable state. Since the set of forward search states and the set of reverse search states include both individual states and groups of states, it can be appreciated that the term non-returnable state as used herein refers to both individual states and groups of states from which there is no return path to the set of reset states. The values assigned by the constraints enabled in the identified non-returnable states are then analyzed to determine whether they assign values that are inconsistent with each other. The constraints that assign values that are inconsistent with each other in a single state are identified as mutually contradictory constraints, and the constraints that assign values that are inconsistent with each other in a group of states are identified as a set of overconstraining constraints. Constraints that are neither mutually contradictory nor overconstraining are referred to herein as mutually consistent constraints.
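The procedure just summarized reduces to two reachability computations and a set difference. A sketch in the same representation as the earlier examples, with allowed(state, vec) standing in for the constraints (this formulation is ours, not the patent's):

def non_returnable_states(transitions, reset_states, allowed):
    """Forward search from the reset states, reverse search toward them,
    and return forward-reachable states with no constrained path back."""
    fwd, rev = {}, {}
    for (s, vec), t in transitions.items():
        if allowed(s, vec):                      # keep constraint-consistent transitions only
            fwd.setdefault(s, set()).add(t)
            rev.setdefault(t, set()).add(s)

    def reach(start, edges):
        seen, stack = set(start), list(start)
        while stack:
            s = stack.pop()
            for t in edges.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    forward_states = reach(reset_states, fwd)    # forward search states
    reverse_states = reach(reset_states, rev)    # reverse search states
    return forward_states - reverse_states       # non-returnable states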
Advantageously, once a set of mutually contradictory constraints and/or a set of overconstraining constraints is identified, a system designer can adjust and/or eliminate the inconsistent value assignments defined by the contradictory and/or overconstraining constraints, and thereby increase the probability that a verification tool, programmed with the set of adjusted or mutually consistent constraints, will accurately check the entire portion of the state space representing the properties or behaviors that the system designer wishes to check.

These and other features of the invention will become more apparent from the detailed description of illustrative embodiments of the invention when taken with the drawings. The scope of the invention, however, is limited only by the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an illustrative embodiment of a method for performing a search of a system-model state space using a verification tool programmed with a set of mutually consistent constraints developed in accordance with the principles of the present invention.

FIG. 2 is a diagram of a system-model state space for illustrating an application of the method of FIG. 1.

FIG. 3 is a diagram of the forward search states reached when a verification tool, programmed with a set of constraints, performs a forward search of the system-model state space shown in FIG. 2 in accordance with the method shown in FIG. 1.

FIG. 4 is a diagram of the reverse search states reached when a verification tool, programmed with a set of constraints, performs a reverse search of the system-model state space shown in FIG. 2 in accordance with the method shown in FIG. 1.

FIG. 5 is a block diagram of an illustrative embodiment of a verification tool for performing a search in accordance with the method shown in FIG. 1.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS OF THE INVENTION

As stated above, one method for directing a verification tool to check only a portion of the properties or behaviors of a system model is to program the verification tool with so-called constraints. A constraint is a logical expression composed of an enabling condition and an assumption. The enabling condition defines the condition that must be true for the assumption to be invoked. The assumption defines the value or values that the verification tool shall assign to certain system-model variables and/or inputs when the enabling condition is true. An enabled constraint can therefore direct the verification tool to assign certain values to certain system-model variables and/or inputs, regardless of the values they are programmed to take on within the system model.

When programmed with such a constraint, the verification tool will be directed to recognize only those state transitions that require the system-model variables and/or inputs to take on values that are consistent with the values defined by the constraints. As a result, the verification tool will only reach those states or groups of states that represent an assignment of values (i.e. values of the system-model variables and inputs) that are consistent with the values defined by the constraint. Thus, by carefully choosing the values which the constraint assigns to certain system-model variables and inputs, the verification tool can be directed to reach or search only those states that represent the properties or behaviors which the system designer wishes to check.
Oftentimes, the system designer is required to program the verification tool with a plurality of constraints in order to properly limit the search. When programmed with a plurality of constraints, the verification tool will only recognize a state transition from a given state to another state when the state transition defines a value assignment that is consistent with every value defined by every constraint enabled in that state. Thus, it is important that the constraints enabled in each state assign values that properly define the set of reachable states. A problem occurs, however, when the constraints enabled in a given state or a given group of states assign values that are inconsistent with each other. Such a set of enabled constraints is said to be mutually contradictory or overconstraining at the given state or group of states.

When a set of constraints is mutually contradictory (i.e. assigns inconsistent values) at a given state of the search, the system-model variables and/or inputs cannot take on a set of values that would be consistent with the values assigned by every enabled constraint at the same time. As a result, the verification tool performing the search will not recognize any state transition from that given state to any other state, no matter what inputs are fed to the state machine. This means that the verification tool will not reach or search any states beyond the given state in which the mutually contradictory constraints are enabled. As a result, the verification tool may fail to reach all the states associated with the system-model property or behavior being checked. When this happens the verification tool may not be able to identify all the errors associated with the property being checked. Thus, in order to check whether a verification tool's report of "no error" is accurate, the system designer should always check whether the set of constraints input to the verification tool is mutually contradictory.

Similarly, when a set of constraints is inconsistent in a given group of states, the system-model variables and/or inputs cannot take on a set of values that would enable the verification tool to transition out of the group of states to any other state, no matter what inputs are fed to the state machine during the search. As a result, a set of overconstraining constraints may prevent the verification tool from reaching all the states associated with the system-model property or behavior being checked. When this happens, the verification tool may not be able to identify all the errors associated with the property being checked. Thus, in addition to checking whether a set of constraints is mutually contradictory in order to determine whether a verification tool's report of "no error" is accurate, a system designer should also check whether the set of constraints input to the verification tool is overconstraining.

As stated above, the conventional method for determining whether a set of constraints input to a verification tool is mutually contradictory or overconstraining is to analyze all the assumptions (i.e. value assignments) defined by all of the constraints. The object of such an analysis is to determine whether it is possible for the values assigned by one constraint to be inconsistent with the values assigned by another constraint. This requires that the values assigned by each constraint be analyzed for consistency with the values assigned by every other constraint at each state and each group of states of the state machine.
Since some constraints may assign values as a function of other variables and/or inputs in each state, such an analysis requires a detailed understanding and/or determination of the mathematical relationship between the system-model variables and/or inputs, and an understanding or determination of the set of values that the system-model variables and/or inputs can assume in response to any given set of inputs at each state of the system-model state space. For systems having a large number of states and/or a large number of variables and inputs, such a task can be very time-consuming and can require a large amount of computational resources.

The present invention provides a means for reducing the time and/or computational resources needed to determine whether a set of constraints is mutually contradictory or overconstraining. Instead of analyzing the assumptions or value assignments defined by every constraint input to the verification tool, a method in accordance with the principles of the present invention requires that only the values assigned by the constraints enabled in a so-called set of non-returnable states of the system-model state space be analyzed. As stated above, the term "set of non-returnable states" as used herein refers to both individual states and groups of states in the system-model state space from which there is no path to the set of reset states when the system-model variables and/or inputs are assigned values in accordance with those defined by the constraints input to the verification tool.

As stated above, our finding is based on the realization that many systems are non-terminating. That is, many systems do not terminate their operation after performing a given function, but rather can return to an initial state or set of reset states from which they can then perform the same or another function. In view of this, we realized that when a set of mutually contradictory constraints or a set of overconstraining constraints enabled in a given state or group of states directs a verification tool to ignore all state transitions from that given state or group of states, the set of mutually contradictory or overconstraining constraints will have caused the verification tool to treat or recognize the given state or group of states as a non-returnable state. As a result, we realized that by identifying the states which a verification tool, programmed with a set of constraints, treats or recognizes as non-returnable, one will have identified the individual states and groups of states in which the set of constraints may have assigned inconsistent values. Thus, we have found that by analyzing the values assigned by the constraints enabled in a non-returnable state (i.e. individual states and groups of states from which the verification tool does not recognize a path to the set of reset states), it can be determined whether the set of constraints input to the verification tool is mutually contradictory or overconstraining.

Once identified, the mutually contradictory or overconstraining constraints can be re-defined so that their assumptions do not assign values that are inconsistent with each other at any state or group of states during a search of the system-model state space. The result is a set of mutually consistent constraints which can be used by the verification tool (i.e. to perform a search) to verify the system-model properties or behaviors which a designer wishes to check.

Referring now to FIG.
1, there is shown an illustrative embodiment of a method 10 for performing a search of a system-model state space using a verification tool programmed with a set of mutually consistent constraints developed in accordance with the principles of the present invention. As shown, method 10 begins at step 11 wherein a verification tool programmed with test constraints is used to perform a forward search of a system-model state space. The states and groups of states reached during the forward search are identified, at step 12, as forward search states. The verification tool programmed with the set of test constraints is then used, at step 13, to perform a reverse search of the system-model state space. The states and groups of states reached during the reverse search are identified, at step 14, as reverse search states. At step 15 the states included in the set of forward search states that are not included in the set of reverse search states are identified as non-returnable states. The constraints enabled in a given non-returnable state (i.e. either an individual state or a group of states) are analyzed, at step 16, to determine which of the enabled constraints define assumptions or value assignments that are inconsistent with each other. The enabled constraints that define inconsistent value assignments are then re-defined, at step 17, so that they no longer define inconsistent value assignments and such that the set of test constraints is no longer mutually contradictory or overconstraining, i.e. the constraints are mutually consistent. A verification tool programmed with the set of mutually consistent constraints is then used, at step 18, to search the system-model state space in a conventional manner.

Advantageously, method 10 enables a system designer to make sure that the constraints used to test given properties and/or behaviors of a system model are mutually consistent, and thus increases the probability that a verification tool programmed with the set of mutually consistent constraints will check the entire portion of the state space representing the properties or behaviors that the system designer wishes to check. As a result, method 10 provides a means for testing properties of a system model with less probability that the constraints programmed into the verification tool will cause the verification tool to miss an error in a portion of the system-model state space representing the behavior or property the designer wishes to check.

Referring now to FIG. 2 there is shown a diagram of a system-model state space or state machine 20 for illustrating how a conventional verification tool, programmed with a set of constraints, will perform steps 11-18 of method 10. As shown, state machine 20 has a set of reset states 1, states 2-5, state transitions 21-32, and system-model inputs A and B. It should be pointed out that even though states 2-5 are shown as individual states in FIG. 2, each state is intended to represent either an individual state or a group of states. Thus, the following discussion of how a verification tool programmed with a set of constraints will perform steps 11-18 of method 10 is intended to illustrate, for example, how both individual states and groups of states may be identified as non-returnable states in accordance with the principles of the present invention.

In operation, starting in set of reset states 1, state machine 20 will transition to state 2 when system-model inputs A and B each take on a value of 1.
When in state 2, state machine 20 will transition to state 3 when system-model inputs A and B each take on a value of 1, and transition to set of reset states 1 either when system-model input A takes on a value of 0, or when system-model input A takes on a value of 1 and system-model input B takes on a value of 0. When in state 3, state machine 20 will transition to state 4 when system-model inputs A and B each take on a value of 1, transition to state 2 when system-model inputs A and B each take on a value of 0, and transition to set of reset states 1 either when system-model input A takes on a value of 1 and system-model input B takes on a value of 0 or when system-model input A takes on a value of 0 and system-model input B takes on a value of 1. When in state 4, state machine 20 will transition to state 5 when system-model inputs A and B each take on a value of 1, transition to state 3 when system-model inputs A and B each take on a value of 0, and transition to set of reset states 1 either when system-model input A takes on a value of 0 and system-model input B takes on a value of 1 or when system-model input A takes on a value of 1 and system-model input B takes on a value of 0. When in state 5, state machine 20 transitions to state 4 when system-model inputs A and B each take on a value of 0, and transitions to set of reset states 1 either when system-model input A takes on a value of 1 or when system-model input A takes on a value of 0 and system-model input B takes on a value of 1.

To illustrate how a search of state machine 20 is performed in accordance with the steps of method 10, it is assumed that the verification tool is programmed with the following constraints: Constraint1=IF(state=3)ASSUME(A=B); Constraint2=IF(state=4)ASSUME(A=0); Constraint3=IF(state=4)ASSUME(B=0); and Constraint4=IF(state=4)ASSUME(A≠B).

As described above, a forward search is a search wherein the verification tool, programmed with the set of constraints, starts at a set of reset states of the system-model state machine and identifies all the states that it can possibly reach by inputting the complete set of inputs at each state. Referring now to FIG. 3 there is shown a state space 36 illustrating the states reached when a conventional verification tool programmed with the above-listed constraints performs a forward search of state machine 20 in accordance with step 11 of method 10. To illustrate, in performing the forward search of state machine 20, the verification tool will start at set of reset states 1, input a complete set of inputs to system-model inputs A and B, and identify the set of next states to which state machine 20 can transition given the set of constraints. Since none of the above-listed constraints is enabled in set of reset states 1, the verification tool will identify state 2 as the set of states to which state machine 20 can transition (i.e. when system-model inputs A and B each take on a value of 1). As a result, the verification tool will recognize or allow state transition 22 so that machine 20 transitions to next state 2.

When in state 2, the verification tool will again input the complete set of inputs to state machine 20 and identify set of reset states 1 (i.e. when system-model inputs A and B each take on a value of 0) and state 3 (i.e. when system-model inputs A and B each take on a value of 1) as the set of next states. Since the verification tool has already reached or searched set of reset states 1, the verification tool will recognize and allow state transition 24 so that state machine 20 transitions to state 3.
When in state 3, the verification tool will again input the complete set of inputs to state machine 20 and identify set of reset states 1 (i.e. when system-model input A=1 and B=0, or when A=0 and B=1), state 2 (i.e. when system-model inputs A and B each take on a value of 0) and state 4 (i.e. when system-model inputs A and B each take on a value of 1) as the set of next states. Since Constraint1 directs the verification tool to assume that system-model input A equals system-model input B when in state 3, the verification tool will be directed to ignore state transition 25. And, since state 2 has already been reached or searched, the verification tool will only recognize and allow state transition 27 so that state machine 20 transitions to state 4.

When in state 4, the verification tool will again input the complete set of inputs to state machine 20 and identify set of reset states 1 (i.e. when system-model input A=1 and B=0, or when A=0 and B=1), state 3 (i.e. when system-model inputs A and B each take on a value of 0) and state 5 (i.e. when system-model inputs A and B each take on a value of 1) as the set of next states. Since, however, Constraint2 and Constraint3 direct the verification tool to assume that system-model input A and system-model input B each be assigned a value of 0 (i.e. they are equal in value) and Constraint4 directs the verification tool to assume that system-model inputs A and B are not equal in value, the verification tool will not recognize any state transition in which all the constraints are satisfied. That is, the constraints will direct the verification tool to assume values for system-model inputs A and B that are inconsistent with each other. As a result, the verification tool will not recognize any state transition from state 4. Thus, the forward search performed in accordance with step 11 of method 10 would not reach state 5 of state machine 20. Moreover, as shown in FIG. 3, the set of forward search states that would be identified in step 12 of method 10 includes only set of reset states 1 and states 2-4.

The same verification tool programmed with the above-listed constraints can perform a reverse search of state machine 20 by identifying the states from which the verification tool programmed with the above-listed constraints can reach or recognize a path through state machine 20 to set of reset states 1. Referring now to FIG. 4 there is shown a state space 40 illustrating the states reached when a conventional verification tool programmed with the above-listed constraints performs a reverse search of state machine 20 in accordance with step 13 of method 10. To illustrate, in performing the reverse search of state machine 20, the verification tool will first identify each state having a direct state transition to set of reset states 1 (i.e. direct-transition states 2-5), input a complete set of inputs to system-model inputs A and B at each direct-transition state, and determine whether state machine 20 has a path from the direct-transition states to set of reset states 1, given the set of constraints. Since Constraint1 directs the verification tool to assume system-model input A equals system-model input B in state 3, the verification tool will not recognize state transition 25 of state machine 20. In addition, since Constraints 2-4 direct the verification tool to assign inconsistent values to system-model inputs A and B (i.e. A=0=B≠A, or A≠A), the verification tool will not recognize state transition 29 of state machine 20.
As a result, the verification tool will only recognize state transitions 22 and 32, and thus will only identify states 2 and 5 as having a direct path to set of reset states 1.

To continue the reverse search, the verification tool will then identify those states having a direct state transition to states 2 and 5 (i.e. indirect-transition states 3 and 4), input a complete set of inputs to system-model inputs A and B at the identified indirect-transition states, and determine whether state machine 20 recognizes a path from the identified indirect-transition states to states 2 and/or 5. To illustrate, the verification tool will identify states 3 and 4 as having a direct transition to states 2 and 5, respectively. When in state 3, Constraint1 will direct the verification tool to assume system-model input A equals system-model input B. Since state transition 26 only requires that system-model inputs A and B each take on a value of 0 (i.e. they are equal), the verification tool will identify state 3 as having a direct path to state 2, and thus a path in state machine 20 to set of reset states 1. When in state 4, however, Constraints 2-4 will direct the verification tool to assign inconsistent values to system-model inputs A and B (i.e. A=0=B≠A, or A≠A). As a result, the verification tool will not recognize state transition 30 which connects state 4 to state 5 in state machine 20. Thus, the verification tool will not recognize any path from state 4 to set of reset states 1. Consequently, as shown in FIG. 4, the set of reverse search states that would be identified in step 14 of method 10 would include only set of reset states 1 and states 2, 3 and 5.

In accordance with step 15 of method 10, the set of non-returnable states of state machine 20 is determined by identifying the states or groups of states in the set of forward search states that are not included in the set of reverse search states. Since the set of forward search states includes set of reset states 1 and states 2-4, and the set of reverse search states includes set of reset states 1 and states 2, 3 and 5, the set of non-returnable states in state machine 20 includes only state 4.

Once state 4 is identified as a non-returnable state, the constraints enabled in state 4 (i.e. Constraints 2-4), or the constraints enabled in the group of states represented by state 4, are analyzed in accordance with step 16 of method 10 to determine whether they define value assignments that are inconsistent with each other. Since Constraint2 and Constraint3 direct the verification tool to assume that both system-model input A and system-model input B have a value of 0 (i.e. they are equal), and Constraint4 directs the verification tool to assume that system-model input A does not equal system-model input B, Constraints 2-4 will be identified in step 16 of method 10 as defining assumptions that are mutually contradictory, or overconstraining in the case of state 4 representing a group of states.

Depending on which properties or states the system designer wishes to check, the assumptions defined by any one or combination of the mutually contradictory or overconstraining Constraints 2-4 are changed in accordance with step 17 of method 10, so that the constraints are mutually consistent. For example, if in a particular embodiment the system designer wishes to search only states 1-4 of state machine 20, Constraint4 can be changed to direct the verification tool to assume that system-model input A equals system-model input B.
To illustrate, Constraint4 can be changed to the expression "IF(state=4)ASSUME(A=B)." Such a change to Constraint4 would transform Constraints 1-4 into a set of mutually consistent constraints.

If a verification tool, programmed with the set of mutually consistent constraints, performs a search of state machine 20 in accordance with step 18 of method 10, the verification tool will recognize every state transition except state transition 30, which defines the conditions for state machine 20 to transition from state 4 to state 5. Thus, the set of mutually consistent constraints will direct the verification tool to only check or search the behaviors represented by states 1-4, as desired by the system designer.

Referring now to FIG. 5, there is shown an illustrative embodiment of an apparatus 50 for performing a search in accordance with method 10 of FIG. 1. As shown, apparatus 50 has a verification tool 51 including a processor 52, a search engine 58, a memory 55, a constraint-value analyzer 53 and a value adjuster 54. Verification tool 51 is connected to a user interface 57 and other peripherals 56.

Verification tool 51 is operable to use search engine 58, processor 52 and memory 55 to perform a search of a system-model state machine in a conventional manner. That is, search engine 58 is operable to feed inputs to the system-model inputs and monitor the behavior of the state machine in response to those inputs in a conventional manner. Processor 52 is operable to perform calculations and other functions required by search engine 58 to perform the search. Memory 55 is a conventional random access memory (RAM) operable to store the values input by the search engine to the system-model inputs and to store the corresponding values assumed by system-model variables and/or inputs as a result of the inputs.

Verification tool 51 is also operable to use constraint-value analyzer 53 to compare the values assigned by each constraint enabled in a given non-returnable state and identify the constraints that define value assignments that are inconsistent with each other (i.e. mutually contradictory or overconstraining). In particular embodiments constraint-value analyzer 53 is a computer program accessible to processor 52.

Verification tool 51 is further operable to use value adjuster 54 to change the values assigned by mutually contradictory or overconstraining constraints so that their value assignments are mutually consistent. In particular embodiments, value adjuster 54 is a computer program accessible to processor 52.

Other peripherals 56 include, for example, a printer for printing out the values stored in memory 55, and a modem for connecting verification tool 51 to, for example, a database in which verification tool 51 can store the results of a search by search engine 58. User interface 57 includes, for example, a conventional keyboard and a display to enable a programmer to, for example, control the operation of verification tool 51 and input constraints to memory 55.

When performing the steps of method 10 shown in FIG. 1, verification tool 51 uses search engine 58 to perform both a forward search and a reverse search of a system-model state space to identify a set of forward search states and a set of reverse search states, as described above. Processor 52 is then used to identify a set of non-returnable states as the states or groups of states included in the set of forward search states that are not included in the set of reverse search states.
Once the set of non-returnable states is identified, processor 52 identifies the constraints enabled in a given non-returnable state (i.e. an individual state or a group of states that is identified as non-returnable) and feeds the enabled constraints to constraint-value analyzer 53, which identifies the constraints that define inconsistent value assignments as either mutually contradictory constraints, in the case of the given non-returnable state being an individual state, or as overconstraining constraints, in the case of the given non-returnable state being a group of states. The mutually contradictory constraints or overconstraining constraints are fed to value adjuster 54, which adjusts at least one of the constraints so that they assign values that are consistent with each other, or mutually consistent. The mutually consistent constraints are then fed back to search engine 58, which performs a search of the system-model state space. The results of the search can then be printed out through other peripherals 56, stored in memory 55 and/or displayed through user interface 57.

For clarity of explanation, the embodiments of the present invention shown in FIGS. 1 and 5 and described above are merely illustrative. Thus, for example, it will be appreciated by those skilled in the art that the block diagram shown and described in FIG. 5 herein represents a conceptual view of illustrative circuitry embodying an apparatus for performing a search in accordance with the principles of the invention. Similarly, it will be appreciated by those skilled in the art that the block diagram shown and described in FIG. 1 herein represents a conceptual view of illustrative steps embodying a method for performing a search of a system-model state space with a set of mutually consistent constraints developed in accordance with the principles of the invention.

In the claims hereof any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements which performs that function or b) software in any form (including, therefore, firmware, microcode or the like) combined with appropriate circuitry for executing the software to perform the function. The invention defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner in which the claims call for. Applicant thus regards any means which can provide those functionalities as equivalent to those shown herein.

It will be appreciated by those skilled in the art that they will be able to devise various arrangements which, though not explicitly shown or described herein, embody the principles of the present invention and thus are within its spirit and scope.

* * * * *
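Pulling the pieces together, the sketch below encodes state machine 20 and Constraints 1-4 as described in the illustration (transitions the text does not mention are simply omitted), reuses non_returnable_states from the earlier sketch, and reproduces the illustration's result: state 4 is the only non-returnable state.

# State machine 20 of FIG. 2: (state, (A, B)) -> next state.
T20 = {
    (1, (1, 1)): 2,
    (2, (1, 1)): 3, (2, (0, 0)): 1, (2, (0, 1)): 1, (2, (1, 0)): 1,
    (3, (1, 1)): 4, (3, (0, 0)): 2, (3, (1, 0)): 1, (3, (0, 1)): 1,
    (4, (1, 1)): 5, (4, (0, 0)): 3, (4, (0, 1)): 1, (4, (1, 0)): 1,
    (5, (0, 0)): 4, (5, (1, 0)): 1, (5, (1, 1)): 1, (5, (0, 1)): 1,
}

def allowed(state, vec):
    """Constraints 1-4: A = B in state 3; A = 0, B = 0 and A != B in state 4."""
    a, b = vec
    if state == 3:
        return a == b                                 # Constraint1
    if state == 4:
        return a == 0 and b == 0 and a != b           # Constraints 2-4: unsatisfiable
    return True

print(non_returnable_states(T20, {1}, allowed))       # -> {4}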
{"url":"http://www.patentgenius.com/patent/6099575.html","timestamp":"2014-04-21T08:14:37Z","content_type":null,"content_length":"78824","record_id":"<urn:uuid:1ff02d35-bcfa-4465-955a-a01198c95c54>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Fort Lee, NJ Algebra Tutor

Find a Fort Lee, NJ Algebra Tutor

...I also have experience tutoring elementary school students on their reading and writing, and teaching English to Spanish-speaking adults in Manizales, Colombia. I base my teaching style on making our sessions relaxed and fun. At the same time, I design each of my lessons with the goal of imparting concrete, measurable skills.
13 Subjects: including algebra 1, algebra 2, Spanish, English

...My diversity of teaching experience has reaffirmed my belief that each student possesses an individualized manner of learning best, and that teachers and tutors must work to pinpoint these pathways to lead students in the direction of personal academic success. Having worked with a number of stud...
33 Subjects: including algebra 1, English, Spanish, reading

...This way students learn to think through problems on their own. I include repetition to help ensure concepts stick. And I use technology to my advantage.
4 Subjects: including algebra 1, SAT math, elementary math, prealgebra

...Since 2004, I have been tutoring students in mathematics one-on-one. My approach to mathematics tutoring is creative and problem-oriented. I focus on proofs, derivations and puzzles, and the natural progression from one math problem to another.
9 Subjects: including algebra 2, algebra 1, calculus, geometry

...Additionally, because I have a Ph.D. in Psychology, I am able to approach tutoring with a psychological approach designed to reduce anxiety and increase confidence in students. This method has helped raise several of my students' grades from Ds and Fs to As and Bs. Throughout my academic career, I have always loved learning and teaching others what I have learned.
18 Subjects: including algebra 1, reading, French, English
{"url":"http://www.purplemath.com/Fort_Lee_NJ_Algebra_tutors.php","timestamp":"2014-04-20T23:36:58Z","content_type":null,"content_length":"23936","record_id":"<urn:uuid:5b5ee1a7-5d49-4251-a56e-f6ad07bcb47c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
prove by using definition of limit

Prove, using the definition of the limit, that lim f(x) as x -> 0 does not exist, where f(x) = 1 if x is rational and f(x) = 0 if x is not rational.

My teacher asked us this question, but in the next lesson she did not explain how to solve it; maybe she will ask it on the exam. I know why the limit does not exist, but I can't explain it. Help!
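For what it's worth, one standard way to make the explanation precise (this is our sketch of an answer, not part of the original thread): suppose $\lim_{x\to 0} f(x) = L$ exists, and take $\varepsilon = \tfrac{1}{2}$. For any $\delta > 0$, the punctured interval $0 < |x| < \delta$ contains both a rational point $p$ and an irrational point $q$, so the definition would require both $|f(p) - L| = |1 - L| < \tfrac{1}{2}$ and $|f(q) - L| = |0 - L| < \tfrac{1}{2}$. The first forces $L > \tfrac{1}{2}$ and the second forces $L < \tfrac{1}{2}$, a contradiction; hence the limit does not exist.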
{"url":"http://www.physicsforums.com/showpost.php?p=2948691&postcount=1","timestamp":"2014-04-18T15:52:07Z","content_type":null,"content_length":"8638","record_id":"<urn:uuid:2fdeb2d5-31cb-45c3-928d-810317dd4d28>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
The single ring theorem

Guionnet, Alice and Krishnapur, Manjunath and Zeitouni, Ofer (2011) The single ring theorem. In: Annals of Mathematics, 174 (2). pp. 1189-1217.

We study the empirical measure L_{A_n} of the eigenvalues of nonnormal square matrices of the form A_n = U_n T_n V_n, with U_n, V_n independent Haar distributed on the unitary group and T_n diagonal. We show that when the empirical measure of the eigenvalues of T_n converges, and T_n satisfies some technical conditions, L_{A_n} converges towards a rotationally invariant measure μ on the complex plane whose support is a single ring. In particular, we provide a complete proof of the Feinberg-Zee single ring theorem [6]. We also consider the case where U_n, V_n are independently Haar distributed on the orthogonal group.
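For readers who want the notation pinned down, the empirical eigenvalue measure referred to above is, in the standard definition (our addition for clarity), $L_{A_n} = \frac{1}{n}\sum_{i=1}^{n} \delta_{\lambda_i(A_n)}$, where $\lambda_1(A_n), \dots, \lambda_n(A_n)$ are the eigenvalues of the $n \times n$ matrix $A_n$ and $\delta_z$ denotes the unit point mass at $z$.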
{"url":"http://eprints.iisc.ernet.in/41835/","timestamp":"2014-04-20T00:52:08Z","content_type":null,"content_length":"23994","record_id":"<urn:uuid:9e4dfaa8-07c9-4d10-a0ab-4909a2efd208>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Which formula do you reckon?!

Last edited by User Name; May 24th 2008 at 10:49 PM.

Hello, User Name!

I'll number the formulas like this: . $\begin{array}{cc} (1) & (2) \\ (3) & (4) \\ (5) \end{array}$

Formula (1): . $V \;=\;\pi\left(\frac{a+b}{2}\right)^2$
. . $\pi r^2$ is the area of the circle.
. . $\frac{a+b}{2}$ is the average of the two radii.
Therefore, (1) gives the area of the "average circle".
. . I seriously doubt that this happens to be the volume of the doughnut.

Formula (3): . $V \:=\:\frac{\pi}{3}(a^3 + b^3)$
. . $\frac{4}{3}\pi r^3$ is the volume of a sphere.
We have: . $\frac{1}{4}\left[\frac{4}{3}\pi a^3 + \frac{4}{3}\pi b^3\right]$
This is the volume of a sphere of radius $a$
. . plus the volume of a sphere of radius $b$
. . divided by 4.
This is too large to be the volume of the doughnut.

Formula (4): . $V \;=\;\pi^3(b^2-a^2)$
We have: . $\pi^2\left(\pi b^2 - \pi a^2\right)$
. . $\pi b^2$ is the area of the outer circle.
. . $\pi a^2$ is the area of the inner circle.
. . $\pi b^2 - \pi a^2$ is the area of the "ring".
I don't think multiplying by $\pi^2$ will give us the volume of the doughnut.

Formula (5): . $V \;=\;\frac{\pi^2}{3}(a + b)^3$
We have: . $\frac{\pi}{4}\left[\frac{4}{3}\pi (a+b)^3\right]$
This is the volume of a sphere of radius $a+b$ ... (quite large!)
. . multiplied by $\frac{\pi}{4}$
This is far too large to be the volume of the doughnut.

By elimination, . $(2)\;\;V \:=\:\frac{\pi^2}{4}(a+b)(b-a)^2$ . is the volume of the doughnut.

Thanks, Soroban! Once again, I think you're right, as usual.
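For what it's worth, Pappus's centroid theorem confirms (2) directly, assuming $a$ and $b$ are the inner and outer radii of the doughnut: the tube cross-section is a disc of radius $r = \frac{b-a}{2}$ whose centroid travels a circle of radius $R = \frac{a+b}{2}$, so

$V \;=\; 2\pi R \cdot \pi r^2 \;=\; 2\pi\left(\frac{a+b}{2}\right)\pi\left(\frac{b-a}{2}\right)^2 \;=\; \frac{\pi^2}{4}(a+b)(b-a)^2.$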
{"url":"http://mathhelpforum.com/math-topics/39528-formula-you-reckon.html","timestamp":"2014-04-19T23:39:47Z","content_type":null,"content_length":"41446","record_id":"<urn:uuid:091573d9-ebdc-432f-a1d5-a89fa594bc17>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
A 5.0 kg circular plate of radius 10.0 cm is rotating about the axis of the plate at 4.0 rad/s. (a) What is the moment of inertia of the plate about its axis? (b) What is the angular momentum of the plate? (c) What is the rotational kinetic energy of the plate?
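A worked sketch, assuming the plate is a uniform solid disk rotating about its symmetry axis (the standard reading of "circular plate"):

(a) $I = \tfrac{1}{2}MR^2 = \tfrac{1}{2}(5.0\ \mathrm{kg})(0.100\ \mathrm{m})^2 = 0.025\ \mathrm{kg\,m^2}$

(b) $L = I\omega = (0.025\ \mathrm{kg\,m^2})(4.0\ \mathrm{rad/s}) = 0.10\ \mathrm{kg\,m^2/s}$

(c) $K = \tfrac{1}{2}I\omega^2 = \tfrac{1}{2}(0.025\ \mathrm{kg\,m^2})(4.0\ \mathrm{rad/s})^2 = 0.20\ \mathrm{J}$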
{"url":"http://www.chegg.com/homework-help/questions-and-answers/50kg-circular-plate-radius-100cm-rotating-axis-plate-40rad-s-moment-inertia-plate-s-axis-b-q1064177","timestamp":"2014-04-21T06:33:55Z","content_type":null,"content_length":"20464","record_id":"<urn:uuid:fef1e21e-ac3b-4389-9e71-c966541966b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Documentation of the PMIP models (Bonfils et al. 1998)

PMIP Documentation for CCSR1

Center for Climate System Research: Model CCSR/NIES 5.4 02 T21/L20 1995

PMIP Representative(s)

Dr. Ayako Abe-Ouchi, Center for Climate System Research (CCSR), University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, 153 Japan; Phone: +81-3-5453-3955; Fax: +81-3-5453-3964; e-mail:
WWW URL: http://climate3.ccsr.u-tokyo.ac.jp/home.html (in Japanese)

Model Designation

CCSR/NIES AGCM 5.4 02 (T21 L20) 1995

Model Identification for PMIP

PMIP run(s): 0fix, 6fix, 21fix

Number of days in each month: 30 30 30 30 30 30 30 30 30 30 30 30

Model Lineage

Model CCSR/NIES AGCM (T21 L20) 1995 is based on a simple global atmospheric model first developed at the University of Tokyo (cf. Numaguti 1993), and further refined as a collaboration between CCSR and the National Institute for Environmental Studies (NIES). It is intended for use as a community climate model. The model is identical to the latest AMIP model except for different initial conditions and the Earth's orbital parameters.

Model Documentation

Numaguti and others (1996) Development of an Atmospheric General Circulation Model (prepared for submission; available upon request). A summary of model features including fundamental equations is provided by Numaguti et al. (1995). The spectral formulation of atmospheric dynamics follows closely Bourke (1988). The radiation scheme is described by Nakajima and Tanaka (1986) and Nakajima et al. (1996). The convective parameterization is based on the work of Arakawa and Schubert (1974) and Moorthi and Suarez (1992). Cloud formation is treated prognostically after the method of Le Treut and Li (1991). Gravity-wave drag is parameterized as in McFarlane (1987). The planetary boundary layer (PBL) is simulated by the turbulence closure scheme of Mellor and Yamada (1974, 1982) [10,11]. The representation of surface fluxes follows the approach of Louis (1979), with inclusion of adjustments recommended by Miller et al. (1992) for low winds over the oceans.

Numerical/Computational Properties

Horizontal Representation

Spectral (spherical harmonic basis functions) with transformation to a Gaussian grid for calculation of nonlinear quantities and some physics.

Horizontal Resolution

Spectral triangular 21 (T21), roughly equivalent to a 5.6 x 5.6 degree latitude/longitude grid. dim_longitude*dim_latitude: 64*32

Vertical Domain

Surface to 8 hPa. For a surface pressure of 1000 hPa, the lowest atmospheric level is at a pressure of about 995 hPa.

Vertical Representation

Sigma coordinates with discretization following the vertical differencing scheme of Arakawa and Suarez (1983) that conserves global mass integrals of potential temperature and total energy for frictionless adiabatic flow.

Vertical Resolution

There are 20 unevenly spaced sigma levels. For a surface pressure of 1000 hPa, 5 levels are below 800 hPa and 8 levels are above 200 hPa. The model uses the following sigma levels:

1: 0.99500   2: 0.97999   3: 0.94995   4: 0.89988   5: 0.82977
6: 0.74468   7: 0.64954   8: 0.54946   9: 0.45447  10: 0.36948
11: 0.29450  12: 0.22953  13: 0.17457  14: 0.12440  15: 0.08468
16: 0.05980  17: 0.04493  18: 0.03491  19: 0.02488  20: 0.00830

Computer/Operating System

The PMIP simulation was run on a HITAC S-3800 computer using a single processor in the VOS3 operational environment.

Computational Performance

For the PMIP experiment, about 0.3 minutes of HITAC S-3800 computation time per simulated day.
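As a quick consistency check of the vertical levels listed under Vertical Resolution: for a pure sigma coordinate, which this model uses, the pressure at a level is simply p = sigma * p_s. A sketch:

# Convert the 20 sigma levels to pressures for p_s = 1000 hPa.
SIGMA = [0.99500, 0.97999, 0.94995, 0.89988, 0.82977, 0.74468, 0.64954,
         0.54946, 0.45447, 0.36948, 0.29450, 0.22953, 0.17457, 0.12440,
         0.08468, 0.05980, 0.04493, 0.03491, 0.02488, 0.00830]
p_s = 1000.0  # surface pressure in hPa
pressures = [s * p_s for s in SIGMA]
print(pressures[0])                        # ~995 hPa: the lowest model level
print(sum(p > 800.0 for p in pressures))   # 5 levels below 800 hPa (p > 800)
print(sum(p < 200.0 for p in pressures))   # 8 levels above 200 hPa (p < 200)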
Initial Conditions

Initial conditions of the model atmosphere, atmospheric state, soil moisture, and snow cover/depth were obtained from an AMIP run.

Time Integration Scheme(s)

Semi-implicit leapfrog time integration with an Asselin (1972) time filter. The time step length is 40 minutes. Shortwave and longwave radiative fluxes are recalculated every 3 hours, with the longwave fluxes assumed constant over the 3-hour interval, while the shortwave fluxes are assumed to vary as the cosine of the solar zenith angle.

Smoothing/Filling

Orography is smoothed (see Orography). Spurious negative atmospheric moisture values are filled by borrowing from the vertical level immediately below, subject to the constraint of conservation of global moisture.

Sampling Frequency

For the PMIP simulation, the model history is written once per 24-hour period.

Dynamical/Physical Properties

Atmospheric Dynamics

Primitive equation dynamics are expressed in terms of vorticity and divergence, temperature, specific humidity, cloud liquid water, and surface pressure.

Diffusion

Eighth-order linear (del^8) horizontal diffusion is applied to vorticity, divergence, temperature, specific humidity, and cloud liquid water on constant sigma surfaces. Stability-dependent vertical diffusion of momentum, heat, and moisture in the planetary boundary layer (PBL) as well as in the free atmosphere follows the Mellor and Yamada (1974, 1982) [10,11] level-2 turbulence closure scheme. The eddy diffusion coefficient is diagnostically determined as a function of a Richardson number modified to include the effects of condensation. The diffusion coefficient also depends on the vertical wind shear and on the square of an eddy mixing length with an asymptotic value of 300 m. Cf. Numaguti et al. (1995) for further details. See also Planetary Boundary Layer and Surface Fluxes.

Gravity-wave Drag

Orographic gravity-wave drag is parameterized after McFarlane (1987). Deceleration of the resolved flow by dissipation of orographically excited gravity waves is a function of the rate at which the parameterized vertical component of the gravity-wave momentum flux decreases in magnitude with height. This momentum-flux term is the product of local air density, the component of the local wind in the direction of that at the near-surface reference level, and a displacement amplitude. At the surface, this amplitude is specified in terms of the mesoscale orographic variance, and in the free atmosphere by linear theory, but it is bounded everywhere by wave saturation values. See also Orography.

Solar Constant/Cycles

The solar constant is the AMIP-prescribed value of 1365 W/(m^2). The orbital parameters and seasonal insolation distribution are calculated after PMIP recommendations. Both seasonal and diurnal cycles in solar forcing are simulated.

Chemistry

The carbon dioxide concentration is 345, 280 and 200 ppm for the 0fix, 6fix and 21fix runs, respectively (the first being the AMIP-prescribed value). Radiative effects of water vapor, oxygen, ozone, nitrous oxide (0.3 ppm, globally uniform), and methane (1.7 ppm, globally uniform) are included. Monthly zonal ozone profiles are specified from data of Keating and Young (1985) and Dütsch (1978), and they are linearly interpolated for intermediate time points. Although the model is able to treat radiative effects of aerosols, they are not included for the PMIP simulation. See also Radiation.

Radiation

The radiative transfer scheme is based on the two-stream discrete ordinate method (DOM) and the k-distribution method described in detail by Nakajima et al.
(1996). The radiative fluxes at the interfaces of each vertical layer are calculated considering solar incidence, absorption, emission, and scattering by gases and clouds, with the flux calculations being done in 18 wavelength regions. Band absorption by water vapor, carbon dioxide, ozone, nitrous oxide, and methane is considered in from 1 to 6 subchannels for each wavelength region. Continuum absorption by water vapor, oxygen, and ozone also is included. Rayleigh scattering by gases and absorption by clouds are considered as well. See also Chemistry.

The radiative flux in each wavelength region is calculated as a sum of the products of the fluxes over the subchannels and their respective k-distribution weights, where each subchannel's fluxes are calculated by the two-stream DOM. The optical depth of each subchannel is estimated as the sum of the optical thicknesses of band absorption and of continuum absorption by gases. The transmissivity, reflectivity, and source function in each layer then are calculated as functions of optical depth, single-scattering albedo, asymmetry factor, cutoff factor, Planck function, solar incidence, and solar zenith angle. At each layer interface, fluxes are computed by the adding technique.

In the presence of clouds, radiative fluxes are weighted according to the convective and large-scale cloud fractions of each grid box. The fluxes are computed by treating clouds as a mixture of scattering and absorbing water and ice particles; in the shortwave the cloud optical properties, and in the longwave the cloud emissivity, are functions of optical depth. Fluxes therefore depend on the prognostic liquid water content (LWC) as well as on the fraction of ice cloud (see Cloud Formation). Radiative transfer in large-scale and convective clouds is treated separately, assuming random and full overlap, respectively, in the vertical. Cf. Numaguti et al. (1995) for further details.

Convection

Penetrative and shallow cumulus convection are simulated by the Relaxed Arakawa-Schubert (RAS) scheme of Moorthi and Suarez (1992), a modification of the Arakawa and Schubert (1974) parameterization. The RAS scheme predicts mass fluxes from a spectrum of clouds that have different entrainment/detrainment rates and levels of neutral buoyancy (i.e., different cloud-top heights). The thermodynamic properties of the convective clouds are determined from an entraining plume model, and the vertical profile of cloud liquid water (see Cloud Formation) is calculated from the difference between the adiabatic total water mixing ratio (a function of the grid-scale specific humidity) and the saturated specific humidity at the same level, given a prescribed vertical profile of precipitation. The predicted convective mass fluxes are used to solve budget equations that determine the impact of convection on the grid-scale fields of temperature (through latent heating and compensating subsidence) and moisture (through precipitation and detrainment).

The vertical mass flux at the base of each cloud type is predicted from the cloud work function A, defined as the integral over the cloud depth of the product of the mass flux (with a linear vertical profile assumed) and the buoyancy (proportional to the difference between the cloud virtual temperature and that of the grid-scale environment at the same height).
Cloud Formation
The convective cloud fraction in a grid box is estimated as proportional to the grid-scale convective mass flux. The grid-scale liquid water content (LWC) at a given height due to convective cloud is determined by a sum over the cloud-type spectrum of the products of LWC and mass flux for each cloud type (see Convection). Large-scale (stratiform) cloud formation is determined from prognostic cloud liquid water content (LWC) following Le Treut and Li (1991). The stratiform LWC follows a conservation equation involving rates of large-scale water vapor condensation, evaporation of cloud droplets, and the transformation of small droplets to large precipitating drops (see Precipitation). The stratiform LWC (including ice content) also determines the large-scale cloud fraction (see below) and cloud optical properties (see Radiation). The fraction of stratiform cloud C in any layer is determined from the probability that the total cloud water (liquid plus vapor) is above the saturated value, where a uniform probability distribution with prescribed standard deviation is assumed. (For purposes of the radiation calculations, the square root of C is taken as the cloud fraction.) At each time step, new values of LWC and vapor are determined by iteration, subject to conservation of moist internal energy. The portion of C that is ice cloud is assumed to vary as a linear function of the temperature departure below the freezing point 273.15 K, with all of C being ice cloud if the temperature is < 258.18 K. Cf. Numaguti et al. (1995) and Le Treut and Li (1991) for further details.

Precipitation
The autoconversion of cloud liquid water into precipitation is estimated from the prognostic liquid water content (LWC) divided by a characteristic precipitation time scale which is an exponential function of temperature (see Cloud Formation). Precipitation conversion is distinguished for liquid vs ice particles. Snow is assumed to fall when the local wet-bulb temperature is less than the freezing point of 273.15 K, with melting of falling snow occurring if the wet-bulb temperature exceeds this value. See also Snow Cover. Falling liquid precipitation evaporates proportional to the difference between the saturated and ambient specific humidities and inversely proportional to the terminal fall velocity (cf. Kessler (1969)). Falling ice and snow melt if the ambient wet-bulb temperature exceeds the freezing point (273.15 K); evaporation may follow, as for falling liquid precipitation.
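The autoconversion and phase rules above translate directly into a toy per-step update. The time constant tau0 and its temperature sensitivity alpha below are placeholders for the model's (unstated) exponential fit, and we use the grid temperature where the model uses the wet-bulb temperature, so only the structure, not the numbers, should be read from this sketch.

```python
import numpy as np

def precipitation_step(lwc, T, dt, tau0=3600.0, alpha=0.05):
    # Autoconversion: cloud water converts to precipitation at rate LWC/tau(T),
    # with tau an exponential function of temperature (coefficients assumed).
    tau = tau0 * np.exp(-alpha * (T - 273.15))
    converted = lwc * (1.0 - np.exp(-dt / tau))
    phase = "snow" if T < 273.15 else "rain"   # the model tests wet-bulb T here
    return lwc - converted, converted, phase
```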
Planetary Boundary Layer
The Mellor and Yamada (1974, 1982) [10,11] level-2 turbulence closure scheme represents the effects of the PBL. The scheme is used to determine vertical diffusion coefficients for momentum, heat, and moisture from the product of the squared mixing length (whose asymptotic value is 300 m), the vertical wind shear, and a Richardson number that is modified to include the effect of condensation on turbulent fluxes. (A diffusion coefficient is never allowed to fall below 0.15 m^2/s.) Cf. Numaguti et al. (1995) for further details. See also Surface Fluxes.

Orography
Raw orography is obtained from the ETOPO5 dataset (cf. NOAA/NGDC, 1989) at a resolution of 5 x 5 minutes for the 0fix and 6fix runs. For the 21fix run it is modified, following the PMIP recommendation, by applying the orographic difference between present and 21 ka. Orographic variances required for the gravity-wave drag scheme are obtained from the same dataset. Orography is smoothed by first expanding the grid point data in spectral space, then filtering according to the formula [1-(n/N)^4], where n is the spectral wavenumber and N = 21 corresponds to the horizontal resolution of the model. Finally, the smoothed spectral data is returned to the T21 Gaussian grid. See also Gravity-wave Drag.

Sea Surface Temperature
The prescribed monthly climatological SSTs for the 0fix and 6fix integrations were made by averaging AMIP monthly sea surface temperature fields, with daily values determined by linear interpolation. For 21fix ? ? ? ?

Sea Ice
For the 0fix and 6fix runs, monthly AMIP sea ice extents are prescribed. The thickness of the ice can vary: the local thickness is determined from the observed fractional coverage multiplied by a constant 1 m. The surface temperature of the ice is predicted from a surface energy balance that takes account of conduction heating from the ocean below. The temperature of the underlying ocean is assumed to be 273.15 K, the freezing point of the sea ice. Snow may accumulate on sea ice, but modifies only the thermal conductivity of the ice. See also Surface Fluxes and Snow Cover. For the 21fix run, the sea ice edge from CLIMAP data is used for February and August; the shape of the sea ice edge for the other months is determined by considering summer SST over both hemispheres: grid cells with lower summer SST freeze earlier and melt later.

Snow Cover
Precipitation falling on a surface with skin temperature < 273.15 K accumulates as snow, and a snowpack melts if the skin temperature exceeds this value. Fractional coverage of a grid box is determined by the ratio of the local snow mass to a critical threshold of 200 kg/(m^2). Sublimation of snow contributes to the surface evaporative flux (see Surface Fluxes), and snowmelt augments soil moisture and runoff (see Land Surface Processes). Snow cover alters the evaporation efficiency and permeability of moisture, as well as the albedo, roughness, and thermal properties of the surface (see Surface Characteristics).
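The fractional snow cover just defined, and its square-root influence on surface albedo (and evaporation efficiency, see Surface Fluxes), amount to the following small helper. The critical snow mass of 200 kg/m^2 is the documented value; the two albedo arguments are illustrative.

```python
def snow_fraction(snow_mass, critical=200.0):
    # Ratio of local snow mass to the critical threshold, capped at full cover.
    return min(snow_mass / critical, 1.0)

def surface_albedo(albedo_bg, albedo_snow, snow_mass):
    f = snow_fraction(snow_mass)
    # Albedo increases over the background with the sqrt of the snow fraction.
    return albedo_bg + (albedo_snow - albedo_bg) * f ** 0.5
```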
Surface Characteristics
The surface is classified according to the 32 vegetation types of Matthews (1983), but with only the locally dominant type specified for each grid box. The stomatal resistance of the vegetation is a prescribed spatially uniform value, but is set to zero in desert areas. Over ice surfaces, the roughness length is a constant 1 x 10^-3 m. Over ocean, the roughness length is a function of the surface momentum flux, following the formulation of Miller et al. (1992). Over land, roughness lengths are assigned according to vegetation type following Takeuchi and Kondo (1981). In areas with snow cover, the roughness length is decreased proportional to the square root of the fractional snow cover. The roughness length for calculation of surface momentum fluxes is 10 times the corresponding value for heat and moisture fluxes. Over ice surfaces, the albedo is a constant 0.7 (unaffected by snow accumulation). Over ocean, the albedo depends on sun angle and the optical thickness of the atmosphere. The albedos of the land surface are specified according to vegetation type from the data of Matthews (1983). For snow-covered land, the albedo increases over that of the background proportional to the square root of the fractional snow cover. Longwave emissivity is everywhere specified to be 1.0 (i.e., blackbody emission). See also Surface Fluxes and Land Surface Processes.

Surface Fluxes
Solar absorption at the surface is determined from the albedo, and longwave emission from the Planck equation with prescribed emissivities (see Surface Characteristics). The representation of turbulent surface fluxes of momentum, heat, and moisture follows Monin-Obukhov similarity theory as expressed by the bulk formulae of Louis (1979). The requisite wind, temperature, and humidity values are taken to be those at the lowest atmospheric level (see Vertical Domain). The associated drag/transfer coefficients are functions of the surface roughness (see Surface Characteristics) and vertical stability expressed as a function of a modified Richardson number (see Planetary Boundary Layer). The effect of free convective motion is incorporated into the surface wind speed following Miller et al. (1992), and the surface wind speed is also not allowed to fall below 4 m/s. For calculation of the moisture flux over ocean, ice, and snow-covered surfaces, the evaporation efficiency beta is unity. Over partially snow-covered grid boxes, beta increases as the square root of the snow fraction (see Snow Cover). The evaporation efficiency over vegetation is limited by the specified stomatal resistance. Cf. Numaguti et al. (1995) for further details. See also Land Surface Processes.

Land Surface Processes
The skin temperature of soil and land ice is predicted by a heat diffusion equation that is discretized in 3 layers with a zero-flux lower boundary condition; heat capacity and conductivity are spatially uniform values. Surface snow is treated as part of the uppermost soil layer, and thus modifies its heat content, as well as the heat conduction to lower layers. Soil liquid moisture is predicted in a single layer according to the "bucket" formulation of Manabe et al. (1965). The moisture field capacity is a spatially uniform 0.15 m, with surface runoff occurring if the predicted soil moisture exceeds this value. Snowmelt contributes to soil moisture, but if snow covers a grid box completely, the permeability of the soil to falling liquid precipitation becomes zero. For partial snow cover, the permeability decreases proportional to increasing snow fraction (see Snow Cover). Soil moisture is depleted by surface evaporation; the evaporation efficiency beta (see Surface Fluxes) is not determined solely by the ratio of soil moisture to its saturation value, but is limited by the specified stomatal resistance of the vegetation. Other effects of vegetation, such as the interception of precipitation by the canopy and its subsequent reevaporation, are not included. See also Surface Characteristics and Surface Fluxes.
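The bucket hydrology above is simple enough to state as code. The 0.15 m field capacity is the documented value; the time stepping and variable names are a sketch of ours, not the model's.

```python
def bucket_step(w, precip, snowmelt, evap, dt, capacity=0.15):
    # One step of Manabe-style bucket hydrology; all depths in metres of water.
    w = w + (precip + snowmelt - evap) * dt
    runoff = max(w - capacity, 0.0)        # surface runoff above field capacity
    return min(max(w, 0.0), capacity), runoff
```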
Last update: November 9, 1998. For further information, contact: Céline Bonfils (pmipweb@lsce.ipsl.fr).
{"url":"http://pmip.lsce.ipsl.fr/docs/ccsr1doc.html","timestamp":"2014-04-21T12:40:44Z","content_type":null,"content_length":"24325","record_id":"<urn:uuid:57a9a52c-d532-427d-b703-414220ef5790>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Sakaé Fuchino, Saharon Shelah and Lajos Soukup
On a theorem of Shapiro
We show that a theorem of Leonid B. Shapiro, which was proved under MA, is actually independent of ZFC. We also give a direct proof of the Boolean algebra version of the theorem under MA(Cohen).
Appeared in Math. Japonica 40 (1994), 199-206.
{"url":"http://www.renyi.hu/pub/setop/shapiro.html","timestamp":"2014-04-18T14:12:18Z","content_type":null,"content_length":"1280","record_id":"<urn:uuid:b8a798bc-54c0-4d52-a8da-33c68700c403>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Information Poster Abstracts

The poster boards are 6 feet wide by 3 feet high, and they mount on poles that are 6 feet tall. Posters should be no more than 3 feet wide but can be as high as presenters wish. 33x44 portrait orientation is preferred.

Asma Al-Qasimi
Dept. of Physics, University of Toronto
Coauthors: Daniel F. V. James
We present the results of our studies [1,2] of the dynamics of entanglement in both qubit and continuous-variable quantum systems (CVQS) undergoing decoherence. In particular, we consider the decay of quantum entanglement, quantified by the concurrence, of a pair of qubits, each of which is interacting with a reservoir at finite temperature T. For a broad class of initially entangled states, we demonstrate that the system always becomes disentangled in a finite time, i.e., entanglement sudden death (ESD) occurs. Our class of states includes all states which previously had been found to have long-lived entanglement in zero-temperature reservoirs. Our general result is illustrated by an example. Further, we study analogous phenomena in general two-mode-N-photon states undergoing pure dephasing. We show that for these states, ESD never occurs. These states are generalizations of the so-called High NOON states, shown to decrease the Rayleigh limit of λ to λ/N, which promises great improvement in the resolution of interference patterns if states with large N are physically realized. However, we show that in dephasing NOON states, the time to reach some critical visibility scales inversely with N^2. On the practical level, this shows that as N increases, the visibility degrades much faster, which is likely to be a considerable drawback for any practical application of these states.
[1] A. Al-Qasimi and D. F. V. James, Sudden death of entanglement at finite temperature, Physical Review A 77, 012117 (2008); arXiv:0707.2611.
[2] A. Al-Qasimi and D. F. V. James, Nonexistence of Entanglement Sudden Death in High NOON States, Optics Letters 34, 268-270 (2009); arXiv:0810.0550.

Optimal entanglement generation in hybrid quantum repeaters
Koji Azuma
Osaka University
Coauthors: Naoya Sota, Ryo Namiki, Sahin Kaya Ozdemir, Takashi Yamamoto, Masato Koashi, Nobuyuki Imoto
We propose a realistic protocol to generate entanglement between quantum memories at neighboring nodes in hybrid quantum repeaters. The generated entanglement includes only one type of error, which enables efficient entanglement distillation. In contrast to the known protocols with such a property, our protocol with ideal detectors achieves the theoretical limit of the success probability and the fidelity to a Bell state, promising higher efficiencies in the repeaters. We also show that the advantage of our protocol remains even with realistic threshold detectors.
Paper reference: arXiv:0811.3100

An infinite sequence of additive channels
Kamil Bradler
McGill University
We introduce a new (infinite) class of channels for which the additivity of the Holevo capacity holds. The additivity of the simplest channel of the class induces the additivity of another one, resulting in a domino effect. Moreover, for some of the channels we prove the existence of a single-letter formula for the quantum capacity and conjecture that it holds for all of the channels. Finally, we prove the additivity of the classical capacity for an infinite-dimensional channel for which a single-letter formula for the quantum capacity is already known and which appears in the context of quantum field theory in curved spacetime.
Paper reference: arXiv:0903.1638
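Several abstracts in this session quantify two-qubit entanglement by the concurrence. For reference, here is a minimal numpy sketch of Wootters' formula; the test state and all names are illustrative, not taken from any of the abstracts.

```python
import numpy as np

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix (4x4 numpy array):
    # C = max(0, l1 - l2 - l3 - l4), with li the sorted square roots of the
    # eigenvalues of rho (sy x sy) rho* (sy x sy).
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros((4, 4)); bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(concurrence(bell))   # 1.0 for a maximally entangled state
```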
Stefano Carretta
Universita' di Parma and CNR-INFM S3 national research center
Coauthors: P. Santini, G. Amoretti, F. Troiani, A. Ghirri, A. Candini, M. Affronte, G. Timco, F. Tuna, R.J. Pritchard, E.J.L. McInnes, and R.E.P. Winpenny
Molecular nanomagnets have been recently proposed as a novel route to a spin-based implementation of quantum-information processing. In this perspective, Cr7Ni antiferromagnetic (AF) rings [1] possess a number of appealing features: a level scheme suited for qubit encoding and manipulation [2], stability of the magnetic core under chemical manipulation, and relatively long and chemically engineerable decoherence times [3]. By analyzing EPR, INS, magnetization, and specific heat data, we show that Cr7Ni rings can be chemically linked to each other without altering their internal magnetic structure. We demonstrate that the inter-ring magnetic coupling can be tuned by choosing the linker, and we present calculations showing how maximally entangled states could be generated in a tripartite system comprising two Cr7Ni rings and a Cu ion by realistic microwave pulse sequences [4]. In addition, pulse sequences allowing one to perform single- and two-qubit gates with these systems will be discussed.
[1] S. Carretta, P. Santini, G. Amoretti, T. Guidi, J. R. D. Copley, Y. Qiu, R. Caciuffo, G. Timco, and R. E. P. Winpenny, Phys. Rev. Lett. 98 (2007) 167401.
[2] F. Troiani, M. Affronte, S. Carretta, P. Santini, G. Amoretti, Phys. Rev. Lett. 94 (2005) 190501.
[3] M. Affronte, S. Carretta, G. Timco, R. Winpenny, Chem. Commun., 2007, 1789.
[4] G. A. Timco, S. Carretta, F. Troiani, F. Tuna, R. J. Pritchard, E. J. L. McInnes, A. Ghirri, A. Candini, P. Santini, G. Amoretti, M. Affronte and R. E. P. Winpenny, Nature Nanotechnology, 4 (2009) 173.
Paper reference: Nature Nanotechnology 4, 173 (2009).

Scattering: a viable resource for control-limited entanglement distribution
Francesco Ciccarello
University of Palermo
Coauthors: M. Paternostro, G.M. Palma, M. Zarcone
The setup composed of a mobile particle, such as an electron or a photon, scattered by static quantum centers such as magnetic impurities or multi-level atoms, enjoys many intriguing physical features. I shall illustrate how these can be exploited in order to efficiently establish maximum entanglement between the spin degrees of freedom of remote particles in control-limited situations [1-4]. In particular, I will try to shed light on the pivotal issue: "How much time is needed for two scattering spins to reach their entangled steady state?". The answer to this question [4] reveals that the mechanism behind entanglement formation via dynamical scattering is characterised by features that are deeply different from the case of non-mobile coupled spins. Counter-intuitively, in the scattering case the time required for setting maximum entanglement is independent of the spin-spin coupling strength. Remarkably, it depends solely on kinetic parameters specifying the mobile particle wavepacket. In the case of photons such parameters are tunable, which is a major advantage from the viewpoint of QIP with limited control.
[1] A. T. Costa, S. Bose, and Y. Omar, Phys. Rev. Lett. 96, 230501 (2006).
[2] F. Ciccarello, G. M. Palma, M. Zarcone, Y. Omar & V. R. Vieira, New J. Phys. 8, 214 (2006); J. Phys. A 40, 7993 (2007); Las. Phys. 17, 889 (2007); F. Ciccarello, G. M. Palma & M. Zarcone, Phys. Rev. B 75, 205415 (2007); F. Ciccarello, G. M. Palma, M. Paternostro, M. Zarcone and Y. Omar, to appear in Sol. Stat. Sci. (2008).
[3] F. Ciccarello, M. Paternostro, M. S. Kim, and G. M. Palma, Phys. Rev. Lett. 100, 150501 (2008); Int. J. Quant. Inf. 6, 759 (2008).
[4] F. Ciccarello, M. Paternostro, G. M. Palma, M. Zarcone, arXiv:0812.0755v1 [quant-ph].
Decoherence effects in interacting qubits under the influence of correlated environments
Sumanta Das
Oklahoma State University
Coauthors: G. S. Agarwal
It is now well understood that one needs entanglement for both quantum logic operations and computations. Numerous methods of producing qubit-qubit entanglement have been investigated during the past decade. A method which is of particular interest in the context of quantum logic gate operations with systems like ion traps and semiconductor nanostructures relies on the coherent interactions among the qubits. However, generation of entanglement does not alone serve the purpose. One also needs sustained entanglement among the qubits as a computation progresses. Note that sustained entanglement can only be achieved if the quantum mechanical system under evolution is completely isolated from its surroundings. In practice, though, the system interacts with its environment and loses its coherence, thereby degrading entanglement. Thus the study of the dynamical evolution of two entangled qubits coupled to environmental degrees of freedom is of fundamental importance in quantum information sciences. In recent years numerous studies have been done in this respect. One study in particular predicted a remarkable new behavior in the entanglement dynamics of a bipartite system. It reported that a mixed state of an initially entangled two-qubit system, under the influence of a purely dissipative environment, becomes completely disentangled in a finite time [1]. This was termed Entanglement Sudden Death (ESD) and was recently observed [2]. Even though numerous investigations of ESD in a variety of systems have been done so far, the question of ESD in interacting qubits remains more or less open. In this paper we investigate the entanglement dynamics for a system of interacting qubits in contact with various correlated models of the environment. Note that correlated environments are known to be less harmful to entanglement and can be tailored to be decoherence free. Thus it is imperative to investigate the effect of coherent interaction among the qubits on the entanglement in correlated environments. We present explicit analytical results for the time dependence of the concurrence by considering an initial mixed state of the qubits [1] with phase-dependent coherence for correlated dissipative (T1 effect) and dephasing (T2 effect) models of the environment. For the case of non-interacting qubits and a correlated dissipative environment, we find no ESD. The entanglement can show a substantially slower decay. However, in the presence of inter-qubit interactions, entanglement exhibits dark and bright periodic features but no ESD. The dark periods signify the time intervals during which the qubits remain disentangled. We attribute this feature to the competition between coherent and dissipative interactions. For the case of non-interacting qubits and a correlated dephasing environment we find delayed ESD. The delay depends on the dephasing rates. However, in the presence of interaction among the qubits, we find that entanglement exhibits dark and bright periodic behavior which eventually leads to ESD. In addition we find that the onset of dark and bright periods is sensitive to the initial coherence. For both models we find that the frequency of dark and bright periods depends on the strength of interaction between the qubits as well as on the correlated decay and dephasing rates.
1. T. Yu and J. H. Eberly, Phys. Rev. Lett. 93, 140404 (2004); J. H. Eberly and T. Yu, Science 316, 579 (2007).
2. M. P. Almeida et al., Science 316, 579 (2007); J. Laurat et al., Phys. Rev. Lett. 99, 180504 (2007).
Highly efficient energy transfer in photosynthetic complexes and quantum anti-Zeno effect
Keisuke Fujii
Kyoto University
Coauthors: Katsuji Yamamoto
Recently it has been confirmed experimentally that long-lived electronic quantum coherence plays an important role in the highly efficient energy transfer process in photosynthesis [1]. Several investigations were then conducted to understand the underlying mechanism leading to this high efficiency [2]. They revealed that environment-induced decoherence can collaborate with coherence to enhance the energy transfer efficiency. Here we consider a simplified model of photosynthetic complexes and investigate the mechanism underlying the enhancement of the efficiency of energy transfer from the viewpoint of the quantum anti-Zeno effect. We also propose and analyze an experimentally feasible setup, which demonstrates the enhancement of energy transfer in an artificial physical system. We believe that understanding the mechanism underlying the enhancement of energy transfer in photosynthetic complexes gives a new way to optimize the control of quantum systems based on sophisticated natural systems, for example, to design optimized solar cells.
[1] G. S. Engel, et al., Nature 446, 782 (2007).
[2] M. B. Plenio, et al., New J. Phys. 10, 113019 (2008); P. Rebentrost, et al., New J. Phys. 11, 033003 (2009).

Measurement-induced entanglement localization on a three-photon system
Miroslav Gavenda
Palacky University Olomouc
Coauthors: Radim Filip (Palacky University Olomouc), Eleonora Nagali (Sapienza University of Rome), Fabio Sciarrino (Sapienza University of Rome), Francesco De Martini (Sapienza University of Rome)
Quantum entanglement, a key resource in quantum information tasks, readily interacts with surrounding systems. Due to this interaction, the entangled system of interest couples to surrounding systems, and the amount of entanglement is reduced or even lost and cannot serve its application purpose. If the entanglement is completely destroyed by the coupling, distillation protocols do not work, and other correcting protocols have been suggested, such as unlocking of hidden entanglement or entanglement localization. Entanglement localization can concentrate redistributed entanglement back into the pair, at least partially, just by measurement on the surrounding system and proper feed-forward quantum correction. We deal with the situation where the input state is a maximally entangled state of two qubits and another qubit serves as a surrounding system. Because the surrounding qubit is inaccessible before the coupling, it is in an unknown state. The qubits in our case are represented by polarization states of single photons. In this presentation we extensively study the influence of coherence between the surrounding photon and one photon from the entangled pair on the localization protocol, which is parametrized by the probability p that the surrounding photon is indistinguishable.
After the coupling between the photons, represented by the transmissivity T of a beamsplitter, the entanglement of the input state is reduced, and for some T the entanglement is completely redirected to the surrounding photon. We theoretically prove that for any linear coupling it is possible to localize non-zero entanglement back to the pair just by proper polarization-sensitive detection of the surrounding photon (after the coupling the surrounding photon is accessible). After the measurement on the surrounding photon we may use additional single-copy filtration on both photons from the pair to further raise the concurrence. Single-copy filtration probabilistically attenuates one polarization relative to an orthogonal one. Qualitatively, this localization is independent of the level of coherence between the coupling photons. Further, we show that single-copy filtration produces a state violating the Bell inequalities for any p and T (except the point [0, 0]). The theoretical results were experimentally tested using polarization-entangled photons created in an SPDC process. An extension of the localization procedure was calculated for multiple consecutive couplings to independent surrounding systems.
Paper reference: Phys. Rev. A 79, 060304 (2009)

High Quality Source of Entangled Photons based on a Polarizing Sagnac Interferometer Configuration
Deny Hamel
IQC, University of Waterloo
Coauthors: Chris Erven, Rainer Kaltenbaek, Gregor Weihs, Kevin Resch
We have built a compact source of entangled photon pairs based on a polarizing Sagnac interferometer. This type of source can incorporate very long nonlinear crystals, in our case a 25mm PPKTP crystal, which was pumped using a 404 nm laser diode. The spectral brightness has been measured to be approximately 50 000 pairs/(s·mW·nm). Using quantum state tomography, we have found the tangle of the two-photon state to be T = 0.968. This kind of entangled photon source finds applications in several key optical quantum technologies. One example is quantum key distribution, as the key generation rates are currently limited by the brightness and entanglement quality of available entangled photon sources.

Probabilities for Tripartite SLOCC Entanglement Transformations
Wolfram Helwig
University of Toronto
Coauthors: Wei Cui, Hoi-Kwong Lo
For a tripartite pure state of three qubits, it is well known that there are two inequivalent classes of genuine tripartite entanglement, namely the GHZ class and the W class. Any two states within the same class can be transformed into each other with stochastic local operations and classical communication (SLOCC) with a non-zero probability. The optimal conversion probability, however, is only known for special cases. Here, we derive new lower and upper bounds for the optimal probability of transformation from a GHZ state to other states of the GHZ class. A key idea in the derivation of the upper bounds is to consider the action of the LOCC protocol on a different input state, namely (1/√2)(|000⟩ − |111⟩), and demand that the probability of an outcome remains bounded by 1. Moreover, we generalize some of our results to the case where each party holds a higher-dimensional system. In particular, we found that the GHZ state generalized to three qutrits, i.e., |GHZ_3⟩ = (1/√3)(|000⟩ + |111⟩ + |222⟩), shared among three parties can be transformed to any state of the W class with probability 1 via LOCC.
Acknowledgments: This work has been supported by CIFAR, CIPI, Connaught, CRC, MITACS, NSERC, QuantumWorks and the University of Toronto. Their support is gratefully acknowledged.
[1] W. Dür et al., Phys. Rev. A 62, 062314 (2000)
[2] E. Chitambar et al., Phys. Rev. Lett. 101, 140502 (2008)
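Helwig et al.'s abstract turns on the inequivalence of the GHZ and W classes. A quick numerical illustration of that inequivalence, reusing the concurrence function sketched earlier in this listing: tracing out one qubit leaves pairwise entanglement of 2/3 for the W state but none for the GHZ state.

```python
import numpy as np
# uses concurrence() as defined in the earlier sketch

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)

def two_qubit_reduction(psi):
    m = psi.reshape(4, 2)        # rows index qubits 1,2; columns index qubit 3
    return m @ m.conj().T        # trace out the third qubit

print(concurrence(two_qubit_reduction(ghz)))   # 0.0
print(concurrence(two_qubit_reduction(w)))     # 2/3
```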
Machine Learning for Adaptive Quantum Measurement
Alexander Hentschel
Institute for Quantum Information Science at the University of Calgary
Coauthors: Barry C. Sanders
Classical measurement strategies are bounded in precision by the standard quantum limit. For instance, the estimation of an interferometric phase with a classical measurement strategy can only yield a precision ~ 1/√N at a cost of N photons. In contrast, quantum measurements could dramatically improve the scaling to 1/N, known as the Heisenberg limit. Employing feedback can enable Heisenberg-limited measurements, yet devising such feedback protocols is complicated and often involves clever guesswork. We sidestep this challenge by bringing machine learning techniques to bear in order to autonomously devise quantum measurement protocols. Specifically, our approach is based on a self-learning particle swarm algorithm that can be trained on a real-world experiment. Our method does not require any knowledge about the physical system, and our algorithm can learn to account for all experimental imperfections. By accounting for imperfections, our algorithm makes time-consuming error modelling and extensive calibration avoidable. I explain our technique for the case of interferometric phase estimation, which has applications to atomic clocks and gravitational wave detection. In particular, I show how our machine learning algorithm can design adaptive measurement protocols that estimate an interferometric phase with statistical errors close to the Heisenberg limit and better than any other protocol to date. Furthermore, I outline how our method is applicable to devising quantum control protocols.
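The standard quantum limit quoted in this abstract is easy to see in simulation: with N independent photons, the spread of a phase estimate shrinks only as 1/√N. The following Monte Carlo sketch is our own toy baseline, not the authors' particle swarm algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.7                                       # true interferometric phase
for N in (100, 1000, 10000):
    # Each photon exits the bright port with probability cos^2(phi/2);
    # estimate phi from the click fraction over 2000 repeated experiments.
    clicks = rng.random((2000, N)) < np.cos(phi / 2) ** 2
    est = 2 * np.arccos(np.sqrt(clicks.mean(axis=1)))
    print(N, est.std())                         # error falls like 1/sqrt(N)
```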
A fibre-coupled waveguide source of entangled photon pairs at telecom wavelengths used for an entanglement distribution network
Isabelle Herbauts
Institute for Quantum Computing
Coauthors: Hannes Huebel (Institute for Quantum Computing, Waterloo, ON, Canada), Bibiane Blauensteiner (Faculty of Physics, University of Vienna, Austria), Thomas Jennewein (Institute for Quantum Computing, Waterloo, ON, Canada), Andreas Poppe (Austrian Research Centers GmbH ARC, Vienna, Austria), Anton Zeilinger (Faculty of Physics, University of Vienna, and Austrian Academy of Sciences)
Many quantum communication protocols rely on the distribution of entangled photon pairs. We present a high-brightness fibre-coupled source of polarisation entanglement based on spontaneous parametric down-conversion (SPDC) in waveguides at telecom wavelengths (1550nm). Using fibre-pigtailed ppLN crystals, each containing an inscribed waveguide, we can profit from the clean and efficient χ(2) SPDC process in waveguides, and also from stable and efficient coupling to single-mode fibres. The crystals are arranged in a Mach-Zehnder interferometric setup and are pumped by a fibre-coupled cw laser operating at 775nm. The whole system, from the pump laser up to the receivers, is fibre- and waveguide-based, resulting in maximal stability, minimal losses, and the advantage of readily integrable telecom components in the 1550nm range. Since the phase-matching condition for degenerate type-I SPDC is very broad, the source yields entangled pairs over a wavelength range of 30nm. Due to the tight energy restriction from the narrowband pump photons, entangled pairs are symmetrically located in wavelength around 1550nm. This property was used to fan out the pairs to different fibres by means of a standard 8-channel DWDM multiplexer, resulting in four pairs of entangled channels. Coincidences were detected with two InGaAs detectors, the first operated in a quasi-free-running mode and the other triggered by the detection event of the first. Although the quasi-free-running detector had a duty cycle of only 10%, coincidence rates of up to 450Hz were measured for each channel pair. Entangled Bell states with fidelities up to 93% are created without subtraction of the background counts. With subtraction, the fidelities rise to 99%, indicating the proper functioning of our source. We furthermore implemented a 4-user network with a single central source distributing entanglement to any two pairs of users using optical switches in the fibres from the source. With those incorporated switches, we achieved two-photon entanglement distribution between any two users of the network on demand, opening the path to more complex quantum network architectures. The SPDC pair production, in a gain regime where higher-order terms can no longer be neglected, leads not only to high count rates but also to the emergence of multi-photon quantum states beyond the usual EPR pairs. While these are mainly of interest for fundamental investigations, they can also be used to increase the security and efficiency of adapted QKD schemes.

Tight Noise Thresholds for Quantum Computation with Perfect Stabilizer Operations
Mark Howard
University of California, Santa Barbara
Coauthors: Wim van Dam (University of California, Santa Barbara)
An important question in quantum computing is how much noise can be tolerated by a universal gate set before its quantum-computational power is lost. There have been a number of recent results concerning the power of a computational model with perfect stabilizer operations in addition to some imperfect non-stabilizer resource. This resource could be access to a non-Clifford gate or a supply of non-stabilizer states. Here we show that a tight threshold exists for all non-Clifford single-qubit gates undergoing depolarizing noise. This is in contrast to the situation wherein non-stabilizer qubit states are used; the thresholds in that case are not currently known to be tight.

Photon bunching in parametric down-conversion with continuous wave excitation
Hannes Huebel
Institute for Quantum Computing
Coauthors: I. Herbauts (Institute for Quantum Computing, Waterloo ON, Canada), S. Bettelli (University of Vienna), B. Blauensteiner (University of Vienna), A. Poppe (Austrian Institute of Technology)
Light sources based on spontaneous parametric down-conversion (SPDC) have become an essential component in the toolbox of quantum optics laboratories in the last decade, allowing state-of-the-art experiments on entanglement and quantum information. The topic of multi-pair emission of such sources is relevant both for applied quantum information and for more fundamental quantum optics experiments, since multi-pair emission affects the purity of the investigated state. We present here the first direct observation that the light field produced in SPDC via continuous excitation (cw laser) is bunched, i.e., it has thermal statistics. Previous results with continuously pumped SPDC have so far been interpreted as Poissonian statistics; we give an explanation why this has been the case. The thermality of the cw-pumped field should come as no surprise, since investigations with pulsed pumping demonstrated full photon bunching as early as 1998. We give experimental evidence that the statistics of a continuous-wave-pumped SPDC field are thermal by looking at photon emission from a single, 532nm cw-laser-pumped nonlinear crystal (periodically poled KTP of 30mm) for type-I down-conversion, in a single output arm only. A direct characterisation of the photon number statistics is provided by the second-order temporal correlation function g(2), using a Hanbury Brown-Twiss setup, in which only the signal photons of an SPDC source are registered to determine the marginal photon statistics. The data is then compared to the g(2) of a fully thermal field of the same coherence length, but smeared due to our insufficient timing resolution. The agreement with our measured data is excellent, and hence confirms the thermal statistics of cw-pumped SPDC. Our approach shows the feasibility of investigating photon number statistics with compact cw-pumped sources and might prove fruitful in the development of SPDC sources using cw pumping in a time domain not accessible to experiments with pulsed pumping, which are limited by the pulse repetition rate. In particular, given the rapid progress of entangled-photon sources based on SPDC, achieving higher photon-pair rates will immediately lead to higher double-pair production. The work presented here will be extended in the future to cover cw-SPDC sources used for the production of entanglement.
Paper reference: arXiv:0810.4785
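The bunching signature Huebel et al. measure is the g(2) of the marginal field: 2 for thermal (Bose-Einstein) statistics, 1 for Poissonian. A toy sampling check of those two limits, with an arbitrarily chosen mean photon number:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mean = 200_000, 0.1
thermal = rng.geometric(1 / (1 + mean), n) - 1   # Bose-Einstein photon numbers
coherent = rng.poisson(mean, n)                  # Poissonian photon numbers
for counts in (thermal, coherent):
    g2 = np.mean(counts * (counts - 1)) / np.mean(counts) ** 2
    print(round(g2, 2))                          # ~2.0 (bunched) vs ~1.0
```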
Accidental cloning of a single photon qubit in two-channel continuous-variable teleportation
Toshiki Ide
Kinki University
A single-photon input is automatically cloned in continuous-variable (CV) quantum teleportation because photon number is not preserved. In the two-mode polarization case, CV teleportation randomly creates clones in each polarization mode. The cloning fidelity of the input polarization is limited because of the no-cloning theorem. We analyze how close the output is to optimal cloning and show that nearly optimal cloning is achieved at experimentally feasible squeezing levels.

Weak measurements of photon polarization by two-path interference
Masataka Iinuma
Hiroshima University
Coauthors: Gen Taguchi, Holger F. Hofmann, and Yutaka Kadoya
We present the experimental realization of a weak measurement of photon polarization based on the two-path interference between the horizontally (H) and vertically (V) polarized components of the input light. If the polarization is unchanged, no interference is observed and the output is independent of the input polarization. To obtain a non-zero measurement strength, the polarizations in the arms of the interferometer are rotated towards each other, resulting in an interference that distinguishes between the diagonal polarizations. Effectively, we control the measurement back-action, which eliminates the distinguishability of H and V polarization. When the initial separation of H and V is reversed at a beam splitter, the interference effects automatically produce the measurement corresponding to this coherent back-action effect. In optical systems, such an interference-based approach to weak measurements may be easier to set up and control than a weak interaction between polarization and spatial propagation in birefringent materials. In particular, it may be a convenient method for performing large numbers of weak measurements in optical networks.
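Weak measurements like those in Iinuma et al.'s setup are characterized by weak values, which, unlike projective outcomes, can be complex or lie outside [0, 1]. A short numerical example with an illustrative pre- and post-selection of our own choosing:

```python
import numpy as np

H, V = np.array([1, 0]), np.array([0, 1])
pre = (H + 1j * V) / np.sqrt(2)      # pre-selected: circular polarization
post = (H + V) / np.sqrt(2)          # post-selected: diagonal polarization
Pi_H = np.outer(H, H)                # projector onto H, the weak observable
# Weak value: <post|Pi_H|pre> / <post|pre>
print((post.conj() @ Pi_H @ pre) / (post.conj() @ pre))   # 0.5 - 0.5j
```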
Local expansion of W states using linear optics
Rikizo Ikuta
Osaka University
Coauthors: Toshiyuki Tashima, Takashi Yamamoto, Masato Koashi, Nobuyuki Imoto
Recently, a number of proposals have been made for the local expansion of multipartite entangled states, which give a systematic way of producing entanglement over many qubits. In the case of W states, deterministic local expansion is impossible even in principle. In recent studies, probabilistic optical gates for locally expanding an N-photon W state to an (N+1)- or (N+2)-photon W state were proposed. These gates use polarization-dependent beamsplitters (PDBSs) or polarization-independent beamsplitters (BSs), and use a 1- or 2-photon Fock state as an ancilla. In this study, we discuss the maximum success probability of such gates composed of linear optics and an ancilla mode in a Fock state. We consider the case of mixing one photon from an N-photon W state with an ancillary n-photon Fock state at a PDBS, and then applying arbitrary linear optical operations to each output port to produce an (N+n)-photon W state. We have derived an upper bound on the success probability, which is achieved by a PDBS and (n-1) BSs. In the case of n=2, the optimal success probability was found to be higher than that of the expanding gate proposed before.

Signatures of "Quantum Interference" with a Classical Interferometer
Rainer Kaltenbaek
Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo
Coauthors: Jonathan Lavoie, Kevin J. Resch
Chirped-pulse interferometry (CPI) captures the metrological advantages of quantum Hong-Ou-Mandel (HOM) interferometry in a completely classical system. Modified HOM interferometers are the basis for a number of seminal quantum-interference effects. Here, the corresponding modifications to CPI allow for the first observation of classical analogues to the HOM peak and quantum beating. They also allow a new classical technique for generating phase super-resolution exhibiting a coherence length dramatically longer than that of the laser light, analogous to increased two-photon coherence lengths in entangled states.
Paper reference: Phys. Rev. Lett. 102, 243601 (2009)

Spiral Zone Plates
Aleksander Marek Kubica
University of Warsaw
The main goal of this paper is to study the optical properties of Spiral Zone Plates (SZPs). SZPs are apertures focusing light in a way similar to Fresnel Zone Plates, which have been investigated for several decades. SZPs can be used to generate optical vortices or to construct optical tweezers. First, I produced SZPs in a simple and inexpensive way. In my research I took photographs of the foci. I prepared an experimental setup, which included home-made components such as a guide rail with a camera and an electronic control system. I discovered and investigated many dependencies between the shape of the SZP, the intensity of the focused light, and the appearance of the foci. I proposed and experimentally verified formulae describing the position of the foci and the number of bright lines observed in a picture of a focus. I wrote computer programs to process the experimental data. Experimental results were compared with my numerical simulations, and I observed qualitative agreement between the two. I hope that it will be possible to apply the results of our research, e.g., in adaptive optics.
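For readers unfamiliar with zone plates: the zone boundaries of an ordinary Fresnel zone plate sit at radii r_m = sqrt(m*lambda*f + (m*lambda/2)^2), and a spiral zone plate additionally winds those boundaries in azimuth to imprint an optical vortex. The wavelength and focal length below are illustrative choices of ours, not Kubica's parameters.

```python
import numpy as np

lam, f = 632.8e-9, 0.5         # illustrative wavelength (m) and focal length (m)
m = np.arange(1, 11)
r = np.sqrt(m * lam * f + (m * lam / 2) ** 2)   # zone boundary radii
print(np.round(r * 1e3, 3))    # radii in millimetres; second term is tiny here
```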
Decoherence Suppression via Environment Preparation
Olivier Landon-Cardinal
Universite de Montreal
Coauthors: Richard MacKenzie
Decoherence provides an elegant framework to explain why an open quantum system coupled to its environment will exhibit a set of preferred states, usually ruling out coherent superpositions of arbitrary states. This framework relies essentially on the interaction between the system and its environment. In the simplest model of decoherence, it was readily realized that there exist initial states of the environment that allow for decoherence-free unitary evolution of the quantum system. We investigate the conditions under which such special initial states exist in a framework where the quantum system interacts with its environment and the environment also evolves by itself. The results obtained underline the crucial role of the environment's self-evolution. The ability to identify those special initial states and to prepare them might be used to store quantum states. Indeed, even if the environment cannot be controlled, it might be possible to prepare it in a specific initial state. However, our results restrict what can be expected from such a technique. More precisely, we obtain a mathematical characterization of the existence of an initial state allowing decoherence-free evolution in the presence of an interaction Hamiltonian and a self-evolution of the environment. This result is stated in terms of the structure of the two Hamiltonians. We also present topological evidence indicating that pairs of Hamiltonians allowing for decoherence-free evolution are rare among pairs of Hamiltonians.

Decoherence of a quantum gyroscope
Olivier Landon-Cardinal
Universite de Montreal
Coauthors: Richard MacKenzie
A quantum reference frame can be used as a valuable physical resource by allowing the measurement of relational observables on other systems. In a theory restricted by a superselection rule, e.g. one lacking a reference direction in space, those measurements would otherwise be impossible to perform. The study of the dynamics of a quantum reference frame is of great importance for the design of quantum information processors, since components of quantum hardware can be subject to quantum fluctuations and decoherence effects. One important question is to quantify the longevity of the reference, i.e., how many times it can be used before producing unreliable measurements. Poulin and Yard have shown that a quantum gyroscope prepared in a coherent state evolves semi-classically and is useful for measuring spin-½ particles along a direction. Here, we push the analysis further by focusing on the decoherence of the reference. We demonstrate that their model is equivalent to another physically motivated model where the quantum reference interacts with several spin-½ particles, one after the other, through a Heisenberg interaction. The techniques used to establish this result could be applied to interactions with spins of higher order. We also show that a superposition of two coherent states will decohere into a statistical mixture of those two states. Preliminary results indicate that these coherent states minimize purity loss. The study of the decoherence of such a quantum reference exhibits an interesting transition between quantum and semi-classical behaviour.

An experimental test of Svetlichny's inequality
Jonathan Lavoie
University of Waterloo, Institute for Quantum Computing
Coauthors: R. Kaltenbaek and K. Resch
It is well known that quantum mechanics is incompatible with local realistic theories. Svetlichny showed, through the development of a Bell-like inequality, that quantum mechanics is also incompatible with a restricted class of nonlocal realistic theories for three particles where any two-body nonlocal correlations are allowed. In the present work, we experimentally generate three-photon GHZ states to test Svetlichny's inequality. Our states are fully characterized by quantum state tomography using an overcomplete set of measurements and have a fidelity of (84±1)% with the target state. We measure a convincing, 3.6 standard deviations, violation of Svetlichny's inequality and rule out this class of restricted nonlocal realistic models.
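For reference, the quantum value that Lavoie et al.'s experiment approaches can be computed directly: for a GHZ state and equatorial measurements, one standard form of the Svetlichny combination (up to relabeling of the primed settings) reaches 4√2, above the bound of 4 obeyed by the restricted nonlocal models.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
meas = lambda t: np.cos(t) * sx + np.sin(t) * sy   # measurement in the x-y plane

ghz = np.zeros(8, complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
def E(a, b, c):
    # Three-party correlator on the GHZ state; equals cos(a + b + c).
    op = np.kron(np.kron(meas(a), meas(b)), meas(c))
    return (ghz.conj() @ op @ ghz).real

a, ap, b, bp, c, cp = 0, np.pi/2, 0, np.pi/2, -np.pi/4, np.pi/4
S = (E(a, b, c) + E(a, b, cp) + E(a, bp, c) + E(ap, b, c)
     - E(a, bp, cp) - E(ap, b, cp) - E(ap, bp, c) - E(ap, bp, cp))
print(S, 4 * np.sqrt(2))     # both ~5.657, beyond the Svetlichny bound of 4
```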
Fast Error Correction of Codeword-Stabilized Quantum Codes
Yunfan Li
University of California, Riverside
Coauthors: Ilya Dumer, Leonid Pryadko
Codeword-stabilized (CWS) codes are a general class of quantum codes that also includes the stabilizer codes. The main goal of this paper is to simplify error correction for CWS codes. Our analysis shows that these codes may require complicated nonlocal error-correcting measurements that have exponential complexity in the code length, instead of the polynomial complexity known for stabilizer codes. To simplify error correction, quasilinear CWS codes (with a partial group word operator set) are first introduced in this paper. It is then shown that for these codes nonlocal measurements can be decomposed into simple local measurements. Secondly, a new error correction scheme is proposed, which exponentially reduces the total number of decoding measurements and also converts a generic CWS code into a quasilinear CWS code. As a result, both the number and the complexity of nonlocal measurements are reduced exponentially for general CWS codes.

A SIC-POVM via Weak Measurements
Zachari Medendorp
University of Toronto
Coauthors: Fabian Torres, Krister Shalm, Aephraim Steinberg, Chris Fuchs
A symmetric informationally complete positive operator-valued measurement (SIC-POVM) is performed on a qutrit. Weak measurements allow us to obtain statistics in the 9 SIC-POVM bases.

Realization of a qubit with three p-wave superfluid vortices
Mikio Nakahara
Kinki University
Coauthors: Tetsuo Ohmi
We show that Majorana fermions trapped in three vortices in a p-wave superfluid form a qubit for topological quantum computing (TQC). It has already been proposed that a qubit may be implemented with two or four Majorana fermions, where a qubit operation is performed by exchanging the positions of vortices. The set of quantum gates thus obtained is, however, a discrete subset of the relevant unitary group. Here, we propose a new scheme, where three Majorana fermions form a qubit. We show that continuous qubit operations are possible by exchange of the positions of the Majorana fermions complemented with dynamical phase changes.

Towards experimental demonstration of solid state quantum memory
Sergey V. Polyakov
Coauthors: E.A. Goldschmidt, S.E. Beavan, J. Fan, and A.L. Migdall
We report on our experimental progress in implementing quantum memory in Pr-doped crystals, based on the original proposal of Duan, Lukin, Cirac and Zoller (DLCZ). We optimized the state preparation sequence theoretically, and have experimental evidence of successful selection and initialization of an ensemble of ions. Preliminary experiments with Raman scattering suggest the need to employ an extremely narrow (1MHz FWHM) filter to separate the non-classical light from the ensemble from a strong pump beam. We have developed and tested such a filter via hole-burning in a separate Pr-doped crystal.
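A qutrit SIC-POVM like the one Medendorp et al. implement can be written down explicitly: acting on a known fiducial vector with the nine Weyl-Heisenberg displacements gives nine states with pairwise overlap |⟨psi_i|psi_j⟩|^2 = 1/(d+1) = 1/4. A quick numerical check of that defining property:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)               # cyclic shift operator
Z = np.diag([1, w, w ** 2])                     # clock operator
fiducial = np.array([0, 1, -1]) / np.sqrt(2)    # standard qutrit SIC fiducial
vecs = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b) @ fiducial
        for a in range(3) for b in range(3)]
overlaps = np.array([[abs(u.conj() @ v) ** 2 for v in vecs] for u in vecs])
print(np.round(overlaps, 3))                    # 1 on the diagonal, 0.25 off it
```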
Gaussian and Non-Gaussian Entanglement in Coupled Waveguides
Amit Rai
Oklahoma State University
Coauthors: Sumanta Das and G. S. Agarwal
This work on the generation of entanglement and its survival in coupled waveguides is motivated by the possibility of using coupled waveguides as basic units of quantum circuits. We study entanglement in terms of quantitative measures and examine the robustness of waveguide structures in retaining entanglement. We first study the dynamics of entanglement for a non-Gaussian state. We assume that single photons are coupled into each of the two waveguides. In this case the initially separable state evolves into an entangled state. We quantify the entanglement of the state at time t using the log negativity E, which is related to the negative eigenvalues of the partially transposed density matrix of the system. The log negativity E is a non-negative quantity, and a non-zero value of E means that the state is entangled. We found that the entanglement quantified by the log negativity shows oscillatory behavior, and the system gets entangled and disentangled periodically. Further, we investigate the dynamics of entanglement for Gaussian states. For this purpose we assume that squeezed light is injected into each of the two waveguides. We evaluate the logarithmic negativity to study the time evolution of entanglement. We find that the log negativity oscillates between non-zero and zero values. This suggests that during its propagation the separable input state evolves into an entangled state and vice versa, due to the coupling between the waveguide modes. We also address the question of decoherence in coupled waveguides by considering the leakage of the modes in the case of initially separable Gaussian states. We find that a substantial amount of entanglement persists between the waveguide modes even for considerable decay rates.
1. J. C. F. Matthews, A. Politi, A. Stefanov, and J. L. O'Brien, Nature Photonics 3, 346 (2009).
2. D. W. Berry and H. M. Wiseman, Nature Photonics 3, 317 (2009).
3. A. Politi, M. J. Cryan, J. G. Rarity, S. Yu, J. L. O'Brien, Science 320, 646 (2008).

Measuring Bohmian Trajectories of a Photon Using Weak Measurement
Sylvain RAVETS
University of Toronto
Coauthors: Sacha KOCSIS & Boris BRAVERMAN
In the article "Grounding Bohmian Mechanics in Weak Measurements and Bayesianism", Howard Wiseman proposed an operational definition for a measurement of what he calls the "naively observable velocity" of a particle. We use this proposal to experimentally reconstruct the trajectories of a single photon in a double-slit interferometer. These trajectories have been theoretically derived in "Bohmian Trajectories for Photons".

Spatial Super-Resolution with Triphoton N00N States
L. A. Rozema
Centre for Quantum Information & Quantum Control and Institute for Optical Sciences, University of Toronto
Coauthors: L. K. Shalm, A. M. Steinberg, M. N. O'Sullivan and R. W. Boyd
The proposal that quantum entanglement can lead to sub-Rayleigh resolution in optics has received much attention lately. Here we present an experiment in which three entangled photons are used to demonstrate spatial super-resolution.
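The super-resolution in Rozema et al.'s triphoton experiment comes from the N-fold phase sensitivity of N00N states: |N,0⟩ + e^{iN·phi}|0,N⟩ produces cos(N·phi) fringes. A one-function illustration:

```python
import numpy as np

def noon_fringe(phi, N):
    # Detection probability after interfering a N00N state: cos(N*phi) fringes,
    # i.e., N times as many oscillations per 2*pi as single-photon interference.
    return 0.5 * (1 + np.cos(N * phi))

phi = np.linspace(0, 2 * np.pi, 9)
print(np.round(noon_fringe(phi, 1), 2))
print(np.round(noon_fringe(phi, 3), 2))   # three fringes per 2*pi
```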
Robust entanglement generation in a system based on a nitrogen-vacancy centre via numerically-optimised control pulses
Ressa Said
Macquarie University
Coauthors: Jason Twamley
We discuss schemes for the generation of an entangling gate between the electronic and nuclear spins in a system of a single nitrogen-vacancy centre and an adjacent carbon-13 atom in diamond, which is robust against two types of systematic errors: pulse-length and off-resonance errors. The errors occur when the apparatus controlling the dynamics of the system operates in an imprecise but reproducible manner. We investigate the effects when these systematic errors are present in various pulse sequences which, if perfectly executed, yield the desired entangling gate. We examine their effect in a basic sequential application of rectangular pulses of microwave and radio-frequency radiation on the composite electron-nuclear spin system. Furthermore, when exposed to such systematic errors, we compare the performance of this basic sequential pulse with more robust pulses: composite pulses, and numerically-optimised pulses produced by a modified algorithm derived from gradient ascent pulse engineering (GRAPE). The sequential pulses have previously been used to generate two-qubit entanglement [1], while composite pulses are known to be capable of correcting systematic errors in nuclear magnetic resonance (NMR) experiments [2]. The GRAPE algorithm was also initially developed in NMR experiments to produce pulses that minimise the time required to implement a target unitary operator [3] and some quantum algorithmic elements [4]. GRAPE pulses have been experimentally demonstrated in a single-qubit trapped-ion system [5]. They have also been used to implement high-fidelity single-qubit operations in a noisy environment due to random telegraph noise in superconducting solid-state qubits [6]. We adapt and develop the methods of composite pulses and the GRAPE algorithm to operate in the combined electron and nuclear spin system when exposed to these types of systematic errors, to produce a highly robust entangling gate. We have found that the gate created by the GRAPE numerically-optimised pulses is more robust against systematic errors and has a faster implementation time than that required by the corresponding composite pulses.
[1] F. Jelezko and J. Wrachtrup, J. Phys. Condens. Matter, 16, R1089 (2004).
[2] H. K. Cummins, G. Llewellyn, and J. A. Jones, Phys. Rev. A 67, 042308 (2003).
[3] N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbruggen and S.J. Glaser, J. Magn. Reson., 172, 296 (2005).
[4] T. Schulte-Herbruggen, A. Sporl, N. Khaneja, and S. J. Glaser, Phys. Rev. A, 72, 042331 (2005).
[5] N. Timoney, V. Elman, S. Glaser, C. Weiss, M. Johanning, W. Neuhauser, and Chr. Wunderlich, Phys. Rev. A, 77, 052334 (2008).
[6] M. Mottonen, R. de Sousa, J. Zhang, and K. B. Whaley, Phys. Rev. A, 73, 022332 (2006).
Paper reference: arXiv:0903.3827
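GRAPE, which this abstract and Wilhelm's poster below both build on, is at heart gradient ascent on a gate fidelity over piecewise-constant control amplitudes. The caricature below uses crude finite-difference gradients on a single-qubit toy problem of our own; real GRAPE uses analytic gradients and realistic multi-spin Hamiltonians.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
target = expm(-1j * (np.pi / 2) * sx)     # target gate: a rotation about x
K, dt = 10, 0.1
u = np.zeros(K)                           # piecewise-constant control amplitudes

def fidelity(u):
    U = np.eye(2, dtype=complex)
    for uk in u:                          # propagate through the K time slices
        U = expm(-1j * uk * sx * dt) @ U
    return abs(np.trace(target.conj().T @ U)) / 2

for _ in range(300):                      # naive gradient ascent on fidelity
    grad = np.zeros(K)
    for k in range(K):
        du = np.zeros(K); du[k] = 1e-6
        grad[k] = (fidelity(u + du) - fidelity(u)) / 1e-6
    u += 2.0 * grad
print(fidelity(u))                        # approaches 1
```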
Switchable effective interactions in molecule-based quantum gates
Paolo Santini
Universita' di Parma and S3-CNR-INFM
Coauthors: S. Carretta, G. Amoretti
Magnetic molecules containing strongly exchange-coupled transition-metal ions have recently been proposed as promising candidates for qubit encoding and manipulation [1,2]. A major obstacle of this approach is the need to vary the value of molecule-molecule exchange couplings during each two-qubit operation. This can hardly be done directly in a controlled way. We show that molecules made of isosceles antiferromagnetic triangles constitute advantageous units to solve this problem [3]. In fact, the peculiar spin structure of their low-energy wavefunctions enables switchable effective intermolecular couplings in the presence of permanent microscopic interactions. Another promising scheme is a magnetic link between neighboring nanomagnets made by a magnetic complex (e.g., a dimer) with a singlet ground state. Effective interactions can be switched on by exciting the linking complex to a magnetic state. Systems of this type have just begun to be synthesized.
[1] M.N. Leuenberger and D. Loss, Nature 410, 789 (2001).
[2] F. Troiani, A. Ghirri, M. Affronte, S. Carretta, P. Santini, G. Amoretti, S. Piligkos, G. Timco and R. E. P. Winpenny, Phys. Rev. Lett. 94, 207208 (2005).
[3] S. Carretta, P. Santini, G. Amoretti, F. Troiani, and M. Affronte, Phys. Rev. B 76, 024408 (2007).

Optimised control of SCRAP in a lambda-type three-level system
Johann-Heinrich Schönfeldt
Macquarie University
Coauthors: Stojan Rebic, Jason Twamley
Inhomogeneous broadening of energy levels is one of the principal limiting factors for achieving "slow" or "stationary" light in solid-state media by means of electromagnetically induced transparency (EIT), a quantum version of stimulated Raman adiabatic passage (STIRAP). Stark-shift-chirped rapid adiabatic passage (SCRAP) has been shown to be far less sensitive to inhomogeneous broadening than STIRAP, a population transfer technique to which it is closely related. We further optimise the pulses used in SCRAP to be even less sensitive to inhomogeneous broadening in a lambda-type three-level system. The optimised pulses perform at higher fidelity than the standard Gaussian pulses for a wide range of detunings (i.e., large inhomogeneous broadening).
Paper reference: arXiv:0905.0052v1

Demonstration of a Unitarily Correctable Code using entangled photons
Kurt Schreiter
Institute for Quantum Computing, University of Waterloo
Coauthors: A. Pasieka, R. Kaltenbaek, K.J. Resch, and D.W. Kribs
Noise poses a challenge for any real-world implementation in quantum information science. The theory of error correction deals with this problem via methods to encode and recover quantum information in a way that is resilient against that noise. Unitarily correctable codes are an error correction technique wherein a single unitary recovery operation is applied without the need for an ancilla Hilbert space. Here, we present the first optical implementation of a non-trivial unitarily correctable code for a noisy quantum channel with no decoherence-free subspaces or noiseless subsystems. We show that recovery of our initial states is achieved with high fidelity (≥ 0.97), quantitatively proving the efficacy of this unitarily correctable code.

Quantum Systems and Control Theory: Constraints by Symmetry and by Relaxation
Thomas Schulte-Herbrüggen
TU-Munich, Dept. Chemistry
Coauthors: Uwe SANDER
We investigate the universality of multi-qubit controlled systems in architectures of various symmetries in coupling type and topology. Explicit reachability sets under symmetry constraints are provided. Thus for a given experimental coupling architecture, practical decision problems can be solved in a unified way: (i) can a target Hamiltonian be simulated? (ii) can a target gate be synthesised? (iii) to what extent is the system observable by a given set of detection operators? Constraints by relaxation are sketched in the framework of Lie semigroups.
Its practical implications reach from Markovianity of quantum channels to average Liouvillians. In turn, lack of symmetry is a convenient necessary condition for full controllability. Though much easier to assess than the well-established Lie-algebra rank condition, it is not yet sufficient. We present simple further conditions that add to lack of symmetry ensuring full controllability and universality of the controlled hardware set-up.

Paper reference: arXiv:0904.4654, arXiv:0811.3906

Stability and Utility of Chiral Spin Currents for Quantum Computing in Quantum Dots
Catherine Stevenson, Dalhousie University
Coauthors: Jordan Kyriakidis (Dalhousie University)

Within degenerate subspaces, states are commonly classified according to symmetries present in the underlying Hamiltonian. However, alternate classifications may be preferable with respect to stability and coherence, particularly if they exhibit many-body correlations. We have surveyed the low-lying degenerate eigenstates of two-dimensional quantum dots containing both spatial [O(1)] and spin [O(3)] rotational symmetry. The three-particle system in particular is four-fold degenerate with S = 1/2, and its states are conventionally described by the quantum numbers (Lz, Sz) = (±1, ±1/2). Transitions between these states are essentially single-particle transitions. However, states with a definite spin chirality may be alternatively created from these states, each carrying a well-defined chiral spin current. These are classified by quantum numbers (C, Q) = (±1, ±1), where C is the chirality and Q is a topological charge associated with the spin winding. Within a degenerate subspace, C and Q are conserved quantities. These are many-body correlated states and can be expected to yield longer lifetimes in the presence of single-particle perturbations compared to states classified by Lz and Sz. Our calculations employ configuration-interaction techniques for confined particles with long-range Coulomb repulsion. We directly compute the correlated many-body eigenstates of the system containing thousands of Slater determinants. We present conclusive evidence of statistical and Coulomb induced chiral spin textures within energetically degenerate subspaces, and present results examining their stability with respect to single-particle perturbations.

Quantum phase estimation in the presence of phase noise in qubit systems
Berihu Teklu, Universita` di Milano
Coauthors: Stefano Olivares, Matteo G. A Paris

We address quantum estimation protocols designed to estimate physical parameters of qubit gates in the presence of phase noise. We derive analytical formulae for the precision of phase estimation obtainable with a qubit probe and show the optimality of equatorial qubit probes. We explicitly evaluate the quantum Fisher information and show that the ultimate quantum limit to precision may be achieved by an observable measurable with current technology. An experimental setup for the implementation of the suggested measurement is discussed in some detail.

Paper reference: in preparation

Comparison of maximum-likelihood and linear reconstruction schemes in quantum measurement tomography
Peter Turner, University of Tokyo
Coauthors: Takanori Sugiyama, Mio Murao

The effects on quantum states caused by measurement apparatuses can be described in general by sets of completely positive maps called instruments.
There exists a linear reconstruction scheme for the instrument describing a given measurement apparatus from experimental data, but the scheme has the disadvantage that it can give unphysical reconstructions. In this poster we propose a maximum-likelihood reconstruction scheme that addresses this disadvantage. We show that our scheme always gives a physical reconstruction, and that it does so more efficiently than the linear scheme.

Fock-Space Coherence in Multilevel Quantum Dots
Eduardo Vaz, Dalhousie University
Coauthors: Jordan Kyriakidis

A Fock-space coherence can occur in systems where states with different particle numbers are simultaneously available. In such systems -- for example, a quantum dot with open transport channels -- the lifetime and robustness of these coherent states may be large relative to the more common single-particle-number coherence -- for example, a Hilbert-space coherence between spin states. We have developed a microscopic, non-Markovian theory to investigate real-time features of this Fock-coherence in multilevel quantum dots far from equilibrium. In a model where the dominant relaxation mechanism is through sequential tunnelling transport, we observe a decoupling between the evolution of the Fock-space coherence and that of the occupation probabilities for the dot states. In experimentally relevant parameter regimes, the lifetime of the Fock-space coherence is dramatically increased even when the Hilbert-space coherence between states with the same particle number decays to zero. This is a dramatic example of how a many-body coherence can remain robust even in the presence of rather large single-particle noise.

Optimal control of a qubit coupled to a Non-Markovian Environment
Frank K. Wilhelm
Coauthors: P. Rebentrost, I. Serban, T. Schulte-Herbrueggen, F.K. Wilhelm

A central challenge for implementing quantum computing in the solid state is decoupling the qubits from the intrinsic noise of the material. We investigate the implementation of quantum gates for a paradigmatic, non-Markovian model: a single-qubit coupled to a two-level system that is exposed to a heat bath. We systematically search for optimal pulses using a generalization of the novel open systems gradient ascent pulse engineering algorithm. Next to the known optimal bias point of this model, there are optimal pulses which lead to high-fidelity quantum operations for a wide range of decoherence parameters.

Paper reference: Phys. Rev. Lett. 102, 090401 (2009).

A Light-Matter Interface for Quantum Information Processing
Xingxing Xing, Department of Physics and CQIQC, University of Toronto
Coauthors: Luciano Cruz, Amir Feizpour, Aephraim M. Steinberg

We will present our design and preliminary data of a light-matter interface for tasks such as two-qubit quantum gates in quantum information. Using cavity resonance, we modify the spectrum of spontaneous parametric down-conversion to match the atomic resonance. We use laser cooled and trapped atoms to see the interaction between light and matter.

Deterministic Quantum Phase Gates for Two Atoms Trapped in Separate Cavities Mediated by An Optical Fiber
Zhen-Biao Yang, Fuzhou University
Coauthors: Shi-Biao Zheng (Department of Physics, Fuzhou University, P. R. China)

The existence of quantum computing algorithms shows that a quantum computer can solve specific problems that are intractable with classical computers [1]. This discovery has stimulated a flurry of research into this mathematical concept.
It has been shown that any quantum computational operation can be decomposed into a series of quantum logic gates, and that a two-qubit controlled phase gate together with one-qubit gates is universal for constructing a quantum computer [2]. For the physical implementation of quantum computation, a quantum system is needed. The real quantum system should satisfy five requirements as follows [3]: (i) qubits can be initialized to arbitrary values; (ii) gate operation on specific qubits can be turned on and off at will; (iii) qubits can be read easily; (iv) quantum gates are faster than the decoherence time; (v) the system should be scalable. Several physical systems are suitable candidates, such as trapped ions [4], cavity QED [5], superconducting circuits [6], semiconductor quantum dots [7], linear optics [8], impurity spins in solids [9], etc. Cavity quantum electrodynamics (QED) [10], which concerns the interaction of atoms and photons within miniature cavities, provides a natural setting for quantum logic operation. Atomic systems are qualified to act as qubits, as appropriate internal electronic states can coherently store information over very long timescales; moreover, the qubit information can be written in and read out easily by the well-established optical pumping techniques [11]. Photons are suitable for distribution of information throughout the system. High-finesse cavities can provide good insulation against the environment and thus can hold photons over a long enough timescale before dissipation. The best cavity reported so far is an ultrahigh finesse microwave Fabry-Perot resonator, which can in principle allow millions of gate operations [12]. The strong coupling between atoms and cavity modes can be used to perform logic operations [10]. The switching on and off of the atom-field interaction is achievable through simple control light pulses. Many of the experimental demonstrations and theoretical schemes for quantum logic operations, including quantum state preparation, are based on a single cavity. In principle, the number of atomic qubits can be increased on a large scale, with each qubit optionally controlled to interact with the cavity in order to satisfy specific computing purposes. However, as the number of atomic qubits increases, the manipulation not only of the qubit-cavity interaction but also of the qubits themselves becomes difficult, because unwanted cross-interactions from qubits that are temporarily out of consideration might arise and thus influence or spoil the useful interaction. In order to solve this problem, the best way is to use many linked cavities, each containing one or several atoms and acting as a network node for logic operations. There are two ways to link separate cavities. If the neighboring cavities are linked via detection of leaking photons [13], then only a probabilistic gate is possible. If instead the neighboring cavities are directly linked so that they coherently couple with each other [14], a deterministic gate is realized. One way or another, such types of quantum connectivity can essentially allow the distribution of quantum computation across a network [15]. We focus theoretically on deterministic quantum phase gates for two atoms trapped in separate cavities that are connected by a short optical fiber [16]. The interaction between the atom and the local cavity as well as between the fiber and the two cavities is resonant. In this way, the communication between the two atomic qubits is established. We use asymmetric encoding for the two qubits.
Taking advantage of an additional atomic ground state decoupled from the atom-cavity interaction, the entire system only evolves in the single-excitation subspace. Through the selection of the atom-cavity and fiber-cavity coupling strengths and the interaction time, we find that a conditional pi-phase gate with remarkably high fidelity can be achieved. We study the stability of the gates and the influences of dissipation and show that such gates are robust. The gates possess the following features: (i) the gate operation time is very short due to the resonant interaction in the coherent evolution of the entire system; (ii) dissipation due to spontaneous emission and photon leakage is greatly reduced as the interaction for the gate operation involves only one excitation; (iii) the gates are valid for a wide range of optional system parameters and thus are very flexible; (iv) the gates are scalable.

[1] P. Shor, in Proceedings of the 35th Annual Symposium on the Foundation of Computer Science (IEEE Computer Society Press, Los Alomitos, CA, 1994), pp.124-134; L. K. Grover, Phys. Rev. Lett. 80, 4329 (1998).
[2] D. P. DiVincenzo, Phys. Rev. A 51, 1015 (1995); D. P. DiVincenzo, Science 270, 255 (1995); S. Lloyd, Phys. Rev. Lett. 75, 346 (1995).
[3] D. P. DiVincenzo, Fortschritte der Physik, 48 771 (2000).
[4] J. I. Cirac and P. Zoller, Phys. Rev. Lett. 74, 4091 (1995); C. Monroe, D. M. Meekhof, B. E. King, W. M. Itano, and D. J. Wineland, Phys. Rev. Lett. 75, 4714 (1995); D. Kielpinski, C. Monroe, and D. J. Wineland, Nature 417, 709 (2002).
[5] P. Berman, 1994, Ed., Cavity Quantum Electrodynamics (Academic Press); S. Haroche and J. M. Raimond, Exploring the Quantum (Oxford University Press, Oxford, 2006).
[6] D. Est`eve, Superconducting qubits, in Proceedings of the Les Houches 2003 Summer School on Quantum Entanglement and Information Processing, edited by D. Esteve, J.-M. Raimond (Elsevier, 2004).
[7] D. Loss, and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
[8] E. Knill, R. Laflamme, and G.J. Milburn, Nature 409, 46 (2001); R. Raussendorf, and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
[9] B. Kane, Nature 393, 133 (1998).
[10] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001);
[12] S. Kuhr et al., Appl. Phys. Lett. 90, 164101 (2007).
[13] S. Bose, P. L. Knight, M. B. Plenio, and V. Vedral, Phys. Rev. Lett. 83, 5158 (1999); S. B. Zheng, Phys. Rev. A 77, 044303 (2008).
[14] J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Phys. Rev. Lett. 78 3221 (1997); T. Pellizzari, Phys. Rev. Lett. 79 5242 (1997).
[15] H. J. Kimble, Nature 453, 1023 (2008).
[16] Z. B. Yang, H. Z. Wu, W. J. Su, and S. B. Zheng, Accepted by Phys. Rev. A.

Direct observation of Hardy's paradox by joint weak measurement with an entangled photon pair
Kazuhiro Yokota, Osaka University
Coauthors: Takashi Yamamoto, Masato Koashi, Nobuyuki Imoto

We implemented a joint weak measurement of the trajectories of two photons in a photonic version of Hardy's experiment. The joint weak measurement has been performed via an entangled meter state in polarization degrees of freedom of the two photons. Unlike Hardy's original argument in which the contradiction is inferred by retrodiction, our experiment reveals its paradoxical nature as preposterous values actually read out from the meter. Such a direct observation of a paradox will give us a new insight into the spooky action of quantum mechanics.
Paper reference: arXiv:0811.1625

Coherent control of vibrational states in optical lattices via interference between one- and two-phonon excitation
Chao Zhuang, University of Toronto
Coauthors: Chris Paul, Samansa Maneshi, Nick Chisholm, Luciano Cruz, Aephraim Steinberg

We demonstrate that the control of quantum vibrational states in an optical lattice can be achieved by using interference between two-phonon excitation at ω and one-phonon excitation at 2ω. We use this technique to improve the ratio of coherent coupling to loss in our system. In our experiment, 85Rb atoms are trapped in a vertical optical lattice, leading to a tilted-washboard potential when the effect of gravity is considered. While neighboring vibrational states in one well may be coherently coupled by a sinusoidal drive of the lattice displacement at the secular frequency ω, this also leads to leakage into higher excited states and eventual loss from the lattice. We use coherent control to mitigate this problem, by adding a simultaneous parametric drive at 2ω, directly coupling states of the same parity. The resonant drive corresponds to Raman scattering of laser beams phase-modulated (PM) at ω, while the parametric drive corresponds to Raman scattering of laser beams amplitude-modulated (AM) at 2ω. We demonstrate experimentally that quantum interference between the absorption of two PM quanta and one AM quantum can be used to control the branching ratio, and specifically, to improve the ratio of coherent coupling to loss.
{"url":"http://www.fields.utoronto.ca/programs/scientific/09-10/CQIQCIII/abstracts2.html","timestamp":"2014-04-20T23:50:28Z","content_type":null,"content_length":"94948","record_id":"<urn:uuid:0d03699c-5772-4963-9b9c-fa77aa585b1d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Weston, MA 02493
Harvard 2X: Success for Business, English, Law, Math, and Test Prep
...I like the thorough way that ACT math tests pre-algebra, elementary algebra, intermediate algebra, coordinate geometry, plane geometry, and trigonometry. Anyone who has a good grounding in these topics and reads the questions carefully can do almost perfectly....
Offering 10+ subjects including trigonometry
{"url":"http://www.wyzant.com/_trigonometry_tutors.aspx","timestamp":"2014-04-21T13:46:29Z","content_type":null,"content_length":"59488","record_id":"<urn:uuid:4e8d1a62-79a4-4f1e-8803-c3736bb083f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 20: (a) The shear deformation of a torus generates a (projective) non-Abelian geometric phase, which is a generator of a projective representation of the modular transformations. The last shear-deformed torus is the same as the original torus after a coordinate transformation. (b) The squeezing deformation of a torus generates a (projective) non-Abelian geometric phase, which is the other generator of a projective representation of the modular transformations. The last squeeze-deformed torus is the same as the original torus after a coordinate transformation.
{"url":"http://www.hindawi.com/journals/isrn/2013/198710/fig20/","timestamp":"2014-04-19T00:18:20Z","content_type":null,"content_length":"9087","record_id":"<urn:uuid:3c0d77bf-388c-456a-babe-13b77daaaa53>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Provided by: orpie

orpie is a console-based RPN calculator with an interactive visual interface.

CAUTION: while this manpage should be suitable as a quick reference, it may be subject to miscellaneous shortcomings in typesetting. The definitive documentation is the user manual provided with Orpie in PDF format.

This section describes how to use Orpie in its default configuration. After familiarizing yourself with the basic operations as outlined in this section, you may wish to consult the orpierc(5) manpage to see how Orpie can be configured to better fit your needs.

The interface has two panels. The left panel combines status information with context-sensitive help; the right panel represents the calculator's stack. (Note that the left panel will be hidden if Orpie is run in a terminal with less than 80 columns.)

In general, you perform calculations by first entering data on to the stack, then executing functions that operate on the stack data. As an example, you can hit 1<enter>2<enter>+ in order to add 1 and 2.

ENTERING REAL NUMBERS
To enter a real number, just type the desired digits and hit enter. The space bar will begin entry of a scientific notation exponent. The 'n' key is used for negation. Here are some examples:

   Keypresses                     Resulting Entry
   1.23<enter>                    1.23
   1.23<space>23n<enter>          1.23e-23
   1.23n<space>23<enter>          -1.23e23

ENTERING COMPLEX NUMBERS
Orpie can represent complex numbers using either cartesian (rectangular) or polar coordinates. See PERFORMING BASIC COMMAND OPERATIONS to see how to change the complex number display mode. A complex number is entered by first pressing '(', then entering the real part, then pressing ',' followed by the imaginary part. Alternatively, you can press '(' followed by the magnitude, then '<' followed by the phase angle. The angle will be interpreted in degrees or radians, depending on the current setting of the angle mode (see PERFORMING BASIC COMMAND OPERATIONS). Examples:

   Keypresses                     Resulting Entry
   (1.23, 4.56<enter>             (1.23, 4.56)
   (0.7072<45<enter>              (0.500065915655126, 0.50006591...
   (1.23n,4.56<space>10<enter>    (-1.23, 45600000000)

ENTERING MATRICES
You can enter matrices by pressing '['. The elements of the matrix may then be entered as described in the previous sections, and should be separated using ','. To start a new row of the matrix, press '[' again. On the stack, each row of the matrix is enclosed in a set of brackets; for example, the matrix with rows (1, 2) and (3, 4) would appear on the stack as [[1, 2][3, 4]]. Examples of matrix entry:

   Keypresses                     Resulting Entry
   [1,2[3,4<enter>                [[1, 2][3, 4]]
   [1.2<space>10,0[3n,5n<enter>   [[ 12000000000, 0 ][ -3, -5 ]]
   [(1,2,3,4[5,6,7,8<enter>       [[ (1, 2), (3, 4) ][ (5, 6), (...

ENTERING DATA WITH UNITS
Real and complex scalars and matrices can optionally be labeled with units. After typing in the numeric portion of the data, press '_' followed by a units string. The format of units strings is described in the UNITS FORMATTING section. Examples of entering dimensioned data:

   Keypresses                     Resulting Entry
   1.234_N*mm^2/s<enter>          1.234_N*mm^2*s^-1
   (2.3,5_s^-4<enter>             (2.3, 5)_s^-4
   [1,2[3,4_lbf*in<enter>         [[ 1, 2 ][ 3, 4 ]]_lbf*in
   _nm<enter>                     1_nm

ENTERING EXACT INTEGERS
An exact integer may be entered by pressing '#' followed by the desired digits. The base of the integer will be assumed to be the same as the current calculator base mode (see PERFORMING BASIC COMMAND OPERATIONS to see how to set this mode). Alternatively, the desired base may be specified by pressing space and appending one of {b, o, d, h}, to represent binary, octal, decimal, or hexadecimal, respectively.
On the stack, the representation of the integer will be changed to match the current base mode. Examples:

   Keypresses                     Resulting Entry
   #123456<enter>                 # 123456‘d
   #ffff<space>h<enter>           # 65535‘d
   #10101n<space>b<enter>         # -21‘d

Note that exact integers may have unlimited length, and the basic arithmetic operations (addition, subtraction, multiplication, division) will be performed using exact arithmetic when both arguments are integers.

ENTERING VARIABLES
A variable name may be entered by pressing '@' followed by the desired variable name string. The string may contain alphanumeric characters, dashes, and underscores. Example:

   Keypresses                     Resulting Entry
   @myvar                         @ myvar

Orpie also supports autocompletion of variable names. The help panel displays a list of pre-existing variables that partially match the name currently being entered. You can press '<tab>' to iterate through the list of matching variables. As a shortcut, keys <f1>-<f4> will enter the variables (''registers'') @ r01 through @ r04.

ENTERING PHYSICAL CONSTANTS
Orpie includes definitions for a number of fundamental physical constants. To enter a constant, press 'C', followed by the first few letters/digits of the constant's symbol, then hit enter. Orpie offers an autocompletion feature for physical constants, so you only need to type enough of the constant to identify it uniquely. A list of matching constants will appear in the left panel of the display, to assist you in finding the desired choice. The following is a list of Orpie's physical constant symbols:

   Symbol    Physical Constant
   NA        Avogadro's number
   k         Boltzmann constant
   Vm        molar volume
   R         universal gas constant
   stdT      standard temperature
   stdP      standard pressure
   sigma     Stefan-Boltzmann constant
   c         speed of light
   eps0      permittivity of free space
   u0        permeability of free space
   g         acceleration of gravity
   G         Newtonian gravitational constant
   h         Planck's constant
   hbar      Dirac's constant
   e         electron charge
   me        electron mass
   mp        proton mass
   alpha     fine structure constant
   phi       magnetic flux quantum
   F         Faraday's constant
   Rinf      "infinity" Rydberg constant
   a0        Bohr radius
   uB        Bohr magneton
   uN        nuclear magneton
   lam0      wavelength of a 1eV photon
   f0        frequency of a 1eV photon
   lamc      Compton wavelength
   c3        Wien's constant

All physical constants are defined in the Orpie run-configuration file; consult the orpierc(5) manpage if you wish to define your own constants or change the existing definitions.

USING AN EXTERNAL EDITOR
Orpie can also parse input entered via an external editor. You may find this to be a convenient method for entering large matrices. Pressing 'E' will launch the external editor, and the various data types may be entered as illustrated by the examples below:

   Data Type        Sample Input String
   exact integer    #12345678‘d, where the trailing letter is one of the base characters {b, o, d, h}
   real number      -123.45e67
   complex number   (1e10, 2) or (1 <90)
   real matrix      [[1, 2][3.1, 4.5e10]]
   complex matrix   [[(1, 0), 5][1e10, (2 <90)]]
   variable         @myvar

Real and complex numbers and matrices may have units appended; just add a units string such as ''_N*m/s'' immediately following the numeric portion of the expression. Notice that the complex matrix input parser is quite flexible; real and complex matrix elements may be mixed, and cartesian and polar complex formats may be mixed as well. Multiple stack entries may be specified in the same file, if they are separated by whitespace. For example, entering (1, 2) 1.5 into the editor will cause the complex value (1, 2) to be placed on the stack, followed by the real value 1.5.
The input parser will discard whitespace where possible, so feel free to add any form of whitespace between matrix rows, matrix elements, real and complex components, etc.

EXECUTING BASIC FUNCTION OPERATIONS
Once some data has been entered on the stack, you can apply operations to that data. For example, '+' will add the last two elements on the stack. By default, the following keys have been bound to such operations:

   Keys    Operations
   +       add last two stack elements
   -       subtract element 1 from element 2
   *       multiply last two stack elements
   /       divide element 2 by element 1
   ^       raise element 2 to the power of element 1
   n       negate last element
   i       invert last element
   s       square root function
   a       absolute value function
   e       exponential function
   l       natural logarithm function
   c       complex conjugate function
   !       factorial function
   %       element 2 mod element 1
   S       store element 2 in (variable) element 1
   ;       evaluate variable to obtain contents

As a shortcut, function operators will automatically enter any data that you were in the process of entering. So instead of the sequence 2<enter>2<enter>+, you could type simply 2<enter>2+ and the second number would be entered before the addition operation is applied. As an additional shortcut, any variable names used as function arguments will be evaluated before application of the function. In other words, it is not necessary to evaluate variables before performing arithmetic operations on them.
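As an aside, the stack discipline the manpage describes is easy to model in a few lines of code. The sketch below is an illustrative toy, not Orpie's actual implementation (Orpie is written in OCaml); it mimics a handful of the default key bindings listed above.

```python
# Toy model of the RPN stack discipline described above; not Orpie itself.
import math

def rpn_eval(tokens):
    """Evaluate a list of RPN tokens, e.g. ['1', '2', '+']."""
    binary = {'+': lambda a, b: a + b,
              '-': lambda a, b: a - b,      # subtract element 1 from element 2
              '*': lambda a, b: a * b,
              '/': lambda a, b: a / b,      # divide element 2 by element 1
              '^': lambda a, b: a ** b}
    unary = {'n': lambda a: -a,             # negate
             'i': lambda a: 1.0 / a,        # invert
             's': math.sqrt,                # square root
             'e': math.exp,                 # exponential
             'l': math.log}                 # natural logarithm
    stack = []
    for tok in tokens:
        if tok in binary:
            b, a = stack.pop(), stack.pop() # b = element 1 (top), a = element 2
            stack.append(binary[tok](a, b))
        elif tok in unary:
            stack.append(unary[tok](stack.pop()))
        else:
            stack.append(float(tok))
    return stack

print(rpn_eval(['1', '2', '+']))  # [3.0], like typing 1<enter>2<enter>+
```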
The following functions are available as abbreviations: Abbreviations Functions inv inverse function pow raise element 2 to the power of element 1 sq square last element sqrt square root function abs absolute value function exp exponential function ln natural logarithm function 10^ base 10 exponential function log10 base 10 logarithm function conj complex conjugate function sin sine function cos cosine function tan tangent function sinh hyperbolic sine function cosh hyperbolic cosine function tanh hyperbolic tangent function asin arcsine function acos arccosine function atan arctangent function asinh inverse hyperbolic sine function acosh inverse hyperbolic cosine function atanh inverse hyperbolic tangent function re real part of complex number im imaginary part of complex number gamma Euler gamma function lngamma natural log of Euler gamma function erf error function erfc complementary error function fact factorial function gcd greatest common divisor function lcm least common multiple function binom binomial coefficient function perm permutation function trans matrix transpose trace trace of a matrix solvelin solve a linear system of the form Ax = b mod element 2 mod element 1 floor floor function ceil ceiling function toint convert a real number to an integer type toreal convert an integer type to a real number add add last two elements sub subtract element 1 from element 2 mult multiply last two elements div divide element 2 by element 1 neg negate last element store store element 2 in (variable) element 1 eval evaluate variable to obtain contents purge delete a variable total sum the columns of a real matrix mean compute the sample means of the columns of a real matrix sumsq sum the squares of the columns of a real matrix var compute the unbiased sample variances of the columns of a real matrix varbias compute the biased (population) sample variances of the columns of a real matrix stdev compute the unbiased sample standard deviations of the columns of a real matrix stdevbias compute the biased (pop.) sample standard deviations of the columns of a matrix min find the minima of the columns of a real matrix max find the maxima of the columns of a real matrix utpn compute the upper tail probability of a normal distribution uconvert convert element 2 to an equivalent expression with units matching element 1 ustand convert to equivalent expression using SI standard base units uvalue drop the units of the last element Entering abbreviations can become tedious when performing repetitive calculations. To save some keystrokes, Orpie will automatically bind recently-used operations with no prexisting binding to keys <f5>-<f12>. The current autobindings can be viewed by pressing ’h’ to cycle between the various pages of the help panel. 
PERFORMING BASIC COMMAND OPERATIONS
In addition to the function operations listed in the section EXECUTING BASIC FUNCTION OPERATIONS, a number of basic calculator commands have been bound to single keypresses:

   Keys         Operations
   \            drop last element
   |            clear all stack elements
   <pagedown>   swap last two elements
   <enter>      duplicate last element (when entry buffer is empty)
   u            undo last operation
   r            toggle angle mode between degrees and radians
   p            toggle complex display mode between rectangular and polar
   b            cycle base display mode between binary, octal, decimal, hex
   h            cycle through multiple help windows
   v            view last stack element in a fullscreen editor
   E            create a new stack element using an external editor
   P            enter 3.14159265 on the stack
   C-L          refresh the display
   <up>         begin stack browsing mode
   Q            quit Orpie

In addition to the function operations listed in the section EXECUTING FUNCTION ABBREVIATIONS, there are a large number of calculator commands that have been implemented using the abbreviation syntax:

   Abbreviations   Calculator Operation
   drop            drop last element
   clear           clear all stack elements
   swap            swap last two elements
   dup             duplicate last element
   undo            undo last operation
   rad             set angle mode to radians
   deg             set angle mode to degrees
   rect            set complex display mode to rectangular
   polar           set complex display mode to polar
   bin             set base display mode to binary
   oct             set base display mode to octal
   dec             set base display mode to decimal
   hex             set base display mode to hexadecimal
   view            view last stack element in a fullscreen editor
   edit            create a new stack element using an external editor
   pi              enter 3.14159265 on the stack
   rand            generate a random number between 0 and 1 (uniformly distributed)
   refresh         refresh the display
   about           display a nifty ''About Orpie'' screen
   quit            quit Orpie

BROWSING THE STACK
Orpie offers a stack browsing mode to assist in viewing and manipulating stack data. Press <up> to enter stack browsing mode; this should highlight the last stack element. You can use the up and down arrow keys to select different stack elements. The following keys are useful in stack browsing mode:

   Keys      Operations
   q         quit stack browsing mode
   <left>    scroll selected entry to the left
   <right>   scroll selected entry to the right
   r         cyclically ''roll'' stack elements downward, below the selected element (inclusive)
   R         cyclically ''roll'' stack elements upward, below the selected element (inclusive)
   v         view the currently selected element in a fullscreen editor
   E         edit the currently selected element with an external editor
   <enter>   duplicate the currently selected element

The left and right scrolling option may prove useful for viewing very lengthy stack entries, such as large matrices. The edit option provides a convenient way to correct data after it has been entered on the stack.
These prefixes may be applied to any of the following exhaustive sets of units: String Length Unit m meter ft foot in inch yd yard mi mile pc parsec AU astronomical unit Ang angstrom furlong furlong pt PostScript point pica PostScript pica nmi nautical mile lyr lightyear String Mass Unit g gram lb pound mass oz ounce slug slug lbt Troy pound ton (USA) short ton tonl (UK) long ton tonm metric ton ct carat gr grain String Time Unit s second min minute hr hour day day yr year Hz Hertz String Temperature Unit K Kelvin R Rankine Note: No, Celsius and Fahrenheit will not be supported. Because these temperature units do not share a common zero point, their behavior is ill-defined under many operations. String ‘‘Amount of Substance’’ Unit mol Mole String Force Unit N Newton lbf pound force dyn dyne kip kip String Energy Unit J Joule erg erg cal calorie BTU british thermal unit eV electron volt String Electrical Unit A Ampere C Coulomb V volt Ohm Ohm F Farad H Henry T Tesla G Gauss Wb Weber Mx Maxwell String Power Unit W Watt hp horsepower String Pressure Unit Pa Pascal atm atmosphere bar bar Ohm Ohm mmHg millimeters of mercury inHg inches of mercury String Luminance Unit cd candela lm lumen lx lux Note: Although the lumen is defined by 1_lm = 1_cd * sr, Orpie drops the steridian because it is a dimensionless unit and therefore is of questionable use to a calculator. String Volume Unit ozfl fluid ounce (US) cup cup (US) pt pint (US) qt quart (US) gal gallon (US) L liter All units are defined in the Orpie run-configuration file; consult the orpierc(5) manpage if you wish to define your own units or change the existing definitions. Orpie is Free Software; you can redistribute it and/or modify it under the terms of the GNU General Public License (GPL), Version 2, as published by the Free Software Foundation. You should have received a copy of the GPL along with this program, in the file ‘‘COPYING’’. Orpie includes portions of the ocamlgsl [1] bindings supplied by Olivier Andrieu, as well as the curses bindings from the OCaml Text Mode Kit [2] written by Nicolas George. I would like to thank these authors for helping to make Orpie possible. Orpie author: Paul Pelzl <pelzlpj@eecs.umich.edu> Orpie website: http://www.eecs.umich.edu/~pelzlpj/orpie Feel free to contact me if you have bugs, feature requests, patches, etc. I would also welcome volunteers interested in packaging Orpie for various platforms. [1] http://oandrieu.nerim.net/ocaml/gsl/ [2] http://www.nongnu.org/ocaml-tmk/ [3] http://www.gnu.org/software/gnu-arch/. orpierc(5), orpie-curses-keys(1)
{"url":"http://manpages.ubuntu.com/manpages/hardy/man1/orpie.1.html","timestamp":"2014-04-18T18:17:19Z","content_type":null,"content_length":"33612","record_id":"<urn:uuid:9fe327ab-c1ca-4109-9fdd-5b8adf642952>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Use HLookup and VLookup functions to find records in large worksheets By Colin Wilcox If you work with large lists in Excel, you can use lookup functions to retrieve individual records from those lists quickly. This column explains how to use two of the functions: VLOOKUP and HLOOKUP. Applies to Microsoft Office Excel 2003 Microsoft Excel 2000 and 2002 A friend of mine recently came to me with a problem: "I'm trying to use the lookup function in Excel, and it's not working," she fumed. "Can you document them for me in plain English?" My friend manages a large Web site. She uses Microsoft Access to store and manage data about the number of hits that her site receives, and she imports the data into Microsoft Excel for analysis. To make the data easier to find, she places the records in several small worksheets instead of one huge sheet. She'd heard that lookup functions could save time by finding related data in the various So, let's start with the basics: You use lookup functions to find related records in large worksheets. When you use a lookup function, you're essentially saying, "Here's a value. Go to another location, find a match for my value, and then show me the words or numbers that reside in a cell that corresponds to that matching value." If it helps, you can think of that third value as your search result. The tips in this article explain how to use two of the most popular lookup functions: VLOOKUP and HLOOKUP. In the function names, the V stands for vertical and the H stands for horizontal. You use VLOOKUP when you need to search through one or more columns of information, and you use HLOOKUP when you need to search through one or more rows of information. Use VLOOKUP to search through columns of data To start, download the Excel 2002 sample file: Lookup Function Sample Data. The file uses fictitious data that demonstrates my friend's problem. The file contains two worksheets: Page Views and Pages. The Page Views sheet contains a set of IDs that uniquely identify each site page, plus information on the number of hits each page received during September 2002. The Pages worksheet contains the page IDs and the names of the pages that correspond to each ID. The page IDs appear in both worksheets because the source database uses a normalized data structure. In that structure, the IDs enable users to find the data for a given page. For a gentle introduction to normalized data structures, see Design Access databases with normal forms and Excel. Because the data resides in columns, we'll use the VLOOKUP function to enter a page ID on the first worksheet and return the corresponding page name from the second worksheet. Follow these steps: 1. In the Page Views worksheet, click cell E3 and type VLOOKUP. 2. In cell E4, type Result. 3. Click cell F4 and type this formula in either the cell or the formula bar: Note #N/A appears in cell F4 because the function expects to find a value in cell F3, but that cell is empty. You'll add a value to cell F3 in the next step. For more information about fixing #N/A errors, see Correct a #N/A error. 4. Copy the value from cell A4 into cell F3, and then press ENTER. Home Page appears in cell F4. 5. Repeat steps 3 and 4 using the value in cell A5. Comics & Humor appears in cell F4. Without having to navigate to the second worksheet, you found out which pages receive the majority of visits from site users. That's the value of the lookup functions. You can use them to find records from large data sets with less time and effort. 
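For readers who also work outside Excel, the same lookup idea is a one-liner in code. Below is a rough pandas sketch of the VLOOKUP example above; the column names and the sample rows are hypothetical stand-ins for the article's Page Views and Pages worksheets, which are keyed on a page ID.

```python
# Rough pandas equivalent of the VLOOKUP example (hypothetical data).
import pandas as pd

page_views = pd.DataFrame({'page_id': [101, 102, 103],
                           'hits':    [5400, 2300, 900]})
pages = pd.DataFrame({'page_id': [101, 102, 103],
                      'page_name': ['Home Page', 'Comics & Humor', 'About']})

# Like =VLOOKUP(id, Pages!A2:B39, 2, FALSE): exact match on the key column.
merged = page_views.merge(pages, on='page_id', how='left')
print(merged)

# Or a single lookup, dict-style:
lookup = dict(zip(pages['page_id'], pages['page_name']))
print(lookup.get(102))   # 'Comics & Humor'; None plays the role of #N/A
```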
Understanding the parts of the function
The function that you used in the previous section performed several discrete actions. The following figure describes each action:

The following table lists and describes the arguments that you use with the function. As needed, the information explains how to fix #VALUE and #REF errors that may crop up when you use the functions. You need to know this information to use the function successfully. The HLOOKUP function uses the same syntax and arguments.

Part: =VLOOKUP() or =HLOOKUP()
Required? Yes
Purpose: Function name. Like all functions in Excel, you precede the name with an equal sign (=) and place the required information (or, in geek terms, the arguments) in parentheses after the function name. In this case, you use commas to separate all parameters or arguments.

Part: F3
Required? Yes
Purpose: Your search term: the word or value that you want to find. In this case, the search term is the value that you enter into cell F3. You could also embed one of the page ID numbers directly into the function. Excel Help calls this part of the function the lookup_value. If you don't specify a search value, or you reference a blank cell, Excel displays the #N/A error message.

Part: Pages!A2:B39
Required? Yes
Purpose: The range of cells that you want to search. In this case, the cells reside on another worksheet, so the worksheet name (Pages) precedes the range values (A2:B39). The exclamation point (!) separates the sheet reference from the cell reference. If you only want to search through a range residing on the same page as the function, remove the sheet name and exclamation point. You can also use a named range in this part of the function. For example, if you assigned the name "Data" to a range of cells on the Pages worksheet, you could use 'Pages'!Data. Excel Help calls this part of the function the table_array value. If you use a range lookup value of TRUE, then you must sort the values in the first column of your table_array argument in ascending order. If you don't, the function cannot return accurate results.

Part: 2
Required? Yes
Purpose: The column in your defined range of cells that contains the values you want to find. For example, column B in the Pages worksheet contains the page names that you want to find. Since B is the second column in the defined range of cells (A2:B39), the function uses 2. If your defined range included a third column, and the values you wanted to find resided in that column, you would use 3, and so on. Remember that the column's physical position in the worksheet does not matter. If your cell range starts at column R and ends at column T, you use 1 to refer to column R, 2 to refer to column S, and so on. Excel Help calls this part of the function the col_index_num value. If you use the HLOOKUP function, Excel Help calls this part the row_index_num value, and you follow the same guidelines.
Note: If you use the wrong value in this argument, Excel displays an error message. You can make either of these errors:
● If the value is less than 1, Excel displays #VALUE!. To fix the problem, enter a value of 1 or greater. For more information about #VALUE! errors see Correct a #VALUE! error
● If the value exceeds the number of columns in the cell range, Excel displays #REF! because the formula can't reference the specified number of columns. For more information about fixing #REF errors, see Correct a #REF! error.

Part: FALSE
Required? Optional
Purpose: Exact match. If you use FALSE, VLOOKUP returns an exact match. If Excel cannot find an exact match, it displays the #N/A error message. For more information about fixing #N/A errors, see Correct a #N/A error. If you set the value to TRUE or leave it blank, VLOOKUP returns the closest match to your search term. If you set the value to TRUE, you must sort the values in the first column of your table array in ascending order. Excel Help calls this part of the function the range_lookup value.

General guidelines for using the VLOOKUP function
Keep these rules in mind as you use the VLOOKUP function:
● If you want the function to return exact matches, you must sort the values in your table array in ascending order or the function will fail.
● The function starts searching at the top left of the cell range that you define, and it searches columns to the right of your starting point.
● You must always separate the arguments with commas.

Use the HLOOKUP function to search through rows of data
The steps in the previous section used the VLOOKUP function because the data resided in columns. The steps in this section explain how to use the HLOOKUP function to find data in one or more rows.
1. In the Pages worksheet, copy the data in the cell range A2 to B39.
2. Scroll to the top of the worksheet, right-click cell D2, and then click Paste Special.
3. In the Paste Special dialog box, select Transpose, and then click OK. Excel pastes the data into two rows starting at cell D2 and ending at cell AO3.
4. In the Page Views worksheet, type HLOOKUP in cell E6, type Result in cell E7, and then enter this formula into cell F7:
5. In cell F6, enter the ID from cell A4 and then press ENTER. Home Page appears in cell F6.
You get the same type of result, but you searched through a set of rows instead of columns. The HLOOKUP function uses the same arguments as the VLOOKUP function. However, instead of declaring the column that contains the values you want to find, you declare the row. Next, let's look at an important principle that applies to both functions. Go to the Pages worksheet and follow these steps:
1. In cells D4 through M4, type anything that comes to mind. You can type anything you want, just add some text or numbers to those cells.
2. On the Page Views worksheet, alter the HLOOKUP formula so it reads as follows:
When you finish changing the formula, the value you entered in cell D4 appears. Here's the principle to keep in mind: The value that you want to find does not have to reside in a cell next to your match value. It can reside in any number of columns to the right of your match value, or in any number of rows below your match value. Just make sure that you extend your table_array and col_index_num or row_index_num arguments so that they encompass the values that you want to find.

General guidelines for using the HLOOKUP function
Keep these rules in mind as you use the HLOOKUP function:
● The function starts searching at the top left of the cell range that you define, and it searches the rows below and to the right of your starting point.
● You must always separate the arguments with commas.
● If you want the function to return exact matches, you must sort the values in your data in ascending order.
Yes, you can sort horizontally. To do so, follow these steps:
1. In the Pages worksheet, click cell D2.
2. On the Data menu, click Sort.
3. In the Sort dialog box, click Options.
4. In the Sort Options dialog box, click Sort left to right, and then click OK.
5. In the Sort dialog box, click OK to sort the data.
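The TRUE (approximate-match) mode mentioned above is worth a closer look: it behaves like a binary search that returns the last key less than or equal to the search value, which is exactly why the keys must be sorted ascending. A small sketch with Python's bisect module makes the mechanism concrete; the breakpoints and grades here are made up for illustration.

```python
# Why approximate-match lookup requires sorted keys: it is a binary
# search for the largest key <= the search value (made-up data).
import bisect

breakpoints = [0, 50, 80, 90]     # must be ascending, as the article says
grades      = ['F', 'C', 'B', 'A']

def approx_lookup(score):
    i = bisect.bisect_right(breakpoints, score) - 1
    if i < 0:
        return None               # below the first key: Excel would show #N/A
    return grades[i]

print(approx_lookup(85))          # 'B' -- largest breakpoint not exceeding 85
```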
In the next Power User column The next Power User column, More ways to use HLookup and VLookup functions, explains how to: ● Use ToolTips to write functions. ● Use a mix of absolute and relative cell references to return multiple records. ● Debug your functions. ● Use the Lookup Wizard. The wizard automates the process of finding data, but it uses the INDEX and MATCH functions instead of the HLOOKUP and VLOOKUP functions. ● For more information about using the VLOOKUP and HLOOKUP functions, including code samples, see Help in Excel. About the author Colin Wilcox writes for the Office Help team. In addition to contributing to the Office Power User Corner column, he writes articles and tutorials for Microsoft Data Analyzer.
{"url":"http://office.microsoft.com/en-us/excel-help/use-hlookup-and-vlookup-functions-to-find-records-in-large-worksheets-HA001056320.aspx?CTT=6&origin=EC001055307","timestamp":"2014-04-25T07:38:59Z","content_type":null,"content_length":"34850","record_id":"<urn:uuid:464882cc-d555-4c24-9987-ed1f2e778395>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverse of the Riemann zeta function

I'm wondering if there is any information on the inverse of the Riemann zeta function (not its reciprocal, but its functional inverse). This would obviously be a multi-valued function.

Answer: I fiddled with this recently; however I did not yet arrive at an interesting result for the inverse of the $\zeta$. But one could use the alternating-zeta (or Dirichlet's eta- $\eta$) function. It is not too difficult to construct an invertible powerseries for the related eta-function; one can simply consider the sequence of formal powerseries for $ {1 \over (1+1)^x }, {1 \over (1+2)^x}, ... $ and adds the coefficients at the same powers of x. These produce non-convergent series, but which can be Euler-summed. In Pari/GP one does simply:

ps_eta = sumalt(k=0,(-1)^k*taylor(1/(1+k)^x,x))

and gets $ \small pseta(x)= 0.500000 + 0.225791 x - 0.0305146 x^2 - 0.00391245 x^3 + 0.00208483 x^4 - 0.000312274 x^5 + O(x^{6}) $

From this a powerseries for zeta is also constructible:

ps_zeta = ps_eta /(1-2*2^-x)

$ \small \begin{array} {lll} pszeta(x) &=& - 0.500000 + (0.0810615-1) x - (0.00317823+1) x^2 - (0.000785194+1) x^3 \\\ & & + (0.000120700-1) x^4 - (0.00000194090+1) x^5 + O(x^{6}) \end{array} $

(That powerseries is related to the power series using the Stieltjes-constant by replacing x by x+1 )

The power series for $\eta$ can be recentered at the fixpoint $ \small fp \sim 0.629334 $ to get a powerseries without a constant term which can then be inverted. Let's call this $\eta_{fp} = \eta (x+fp)-fp $; then the powerseries begins like

$ \small \begin{array} {lll} pseta_{fp}(x) &=& 0.184574 x - 0.0337023 x^2 + 0.000152965 x^3 + 0.00117594 x^4 \\\ & & - 0.000254950 x^5 + 0.0000216757 x^6 + 0.00000147274 x^7 - 0.000000714222 x^8 + O(x^{9}) \end{array} $

From this we can generate a powerseries for the inverse of $ \eta_{fp}$. The range of convergence is small; but using eulersummation one can compute values for the inverse of $ \eta_{fp}$. Even fractional iterates are accessible; here is a plot which shows the fractional iteration of $\eta(x,h)$, beginning at x=1 where h is the iteration-parameter (all is computed using the centered version $\eta_{fp}$ ). The plot has to be read that at h=0 we have $\eta(x,0)=x=1 $, at h=1 we have $\eta(x,1)=\eta(x)= \log(2)$, at h=2 we have $\eta(x,2)=\eta(\eta(x))$, at h->inf we get the fixpoint fp, and the inverse is at h=-1: $ \eta(x,-1)=\eta^{-1}(x) \to \infty $

I'm not yet ready with a small script/sketch of an article where I explore this in a bit more detail. However, as I said, the inverse of zeta does not behave so nicely - the eigenvalue for the iteration is negative and one cannot uniquely get fractional roots out of this. Also the according powerseries may be too bad configured/has too small range of convergence.

[update] Here is a plot for a range of the inverse alternating zeta; for the "extreme" values at the borders I used Eulersummation because the power series has very small range of convergence.

Answer: The question is about the value distribution of $\zeta(s)$; it is considered (without speaking of inverse) in some detail in Chapter XI of Titchmarsh's book.

Answer: The following Mathematica program gives the first few zeta zeros to good accuracy by applying series reversion or the inverse as you call it, twice:

Clear[n]
Do[
 Clear[x, a, b, c, d]; b = 10;
 c = N[2*Pi*Exp[1]*(n - 11/8)/Exp[1]/LambertW[(n - 11/8)/Exp[1]], 10];
 a = Normal[InverseSeries[Series[(Zeta[1/2 + I*x]), {x, c, b}], x]];
 x = 0; d = N[Re[a]];
 Clear[a, x];
 a = Normal[InverseSeries[Series[(Zeta[1/2 + I*x]), {x, d, b}], x]];
 x = 0; Print[N[Re[a]]], {n, 1, 20}]

14.1347, 21.022, 25.0109, 30.4249, 32.9351, 37.5862, 40.9187, 43.3271, 48.0052, 49.7738, 52.9703, 56.4462, 59.347, 60.8318, 65.1125, 67.0798, 69.5464, 72.0672, 75.7047, 77.1448

Of course by giving not the approximation of the zeros as a starting point, but the zeros themselves, it converges even faster for $x=0$. Expanding for other values not close to zeros, the function appears to be much less orderly when looking at a few values.
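A numerical footnote to the thread: away from series manipulations, one can invert $\zeta$ locally with an ordinary root finder. The mpmath sketch below is mine, not from the answers; the starting point s0 selects which branch of the multi-valued inverse you land on.

```python
# Invert zeta(s) = w locally via a root finder (one branch at a time).
from mpmath import mp, zeta, findroot

mp.dps = 30
w = mp.mpf(2)                            # solve zeta(s) = 2
s = findroot(lambda s: zeta(s) - w, 1.5) # start on the real axis, s > 1
print(s)                                 # roughly 1.73 on the real axis
print(zeta(s))                           # sanity check: ~2
```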
{"url":"http://mathoverflow.net/questions/72529/inverse-of-the-riemann-zeta-function","timestamp":"2014-04-17T18:29:42Z","content_type":null,"content_length":"69012","record_id":"<urn:uuid:98437b6b-61bc-4b57-8d08-bb758610b6cb>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Always able to assume normality if sample size >= 31 September 21st 2009, 08:39 AM Always able to assume normality if sample size >= 31 I know at a sample size of 31 or larger, and from a random sample selection, you are able to assume normality. Is this in general always true? If no, how do you discern? September 22nd 2009, 05:00 PM You can't assume normality unless you are told it is a normal distribution. At least that's what I was taught. September 24th 2009, 08:02 AM Well if both sample size x >= 31 and the raw data distribution mimics the central limit theorem then I would say go ahead and use parametric tests (that is assume normality)
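To make the rule of thumb being debated above concrete, here is a small simulation (my addition, not from the thread): draw means of n = 31 samples from a clearly non-normal (exponential) distribution and see how the means behave, as the central limit theorem predicts.

```python
# Quick check of the "n >= 30" rule of thumb via the CLT.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 31, 10_000
means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)

print(means.mean())   # close to 1.0 (the population mean)
print(means.std())    # close to 1/sqrt(31), about 0.18, as the CLT predicts
# A histogram of `means` is already fairly bell-shaped at n = 31, but the
# more skewed the raw data, the larger n must be: the cutoff is a rule of
# thumb, not a theorem.
```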
{"url":"http://mathhelpforum.com/statistics/103482-always-able-assume-normality-if-sample-size-31-a-print.html","timestamp":"2014-04-18T17:17:43Z","content_type":null,"content_length":"4051","record_id":"<urn:uuid:3f8e5383-0098-4848-b6dd-f5412806b83c>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
The most complicated problem in Derivative of Inverse Trigonometric Functions

September 12th 2009, 09:59 PM
Hello everyone, i'm new in here. I do hope I'm welcome in here. Pls.. give me some complicated, not basic, really complicated problems in derivatives of inverse trigonometric.. If possible pls. post it asap... many thanks

May I ask why? Or, does the name say it all? BTW I am an MHF Ambassador

Go to the library and find calculus books. Each book will have examples of what you're looking for. Try also searching the Calculus subforum. And I also suggest you try using Google. And, what the heck, perhaps you could ask your teacher too.

Ok, Ok... I'll give him one. This isn't that difficult, but try this and we'll see where you're at. Try that on for size. Ooops he said derivatives... I'm not a very good listener. How about... Show that $\frac{d}{dx}\arcsin{x}=\frac{1}{\sqrt{1-x^2}}$.

actually i have browsed and photocopied exercises in books in our library.. But the problems are so easy,... but when our professor gives an exam... It's so difficult as compared to the problems in the books.. Pls.. give me a set of problems thanks.

As I said earlier, ask your Professor (that's part of his job). Also, past exam papers should be available (ask your institute's library, which is where they are usually archived) for you to work from. We don't have time to construct difficult questions for you (and how are we to know what is easy, difficult and impossible for you anyway. Everything is relative.) and we don't have time to write out solutions to such questions (which I assume you would want when you got stuck). The main purpose of MHF is to help people with questions they can't do, not to provide an extension program for people who already understand the work and don't really need help.

Students are constantly complaining that the problems on the test are harder than the homework problems. Actually professors typically make an effort to see that most of the test problems are easier than most of the homework problems. What they really mean is that on a test they don't have the text book open, or the answer in the back of the book, or friends working with them, etc.
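For the record, the standard derivation of the identity posed in the thread goes as follows. Let $y = \arcsin x$ with $y \in [-\tfrac{\pi}{2}, \tfrac{\pi}{2}]$, so that $\sin y = x$. Differentiating implicitly,

$\cos y \cdot \dfrac{dy}{dx} = 1 \quad\Longrightarrow\quad \dfrac{dy}{dx} = \dfrac{1}{\cos y} = \dfrac{1}{\sqrt{1-\sin^2 y}} = \dfrac{1}{\sqrt{1-x^2}},$

where the positive square root is the right choice because $\cos y \ge 0$ on $[-\tfrac{\pi}{2}, \tfrac{\pi}{2}]$.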
{"url":"http://mathhelpforum.com/calculus/101984-most-complicated-problem-derivative-inverse-trigonometric-functions.html","timestamp":"2014-04-21T10:30:17Z","content_type":null,"content_length":"62474","record_id":"<urn:uuid:f76800eb-62c0-450a-9a64-9b053a30ad2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Coefficient and expansions.
February 28th 2013, 12:40 AM
Find the coefficient of the term x^2y^3 in the expansion below:
expansion: (2+x-y+3z)^10
Please write a good description of the solution, or explain how to solve it, for a person who is a dummy in math and combinatorics...
February 28th 2013, 01:09 AM
Re: Coefficient and expansions.
80,640. Use the multinomial theorem, which says $(x_1+...+x_r)^n = \sum_{P(n)}{n \choose k_1,k_2,..,k_r} (x_1)^{k_1}...(x_r)^{k_r}$ (basically, the sum runs over all ways of writing n as an ordered sum of r non-negative integers), where ${n \choose k_1,k_2,..,k_r} = \frac{n!}{k_1!...k_r!}$.
Basically, it says the exponents on the terms you have in your parentheses, namely on 2, x, y and z, must add up to 10. For example, you can write 10 = 0+0+3+7, which is equivalent to $\frac{10!}{0!0!3!7!}2^0 x^0 y^3 z^7$.
Since you asked for $x^2y^3$, and we don't see z, the power of z is 0. Then 2+3+0+k = 10, so k = 5, which is the power the constant 2 is raised to. So you have $\frac{10!}{5!2!3!0!}2^5 x^2 y^3 z^0 = 80640\, x^2y^3$.
February 28th 2013, 01:25 AM
Re: Coefficient and expansions.
It is -80640 x^2y^3.
February 28th 2013, 01:28 AM
Re: Coefficient and expansions.
Well, thanks. And a question: isn't there any difference between, for instance, 3x and x, or -y and y?
February 28th 2013, 01:43 AM
Re: Coefficient and expansions.
Well, thanks. And a question: isn't there any difference between, for instance, 3x and x, or -y and y?
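The replies above gloss over the constant factors, which is exactly what the final question asks about: each term contributes $c^k$ for its coefficient $c$, so $(-y)^3$ flips the sign (and a $3z$ term would contribute $3^k$). A small Python sketch (the function name is mine, just for illustration) that computes the coefficient directly:

```python
from math import factorial

def term_coefficient(consts, exps):
    """Coefficient of prod(var_i**exps_i) in (c_1*v_1 + ... + c_r*v_r)**n,
    where n = sum(exps): the multinomial coefficient times each constant
    factor raised to its exponent."""
    n = sum(exps)
    multinomial = factorial(n)
    for k in exps:
        multinomial //= factorial(k)
    coeff = multinomial
    for c, k in zip(consts, exps):
        coeff *= c ** k
    return coeff

# (2 + x - y + 3z)^10, term x^2 y^3: exponents (5, 2, 3, 0) on (2, x, -y, 3z),
# whose constant factors are (2, 1, -1, 3).
print(term_coefficient(consts=[2, 1, -1, 3], exps=[5, 2, 3, 0]))  # -80640
```

The multinomial part is 10!/(5!2!3!0!) = 2520, and the constants contribute 2^5 * (-1)^3 = -32, giving 2520 * (-32) = -80640, in agreement with the corrected reply.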
{"url":"http://mathhelpforum.com/discrete-math/213978-coefficient-expansions-print.html","timestamp":"2014-04-20T17:49:17Z","content_type":null,"content_length":"6619","record_id":"<urn:uuid:a742f2b3-7480-4b14-b1b6-3c7d17b60718>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
Sage Days 16: Barcelona, Spain -- Computational Number Theory
Login: xsf.convidat Password: KidAut0RaceS
Sage Days 16 will take place on June 22--27, 2009, the week after MEGA 2009. The event will be organised by the CRM (http://www.crm.cat) and the OSRM of the UPC (http://www-fme.upc.edu/osrm/), and will take place at the FME, in the campus of the UPC, in Barcelona.
Mailing lists
All video is here. I made this video by re-encoding HD video to "iPhone" video using Handbrake. Each video file is at most 200mb, and will play fine with VLC (and on the iPhone, of course). See also Sage on Blip TV for all these videos in an easy to view format.
Sunday, June 21
19:00 Meet informally in the lobby of the Residencia
21:00 From the Residencia, go to dinner
Monday, June 22: CRM Thematic Day on Mathematics and Computation
9:00 Meet with Jordi Quer at the Residencia lobby, take the train together to CRM
10:30-11:30 William Stein, '''Sage: Unifying Mathematical Software''' (video part 1, video part 2)
This will be an overview talk about Sage, which explains the history and motivation for the project, demos some key features of Sage, and discusses where we are going next. It will be accessible to people in all research areas and assumes no prior experience with Sage.
11:30-12:00 Coffee Break
12:00-13:00 Henri Cohen, '''Experimental methods in number theory and analysis''' (video part 1, video part 2)
In this talk, I would like to give a number of examples of numerical experiments coming from number theory and analysis, mention the tools used to perform them, and show how they sometimes can lead to interesting and deep conjectures.
14:30-15:30 Àngel Jorba, '''Developing tailored software for specific problems''' (video part 1, video part 2)
We will discuss the advantages and inconveniences of developing software (in a general purpose language like C) for concrete problems. I will also mention the results of a poll done by the Spanish project "i-Math" on the use of computational resources of the mathematical research groups in Spain.
15:30-16:00 Coffee Break
16:00-17:00 Round Table (video part 1, video part 2)
18:45 Leave from Residencia to UPC
Coding Sprint
19:00-- Organization at UPC (video)
Tuesday, June 23
10:30-11:30 Jordi Guàrdia, '''New ideas for computing integral bases''' (video part 1, video part 2)
The determination of the ring of integers of a number field is one of the main tasks of computational algebraic number theory. The use of higher Newton polygons provides a new insight into the problem, leading to a fast method to compute integral bases, discriminants and prime ideal factorization in number fields.
11:30-12:00 Coffee Break
12:00-13:00 William Stein, '''How to use Sage to compute with Elliptic Curves''' (video part 1, video part 2)
I will explain how to use Sage to define elliptic curves over various fields, do arithmetic on them, and compute standard invariants. Then I'll talk about elliptic curves over finite fields, and how to count points and compute the group structure. Next, I'll talk about elliptic curves over number fields and Sage's implementation of Tate's algorithm. Finally, I'll discuss computing the invariants in the BSD conjecture for elliptic curves over QQ.
13:00-14:30 Lunch
14:30-15:30 Clément Pernet & Majid Khonji, '''Computing exactly with unsafe resources: fault tolerant exact linear algebra and cloud computing''' (video part 1, video part 2)
In several ways, challenges in computational mathematics (including computational number theory, graph theory, cryptanalysis, ...)
involve large linear algebra computations over Z or Q. Distributed, peer-to-peer or Cloud computing represents nowadays the best perspective to access large and cheap computing power, but is based on unreliable resources. Fault tolerant techniques are therefore developed in order to increase the confidence in the computations, or even to certify them. In the case of exact computations, the algebraic properties of the problems are well suited for the development of algorithm-based fault tolerant protocols. In particular, the Chinese Remaindering Algorithm, offering an embarrassingly easy parallelization, can be adapted to work as an error correcting code and tolerate errors. We will present and demonstrate these algorithms and protocols in the case of a distributed computation of the determinant of a matrix over Z.
15:30-16:30 Martin Albrecht, '''How to get started developing Sage''' (video part 1, video part 2)
In this talk, we will try to highlight a few interesting and relevant bits and pieces for getting into Sage development. We will give an overview of how Sage is structured and step through the Sage development process. The talk is meant to be fairly interactive with people asking questions etc.
Free Sant Joan evening festivity
Wednesday, June 24
13:00-14:00 William Stein, '''Modular forms and modular abelian varieties in Sage''' (video part 1, video part 2)
I will survey the capabilities in Sage for computing dimensions of modular forms spaces, congruence subgroups, modular symbols, modular forms, Brandt modules, overconvergent modular forms, half-integral weight forms, and modular abelian varieties. I will discuss both what is in Sage, and what is missing.
14:30-15:30 Christian Eder, '''Faugere's F5 Algorithm: variants and implementation issues''' (video part 1, video part 2)
In this talk we shortly recall the main properties of Gröbner bases used for their computations. After an introduction to Faugere's F5 Algorithm we examine its points of inefficiency, especially the reduction process, and present the variant F5C improving these. The benefits of this improvement are explained and presented in detail. Moreover some hints on implementing F5's data structures are given and the positive effects of F5C on these are shown. In the end we give some insight into the implementation of F5's reduction process in an F4-ish manner, i.e. using symbolic preprocessing.
16:00-- Coding Sprint / Status Report
Thursday, June 25
10:30-11:30 David Loeffler, '''P-adic modular forms in Sage''' (video part 1, video part 2)
I will give a quick introduction to p-adic modular forms, which are a generalisation of classical modular forms. I will first give a quick introduction to the theory, and then describe a few algorithms that can be used to compute them, and give an example of one of these which has been implemented in Sage since 3.4.1. Finally I will talk a little about some issues in inexact p-adic linear algebra that come up in the process.
11:30-12:00 Coffee Break
12:00-13:00 David Kohel, '''ECHIDNA: Open source Magma extensions for Sage''' (sws, PDF, video part 1, video part 2)
I will present the open source GPL repository of Magma code: Elliptic Curves and Higher Dimensional Analogues with associated databases, and its use as an extension to Sage. This repository includes updates to the original packages for quaternion algebras, Brandt modules and generalization of my code for genera of lattices (as a quadratic modules package).
As new features, it includes p-adic point counting via canonical lifts for elliptic curves (AGM-X_0(N)), extensions to the Igusa invariants and Mestre's algorithm (to small characteristic) in genus 2, arithmetic of CM fields and CM constructions for curves of genus 2, invariants of genus 3 curves (Dixmier-Ohno and Shioda's hyperelliptic invariants), and numerous other features (e.g. working in generic Picard groups, singular cubic curves and generalized Jacobians of singular hyperelliptics, etc.). The majority of the algorithms are completely new to Magma, and represent algorithms developed over more than a decade (with students and collaborators). The Sage developer community is invited to contribute, document, and improve ECHIDNA, and port features directly to Sage.
13:00-14:30 Lunch
14:30-15:30 Robert Miller, '''Fast compiled graphs in Sage''' (video part 1, video part 2)
There will be a demonstration and advertisement of new developments in graph theory in Sage. In particular, compiled Sage graphs have finally reached the same level of functionality as NetworkX graphs, the slower Python implementation.
Coding Sprint
16:00-- Organizer / Status Report
Friday, June 26
10:30-11:30 Rainer Schulze-Pillot, '''Siegel Theta Series and Modular Forms'''
Ordinary theta series count (in their Fourier coefficients) the number of ways in which the integral positive definite quadratic form Q(x) = x^t A x in m variables represents an integer n. Siegel theta series count instead the number of ways in which a given positive semidefinite g x g matrix T can be represented as X^t A X with an integral m x g matrix X. In the same way in which ordinary theta series give modular forms on the upper half plane for congruence subgroups of SL_2(Z), the Siegel theta series give modular forms on a g(g+1)/2-dimensional space H_g for the symplectic group Sp_g(Z) and its congruence subgroups. Some computations for these have been done by Scharlau, Schiemann, and myself about 10 years ago; since then nothing much has happened apart from isolated computations of examples - maybe it's time to start another systematic attack on the subject.
11:30-12:00 Coffee Break
12:00-13:00 Emmanuel Thomé, '''Multiplication of binary polynomials'''
Multiplying binary polynomials is an elementary operation which occurs as a basic primitive in several contexts, from computer algebra to coding theory and cryptography. We study here a variety of algorithms for this operation, with the intent of obtaining satisfactory speeds for a wide range of possible degrees. We look into "low level" aspects related to microprocessor-specific optimizations, and higher level algorithms such as of course the Karatsuba and Toom-Cook approaches, but also two different FFT algorithms. Several improvements are presented. We provide comparisons of the timings obtained with those of the NTL library. The software presented can, as of NTL 5.5, be hooked into NTL as an add-on.
13:00-14:30 Lunch
14:30-15:30 Maite Aranes, '''Manin symbols over number fields''' (pdf, Sage worksheet, and video)
I will discuss results about cusps and Manin symbols over a number field K, which should be useful in the computation of spaces of cusp forms for GL(2, K) via modular symbols. I will also present ongoing work on implementations of both of these in Sage.
16:00-- Coding Sprint / Status Report
Saturday, June 27
10:30-- Coding Sprint wrapup (video part 1, video part 2, video part 3)
1. Michael Abshoff, Technische Universität Dortmund
2. Martin Albrecht, University of London (Room C-010 at Residencia)
3. Maite Aranes, University of Warwick
4. Tomasz Buchert, Adam Mickiewicz University
5. Michal Bulant, Masaryk University
6. Gabriel Cardona, Universitat de les Illes Balears
7. Wouter Castryck, Leuven
8. Henri Cohen, Bordeaux
9. Francesc Creixell, UPC
10. Christian Eder, TU Kaiserslautern
11. Burcin Erocal, RISC, JKU - Linz
12. Julio Fernández, UPC
13. Imma Gálvez, UAB
14. Enrique González-Jimenez, Universidad Autónoma de Madrid
15. Josep González, UPC
16. Jordi Guàrdia, UPC
17. Xavier Guitart, UPC
18. Amir Hashemi, Isfahan University of Technology (Iran)
19. Nikolas Karalis, National Technical University of Athens
20. Hamish Ivey-Law, Sydney-Marseille
21. David Kohel, Institut de Mathématiques de Luminy
22. Joan Carles Lario, UPC
23. Offray Vladimir Luna Cárdenas, Javeriana (Colombia)
24. David Loeffler, University of Cambridge
25. Robert Miller, University of Washington (Room C-010 at Residencia)
26. Antonio Molina, Addlink Software Científico
27. Enric Nart, UAB
28. Sebastian Pancratz, University of Oxford
29. Clement Pernet
30. Joaquim Puig, UPC
31. Jordi Quer, UPC
32. Anna Río, UPC
33. Víctor Rotger, UPC
34. Bjarke Roune, University of Aarhus
35. Utpal Sarkar, HP (+UPC)
36. Diana Savin, Ovidius University (Romania)
37. Rainer Schulze-Pillot, Universitaet des Saarlandes
38. Mehmet Sengun, University of Duisburg-Essen
39. Jaap Spies, Holland
40. William Stein, University of Washington (Room C-113 at Residencia)
41. Emmanuel Thome, INRIA Lorraine
42. Andrew Tonks, London Metropolitan University
43. Gonzalo Tornaría, Universidad de la República (Uruguay)
44. Eulàlia Tramuns, UPC
45. Montserrat Vela, UPC
46. Preston Wake, McMaster
47. Christian Wuthrich, University of Nottingham
48. Brian Wyman, Univ of Michigan
{"url":"http://wiki.sagemath.org/days16","timestamp":"2014-04-18T08:50:00Z","content_type":null,"content_length":"44569","record_id":"<urn:uuid:b9ae7b3f-a8e9-4176-8532-e0930a90e85e>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Metric charts and analyses
Project Metrics offers a selection of charts to choose from. You can find them in the Chart menu. The charts visualize the values currently on the grid. Most charts visualize the values of the currently selected column (current metric). Thus, before making a chart, put the cursor on the metric column you are interested in.
You can save the charts to a file. The available file formats are metafile (EMF and WMF), bitmap and GIF. Resize the picture to the desired size before pressing the Save button. You can also copy & paste them to your favorite word processor for your project documentation.
More charts? If you need to build other types of charts, you can copy & paste or export the data to your favorite spreadsheet program.
Bar chart of column, unsorted / Bar chart of column, sorted
• Bar chart of column. This chart displays the selected metric column as a bar chart. Each value becomes a bar. The default is an unsorted chart. You may sort the column by clicking on the column header before taking the chart. In this case you get an illustrative view of the distribution of the values in size order. The green Median line illustrates the middle value. 50% of the values fall below this line and the rest is above. The red Outliers line displays the limit(s) of exceptional values.
Histogram of column / Distribution of column
• Histogram of column. This chart displays a bar chart of the relative frequencies of different value ranges. It's useful for getting a quick look at the distribution of values of the selected metric column. The more values fall in a range, the higher the bar. In the above chart you can see that a typical procedure's name consists of 7-16 characters. Shorter and longer names are fewer. There are no names with just 1 or 2 characters.
• Distribution of column. This chart illustrates the distribution of a single metric. It's an alternative to the histogram. You can see the area where most of the values reside. In addition, you get to see exceptional values. There is also a boxplot with outliers, median and Q1/Q3 values. Half of the values fall within the blue box, while the rest is out of the box. If some values can be considered outliers or extreme outliers, they are highlighted and the outlier limits are drawn in the chart. The violet curve on the left shows the relative distribution of values. The farther right the curve, the more values at that level.
Compare metrics / Compare histograms
• Compare metrics. This chart lets you compare 2 or more metrics to each other. Before getting this chart, you can sort by one of the metrics (click on the appropriate grid column) to display the bars in increasing or decreasing order. This chart is useful for comparing related metrics and finding dependencies. In the above chart you can see LLOC' (logical lines of comment) and MCOMM (meaningful comments) compared by file. You can see how most files have LLOC' and MCOMM at similar levels, leading us to believe that each comment line is also meaningful. However, some files have a higher MCOMM than LLOC'. This is because these files contain a considerable amount of end-of-line comments, which are counted in MCOMM but not LLOC'. On the other hand, if MCOMM were lower than LLOC', this would indicate the use of non-meaningful comments, such as empty comments or separator lines.
• Compare histograms. This is an alternative way to compare 2 or more metrics.
In the above chart you can see PARAMS (number of procedure parameters) and VARS (number of local variables, excluding parameters) compared. As you can see, most procedures have 0 to 2 parameters. However, when it comes to local variables, over half of the procedures have no locals at all, telling us they are very simple. The maximum number of parameters is 4, which is reasonable. However, the maximum number of local variables is 56, which is a clear indication of too complex a procedure. That particular 56-variable procedure was actually too hard to manage and it was removed later.
• Time chart. This chart displays a line chart of 1 or more metrics by time. It works for the Project in time tab. Select the metrics you want to display. The chart can be used to monitor historical development, such as quality changes between versions or advances in project size. In the above chart you can see the number of dead procedures, variables and constants, version by version.
Pie of limits on column / Kiviat of limits on page
• Pie of limits on column. If a limit applies to the selected metric column, you get a simple pie of the percentage of values within the acceptable range and outside of it.
• Kiviat of limits on page. This chart type displays the percentage of values within the acceptable range. It takes all the limits in effect on the current page. It is most useful when there are several limits on the metrics of the selected page. You get a quick view of which metric revealed the most problematic cases and which metrics didn't present any problems. In the above chart, the values on the outer circle are all right: no problematic cases were found with these metrics. The closer the value is to the center of the circle, the more potential problem cases were found.
• XY chart of 2 metrics. If you're interested in how any 2 metrics correlate with each other, this is your chart of choice. It's a simple scatter chart with one metric on the X axis and the other on the Y axis. If the metrics correlate linearly, you also get a line of correlation and an equation. Read more about regression analysis below.
Regression analysis in the XY chart
The XY chart includes a simple linear regression analysis of 2 metrics (x and y). It tells how the y metric depends on the x metric, if there is a statistical dependency. If x and y correlate (at 95% probability), a regression line is drawn and its equation is displayed at the bottom of the graph. If there is no statistically significant correlation (values are unrelated or the number of data points is low), no equation or line is shown. Here is an example of the equation.
CC = -0.68 + 0.16 x LLOC (R^2=76%)
This means that in this project, LLOC explained 76% of the variation of a procedure's cyclomatic complexity. CC increased by 0.16 for each line of code. In other words, complexity increases as procedure size increases. This is no surprise, as CC is the number of decision statements + 1, and the number of decision statements is likely to be higher in a large procedure. Notice that this is not a universal equation; the coefficients are likely to vary by coding style.
The R^2 value tells how well the x value explains the variation of the y value. R^2=100% means perfect correlation. You can get values close to 100% from metric pairs that are closely related, such as LINES and LLINES (physical and logical lines). A low value of R^2 means that although the values may be related, there is no clear linear relationship.
A good way to evaluate the fit is also to see how well the regression line fits the data points.
For another project, we tried the XY chart where x=PARAMS and y=LLOC' (parameters vs. comment lines). We found out that R^2=3%. Thus, the number of parameters and the number of comment lines were not related. One would probably expect the amount of commentation to increase with the number of parameters, because the use of each parameter should be commented. Thus, the use of parameters was not properly commented in this project. In fact, half of the procedures were totally uncommented. Relatedly, we also tried with x=CC and y=LLOC'. In this case, R^2=39%. We were happy to find a good positive correlation between complexity and commentation. This was a good sign because even if full commentation was lacking, the more complex procedures were commented to a certain extent.
Correlation and regression analysis
Select Correlation analysis in the Report menu to run a correlation and regression analysis on the data currently displayed in the grid. This feature calculates the linear correlation coefficients (r) and regression line equations (y=a+bx) for each pair of the metric series. Select the metrics to correlate before running correlation analysis. You can do this in the View combobox. A large number of selected metrics (such as <All>) leads to longer analysis times and a large correlation table, not very easy to read.
In the correlation table, r values are given if they are statistically significant (at the 95% probability level). A correlation value is omitted if it's not statistically significant (low correlation or small amount of data). Regression equations and R^2 values are given for each pair of metrics that are statistically correlated. Pay attention to the R^2 value. Even though 2 metrics may be statistically correlated, the effect may be very low (a low R^2). The regression equation is more meaningful when the R^2 value is high.
Notice that a metric that is defined via another metric correlates with that other metric. This happens for a number of metric pairs. An example is IFIO=IFIN*IFOUT. IFIO correlates with IFIN and IFOUT because of its definition.
The more data you have, the better the correlation analysis. It's most useful on procedure-level data in a large project. If you have less than 10 lines of data (say, project-level data), the results are probably not that interesting.
Regression analysis on page "Project in time"
On this page you get a simple changes / day regression analysis. When you select DATE as the X metric, you can project the development of another metric (such as LLOC) by time.
Example. The results below mean that in this project, the developers have historically written about 24 lines of code per day, and 5 lines of comments.
LLOC: 23.82 / day (R^2=95%)
LLOC': 5.06 / day (R^2=87%)
The equations show how your project has developed in time (assuming 7 days/week). Use these historical time analyses with care. Especially, don't require your developers to write more code per day solely based on the above result. This is only a historical average. It depends a lot on what values DATE has. If you saved the metrics often during the test phase, for example, and less often during the coding phase, the test phase has got more weight in the analysis. The analysis gives equal weight to each DATE regardless of how many days passed. Thus, you get a more reliable average by saving metrics at fixed time intervals.
What is more, LLOC is not a perfect way to measure programmer output, as it doesn't take changed or deleted lines into account, nor bug fixing, meetings or planning efforts. You might want to analyze historical project versions with the same version of Project Analyzer to make sure that no changes in different Project Analyzer versions affect the data.
Correlations on page "All analyses"
You cannot do a correlation analysis on the All projects page. This page may display several versions of one project. In this kind of a setting, the data lines are strongly related to each other and the correlations would be exaggerated.
©Aivosto Oy - Project Analyzer Help Contents
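The regression and R^2 figures described above are easy to reproduce outside the tool. Here is a small, hypothetical Python sketch (numpy assumed available; Project Analyzer itself is not scriptable this way) that fits y = a + bx by least squares and reports R^2, mirroring the XY-chart analysis:

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit of y = a + b*x, returning (a, b, r_squared)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b, a = np.polyfit(x, y, 1)            # polyfit returns slope, intercept
    residuals = y - (a + b * x)
    ss_res = np.sum(residuals ** 2)       # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)  # total variation
    return a, b, 1.0 - ss_res / ss_tot

# Toy data shaped like the example: complexity (CC) vs. lines of code (LLOC).
lloc = np.array([10, 25, 40, 60, 90, 120])
cc   = np.array([ 1,  3,  6,  9, 14, 19])
a, b, r2 = fit_line(lloc, cc)
print("CC = %.2f + %.2f x LLOC  (R^2 = %.0f%%)" % (a, b, 100 * r2))
```

The data here are made up for illustration; on real metrics exported from the grid, the same formula gives the same coefficients the XY chart would show.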
{"url":"http://www.aivosto.com/project/help/pm-charts.html","timestamp":"2014-04-18T15:38:38Z","content_type":null,"content_length":"15338","record_id":"<urn:uuid:39d6d153-eba3-409a-835a-2398a91fab61>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Euler's Formula: Other Applications
A selection of articles related to Euler's formula and its other applications. Original articles from our library related to Euler's Formula: Other Applications. See Table of Contents for further available material (downloadable resources) on Euler's Formula: Other Applications.
Euler's Formula: Other Applications is described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, books and related discussion.
Suggested Pdf Resources
An Amusing Equation: From Euler's formula with angle π, it follows that ...
There are many other uses and examples of this beautiful and useful formula. The symbol i is treated just like any other algebraic variable. So, (2 + i)^2 = (2 + i)(2 + i) ...
Next we give one application of Euler's formula in finding roots of numbers.
An Euler equation is a difference or differential equation that is an intertemporal first-order condition for a ... On the other hand, the equations provide an ...
... uses the Euler equation as one equation in a system of equations.
... problem is used to generate the Euler equation that underlies our empirical analysis. See Eberly (1994) for an application of similar ideas to the purchase of cars.
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site.
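As a concrete illustration of the "finding roots of numbers" application mentioned in the snippets above, here is a short Python sketch using the standard cmath module (the function name is mine, purely for illustration): writing z in polar form via Euler's formula, z = r·e^{iθ}, makes its n-th roots easy to enumerate.

```python
import cmath, math

def nth_roots(z, n):
    """All n-th roots of complex z, via Euler's formula z = r*e^{i*theta}."""
    r, theta = cmath.polar(z)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

# Euler's identity as a sanity check: e^{i*pi} + 1 = 0 (up to rounding).
print(cmath.exp(1j * math.pi) + 1)

# The three cube roots of 8: namely 2, and 2*e^{+/- 2*pi*i/3}.
for w in nth_roots(8, 3):
    print(w, "cubed ->", w ** 3)
```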
{"url":"http://www.realmagick.com/eulers-formula-other-applications/","timestamp":"2014-04-21T10:02:00Z","content_type":null,"content_length":"30197","record_id":"<urn:uuid:ee0a6642-cd93-4890-a078-3ce1c0dc6c42>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Hyperbolic function..
March 25th 2009, 08:25 AM
Hi, I'm struggling with the proof of this hyperbolic function identity; I just need a bit of explanation if someone can help.
Let y = arccosh(x), then x = cosh y
x = (e^y+e^-y)/2
2x = e^y+e^-y
e^(2y)-2xe^y+1 = 0
e^y = [2x +/- sqrt(4x^2-4)]/2
For the next step I understand it to be
e^y = x +/- sqrt(2x^2-2)
but apparently the correct answer is
e^y = x +/- sqrt(x^2-1)
How is this? Surely you're only dividing by 2... can someone please explain where I'm going wrong?
Here's the rest, but I get that :)
y = ln[x +/- sqrt(x^2-1)]
arccosh(x) = ln[x +/- sqrt(x^2-1)]
March 25th 2009, 08:32 AM
What were the instructions? What are you supposed to be doing with the "Let y =" bit? Thank you! :D
March 25th 2009, 08:40 AM
I'm trying to prove that arccosh(x) = ln[x +/- sqrt(x^2-1)]...
March 25th 2009, 09:17 AM
It's the formula for the inverse hyperbolic function arccosh(x).
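The step the thread never spells out is just factoring a 4 out of the radical before cancelling the 2:

$$\frac{2x \pm \sqrt{4x^2-4}}{2} = \frac{2x \pm \sqrt{4(x^2-1)}}{2} = \frac{2x \pm 2\sqrt{x^2-1}}{2} = x \pm \sqrt{x^2-1}$$

Dividing the radicand itself by 2 is what produces the incorrect $\sqrt{2x^2-2}$; the 4 must come out of the square root as a factor of 2, i.e. $\sqrt{4x^2-4} = 2\sqrt{x^2-1}$.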
{"url":"http://mathhelpforum.com/calculus/80598-hyperbolic-function-print.html","timestamp":"2014-04-20T07:46:45Z","content_type":null,"content_length":"4619","record_id":"<urn:uuid:dacde7f8-96f8-44d5-9f36-59351621fa18>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
NM, The 'Egg-Yolk' representation of regions with indeterminate boundaries Results 1 - 10 of 92 - FUNDAMENTA INFORMATICAE , 2001 "... The paper is a overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning inclu ..." Cited by 179 (16 self) Add to MetaCart The paper is a overview of the major qualitative spatial representation and reasoning techniques. We survey the main aspects of the representation of qualitative knowledge including ontological aspects, topology, distance, orientation and shape. We also consider qualitative spatial reasoning including reasoning about spatial change. Finally there is a discussion of theoretical results and a glimpse of future work. The paper is a revised and condensed version of [33, 34]. - International Journal of Geographical Information Systems , 1995 "... Abstract. Analysis of global geographic phenomena requires non-planar models. In the past, models for topological relations have focused either on a twodimensional or a three-dimensional space. When applied to the surface of a sphere, however, neither of the two models suffices. For the two-dimensio ..." Cited by 114 (13 self) Add to MetaCart Abstract. Analysis of global geographic phenomena requires non-planar models. In the past, models for topological relations have focused either on a twodimensional or a three-dimensional space. When applied to the surface of a sphere, however, neither of the two models suffices. For the two-dimensional planar case, the eight binary topological relations between spatial regions are well known from the 9-intersection model. This paper systematically develops the binary topological relations that can be realized on the surface of a sphere. Between two regions on the sphere there are three binary relations that cannot be realized in the plane. These relations complete the conceptual neighborhood graph of the eight planar topological relations in a regular fashion, providing evidence for a regularity of the underlying mathematical model. The analysis of the algebraic compositions of spherical topological relations indicates that spherical topological reasoning often provides fewer ambiguities than planar topological reasoning. Finally, a comparison with the relations that can be realized for one-dimensional, ordered cycles draws parallels to the spherical topological relations. 1 - PROCEEDINGS OF THE DIMACS INTERNATIONAL WORKSHOP ON GRAPH DRAWING, 1994. LECTURE NOTES IN COMPUTER SCIENCE , 1997 "... This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show ..." Cited by 81 (3 self) Add to MetaCart This paper surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. 
This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned. , 2001 "... This paper presents an application of the theory of granular partitions proposed in (Smith and Brogaard, to appear), (Smith and Bittner 2001) to the phenomenon of vagueness. We understand vagueness as a semantic property of names and predicates. This is in contrast to those views which hold that the ..." Cited by 72 (34 self) Add to MetaCart This paper presents an application of the theory of granular partitions proposed in (Smith and Brogaard, to appear), (Smith and Bittner 2001) to the phenomenon of vagueness. We understand vagueness as a semantic property of names and predicates. This is in contrast to those views which hold that there are intrinsically vague objects or attributes in reality and thus conceive vagueness in a de re fashion. All entities are crisp, on de dicto view here defended, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We show that the theory of granular partitions provides a general framework within which we can understand the relation between terms and concepts on the one hand and their multiple referents or extensions on the other, and we show how it might be possible to formulate within this framework a solution to the Sorites paradox. 1. - Data and Knowledge Engineering , 1996 "... INTRODUCTION This is a brief overview of formal theories concerned with the study of the notions of (and the relations between) parts and wholes. The guiding idea is that we can distinguish between a theory of parthood (mereology) and a theory of wholeness (holology, which is essentially afforded b ..." Cited by 62 (13 self) Add to MetaCart INTRODUCTION This is a brief overview of formal theories concerned with the study of the notions of (and the relations between) parts and wholes. The guiding idea is that we can distinguish between a theory of parthood (mereology) and a theory of wholeness (holology, which is essentially afforded by topology), and the main question examined is how these two theories can be combined to obtain a unified theory of parts and wholes. We examine various non-equivalent ways of pursuing this task, mainly with reference to its relevance to spatio-temporal reasoning. In particular, three main strategies are compared: (i) mereology and topology as two independent (though mutually related) theories; (ii) mereology as a general theory subsuming topology; (iii) topology as a general theory subsuming mereology. This is done in Sections 4 through 6. We also consider some more speculative strategies and directions for further research. First, however, we begin with some preliminary outline of , 1995 "... The standard mathematical approaches to topology, point-set topology and algebraic topology, treat points as the fundamental, undefined entities, and construct extended spaces as sets of points with additional structure imposed on them. Point-set topology in particular generalises the concept of ..." 
Cited by 49 (9 self) Add to MetaCart The standard mathematical approaches to topology, point-set topology and algebraic topology, treat points as the fundamental, undefined entities, and construct extended spaces as sets of points with additional structure imposed on them. Point-set topology in particular generalises the concept of a `space' far beyond its intuitive meaning. Even algebraic topology, which concentrates on spaces built out of `cells' topologically equivalent to n-dimensional discs, concerns itself chiefly with rather abstract reasoning concerning the association of algebraic structures with particular spaces, rather than the kind of topological reasoning which is required in everyday life, or which might illuminate the metaphorical use of topological concepts such as `connection' and `boundary'. This paper explores an alternative to these approaches, RCC theory, which takes extended spaces (`regions') rather than points as fundamental. A single relation, C (x; y) (read `Region x connects with "... . This chapter surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and ..." Cited by 49 (5 self) Add to MetaCart . This chapter surveys the work of the qualitative spatial reasoning group at the University of Leeds. The group has developed a number of logical calculi for representing and reasoning with qualitative spatial relations over regions. We motivate the use of regions as the primary spatial entity and show how a rich language can be built up from surprisingly few primitives. This language can distinguish between convex and a variety of concave shapes and there is also an extension which handles regions with uncertain boundaries. We also present a variety of reasoning techniques, both for static and dynamic situations. A number of possible application areas are briefly mentioned. 1. Introduction Qualitative Reasoning (QR) has now become a mature subfield of AI as its tenth annual international workshop, several books (e.g. (Weld and De Kleer 1990, Faltings and Struss 1992)) and a wealth of conference and journal publications testify. QR tries to make explicit our everyday commonsense kno... - An Overview”, Fundamenta Informaticae , 2001 "... The need for spatial representations and spatial reasoning is ubiquitous in AI – from robot planning and navigation, to interpreting visual inputs, to understanding natural language – in all these cases the need to represent and reason about spatial aspects of the world is of key importance. Related ..." Cited by 45 (6 self) Add to MetaCart The need for spatial representations and spatial reasoning is ubiquitous in AI – from robot planning and navigation, to interpreting visual inputs, to understanding natural language – in all these cases the need to represent and reason about spatial aspects of the world is of key importance. Related fields of research, such as geographic information science - Artificial Intelligence , 1999 "... The Region-Connection Calculus (RCC) is a well established formal system for qualitative spatial reasoning. It provides an axiomatization of space which takes regions as primitive, rather than as constructions from sets of points. The paper introduces boolean connection algebras (BCAs), and prove ..." 
Cited by 43 (7 self) Add to MetaCart The Region-Connection Calculus (RCC) is a well established formal system for qualitative spatial reasoning. It provides an axiomatization of space which takes regions as primitive, rather than as constructions from sets of points. The paper introduces boolean connection algebras (BCAs), and proves that these structures are equivalent to models of the RCC axioms. BCAs permit a wealth of results from the theory of lattices and boolean algebras to be applied to RCC. This is demonstrated by two theorems which provide constructions for BCAs from suitable distributive lattices. It is already well known that regular connected topological spaces yield models of RCC, but the theorems in this paper substantially generalize this result. Additionally, the lattice theoretic techniques used provide the first proof of this result which does not depend on the existence of points in regions. Keywords: Region-Connection Calculus, Qualitative Spatial Reasoning, Boolean Connection Algebra, Mer... - GeoInformatica , 1997 "... An important component of spatial data quality is the imprecision resulting from the resolution at which data are represented. Current research on topics such as spatial data integration and generalisation needs to be well-founded on a theory of multi-resolution. This paper provides a formal framewo ..." Cited by 43 (7 self) Add to MetaCart An important component of spatial data quality is the imprecision resulting from the resolution at which data are represented. Current research on topics such as spatial data integration and generalisation needs to be well-founded on a theory of multi-resolution. This paper provides a formal framework for treating the notion of resolution and multi-resolution in geographic spaces. It goes further to develop an approach to reasoning with imprecision about spatial entities and relationships resulting from finite resolution representations. The approach is similar to aspects of rough and fuzzy set theories. The paper concludes by providing the beginnings of a geometry of vague spatial entities and relationships. Keywords: uncertainty, vagueness, rough set, fuzzy set, resolution, spatial reasoning, data quality 1. Introduction The notion of spatial resolution is fundamental to many aspects of the representation of spatial data, and a proper formulation of a multi-resolution data model is...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=210778","timestamp":"2014-04-21T03:10:53Z","content_type":null,"content_length":"39744","record_id":"<urn:uuid:39434fc5-cff3-4ee6-af18-e7d0ae1c9efc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: help!!! will give medals • 9 months ago
is replying to Can someone tell me what button the professor is hitting...
{"url":"http://openstudy.com/updates/51e593e8e4b04edf24849424","timestamp":"2014-04-20T08:16:41Z","content_type":null,"content_length":"40791","record_id":"<urn:uuid:2fb5164a-9657-4f2d-9d5f-ba03c288c19c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
How many bobbyms does it take to change a lightbulb? I do not know but someone might 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: How many bobbyms does it take to change a lightbulb? Answer: None. All the bobbyms will wait for Wolfram to create a better CAS 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? Nope, M is like a 1911 Colt 45. Some people love it and some people say it is overkill, but everyone has to agree that it will knock your socks off. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: How many bobbyms does it take to change a lightbulb? Can you change a lightbulb with it? Yes? 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? You do not need to! An M user's mind is so bright, so sharp, so piercing, so innovative that it will cast aside the darkness. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: How many bobbyms does it take to change a lightbulb? Okay, good answer 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? Perhaps a better question is, how many bobbyms does it take to change a bobbym? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: How many bobbyms does it take to change a lightbulb? None. He changes himself all the time. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: How many bobbyms does it take to change a lightbulb? I am forced to disagree, he has remained the same for a very long time. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. 
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: How many bobbyms does it take to change a lightbulb? How long? You could not tie your shoes 92 years ago 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? For about 81 years. I did not wear shoes 92 years ago. I wore little slippers. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: How many bobbyms does it take to change a lightbulb? Hmm, you have changed in a lot in all these years 'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.' 'God exists because Mathematics is consistent, and the devil exists because we cannot prove it' 'Who are you to judge everything?' -Alokananda Re: How many bobbyms does it take to change a lightbulb? Yes, I grew bigger. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=290116","timestamp":"2014-04-20T13:29:41Z","content_type":null,"content_length":"26074","record_id":"<urn:uuid:54cb9742-7cc0-4788-ae20-d313e574b19c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Parallel-Plate Capacitors The capacitance of a parallel-plate capacitor (or condenser) is given in SI units by , where is the area of each plate, is the spacing between plates, is the dielectric constant (relative permittivity), and is the permittivity of free space, farad/meter. If is expressed in and in , then microfarads (). Capacitance determines the quantity of positive and negative charges that can be held on the plates by a voltage , such that . In an air-gap capacitor, in which there is no dielectric layer, .
{"url":"http://demonstrations.wolfram.com/ParallelPlateCapacitors/","timestamp":"2014-04-18T08:55:13Z","content_type":null,"content_length":"43702","record_id":"<urn:uuid:97015862-bb69-4cc6-bb33-d88023066ece>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
LED BULBS AND LIGHTS - PASSAT This is some simple information to help remove some of the myths surrounding the use of LED bulbs and lights. A typical 5 watt tail light/ side light bulb draws a current of 0.42 Amps. (Current = Watts/Volts) where as a single bright white led draws about 0.04 Amps. That’s 10 times less current. In this diagram the FOUR white led’s are equivalent to one bulb in terms of voltage. BUT draw 10 times less current. Many modern cars are equipped with bulb failure warning lights. These generally work by measuring the current draw in each individual circuit. If you replace a 5 watt bulb with an equivalent led “bulb”, the current draw will be so low, it will be detected as a bulb failure. The only way to prevent this is to replace the missing current draw. So the led circuit needs to contain something that acts like a 5 watt bulb. The standard method is to add a resistor in parallel with the led light. The added resistor causes the same current draw as the single bulb so no bulb failure is flagged. This resistor can be built into the LED light or as an added extra. Remember….This resistor is replacing a bulb that got hot...so this resistor is going to get hot. A suitable replacement for a 5 watt bulb would be a 30 Ohm 5 watt or more resistor. The nearest size you can find may be 47 Ohm at 5 watts (You must use a high power (watts) resistor for this, or the resistor will over heat and catch fire) If you fit a LED light to your car that does not have this extra resistor, you will get a bulb failure warning. Some people describe this problem as a “can-bus problem” which It is not. It has resulted in people selling “can-bus safe” LED lights. So called “can-bus safe” LED lights should contain a resistor and not put the bulb failure light on, so these are the only ones worth buying. Remember...One reason for using led’s is to cut down on power consumption. If you fit LED’s that don’t bring on a bulb failure warning light, you are using the same power consumption as the original bulb, you replaced !!!!! LED’s used in cars often have a resistor attached. This is not for bulb failure warning prevention. This is because LED’s cannot work on 12 volts. A white LED typically works on 3 volts. If you connected an LED to 12 volts it will be destroyed instantly. To use LED’s on 12 volts you must either arrange them in groups of FOUR in SERIES or in ONE’s, TWO’s or THREE’s with a suitable balancing resistor in SERIES. In this diagram the FOUR white LED’s are equivalent to one 12v bulb as far as voltage is concerned. In this diagram the TWO white LED’s plus the resistor are equivalent to one 12v bulb as far as voltage is concerned. How do you know what size resistor to use ? This depends on the make and type of LED so don’t guess. Check with the supplier to get the working Voltage and Current for your LED’s. Voltage: 3.0 - 3.4 v Current: 30 mA - 40 mA. (40 mA is 0.040 Amps) So for this example the maximum allowed voltage is 3.4v and maximum current draw is 0.040 Amps. So the with a 12v supply we want the resistor to take 8.6v and leave 3.4v for the LED. (12 – 3.4 = 8.6) The current drawn by the LED is 0.040 amps so using Ohms law. The resistor we need will be 8.6 volts / 0.040 amps which is 215 ohms. You can’t buy 215 ohm resistors so you use the nearest HIGHER resistor available which is 220 ohms. If you want to run TWO LED’s in series using 12 volts you will need a different resistor. E.g. Two LED’s in series need 6.8 volts (3.4v + 3.4v). 
So with a 12v supply we want the resistor to take 5.2v. (12 - 6.8 = 5.2) The current drawn is still 0.040 amps so using Ohms law. The resistor will need to be 5.2 volts / 0.040 amps which is 130 ohms. A reasonable rule of thumb for LED’s on 12 volts is to assume each LED missing from a group of four should be replaced by 75 ohms. To save space and money, most manufactures arrange 12 volt powered LED’s in SERIES groups of FOUR. Then they don’t need the extra balancing resistor. This does mean that if one LED fails, FOUR will stop working. This is allowed for in vehicle lighting regulations. More than FOUR LED’s not working is considered to be a “bulb out offence”. Unlike bulbs, LED’s only work one way round. So if you fit a LED lamp and it doesn’t work. Try turning it round the other way. Fitting LED “bulbs” or light units to a car is a change from the manufacturers specification. Which means insurers may not like it.
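The resistor arithmetic on this page generalises into a few lines of code. A small Python sketch, assuming standard E24 preferred resistor values (the function name and the value table are mine, not from the original page), which reproduces both worked examples: 220 ohms for one LED and 130 ohms for two in series at 40 mA.

```python
# E24 preferred resistor values for one decade, scaled by powers of 10 below.
E24 = [10, 11, 12, 13, 15, 16, 18, 20, 22, 24, 27, 30,
       33, 36, 39, 43, 47, 51, 56, 62, 68, 75, 82, 91]

def led_resistor(v_supply, v_led, n_leds, i_amps):
    """Exact series resistance from Ohm's law, plus the nearest standard
    value at or above it so the LED current never exceeds i_amps."""
    exact = (v_supply - n_leds * v_led) / i_amps
    candidates = [v * 10 ** e for e in range(5) for v in E24]
    standard = min(r for r in candidates if r >= exact)
    return exact, standard

for n in (1, 2):
    exact, std = led_resistor(v_supply=12.0, v_led=3.4, n_leds=n, i_amps=0.040)
    print("%d LED(s): exact %.0f ohm -> fit %d ohm" % (n, exact, std))
    # Dissipation check, P = I^2 * R: pick a resistor rated comfortably above.
    print("  dissipates about %.2f W" % (0.040 ** 2 * std))
```

As the page stresses for the bulb-failure resistor, the same wattage caution applies here in miniature: check I^2 * R before choosing the resistor's power rating.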
{"url":"https://sites.google.com/site/1810martin/led-bulbs-and-lights","timestamp":"2014-04-17T22:03:18Z","content_type":null,"content_length":"42903","record_id":"<urn:uuid:b2da99c1-1782-4a8e-aadf-a4e2037c31a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Rate the Invention Above
This is a test. I want to see how many people read the first post that is meant to explain everything. Now if you have actually read this do not (and I am serious) say that this is a test, you may prove that you read this by beginning your response with a 'I don't know... a <whatever number you want>'. Hopefully not too many people will skip this first post. Now let's have some fun and see how many people fail this little test.
The Pillow
{"url":"http://www.comicvine.com/forums/off-topic-5/rate-the-invention-above-566138/","timestamp":"2014-04-20T19:27:54Z","content_type":null,"content_length":"160110","record_id":"<urn:uuid:05112add-7e9b-4da4-ac25-e7c3fcc0aa99>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Poisson Random Variable
March 28th 2009, 05:11 AM
The number of defective items that come out of a production line on any given day is a Poisson random variable with parameter λ=2. At the end of the day, the defective items are reworked. Each defective item can be repaired with probability 0.6 and is discarded with probability 0.4.
1. What is the probability that fewer than 3 items are discarded on a given day?
2. What is the expected number of items discarded?
March 29th 2009, 12:06 AM
The number of defective items that come out of a production line on any given day is a Poisson random variable with parameter λ=2. At the end of the day, the defective items are reworked. Each defective item can be repaired with probability 0.6 and is discarded with probability 0.4.
1. What is the probability that fewer than 3 items are discarded on a given day?
2. What is the expected number of items discarded?
The probability of $k$ defectives in a day is: $p(k)=f(k,\lambda)=f(k,2)$ where $f(k,\lambda)$ is the Poisson probability mass function. Hence the probability that fewer than 3 items are discarded in a day is: $p(\text{fewer than 3 discards})=\sum_{k=0}^{\infty} f(k,2)\sum_{r=0}^2 b(r;k,0.4)$ where $b(r;k,0.4)$ is the pmf for the binomial distribution with $k$ trials and probability of success $0.4$ on a single trial.
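Neither part is actually evaluated in the thread. A useful shortcut (not mentioned above) is the thinning property of the Poisson distribution: if defectives are Poisson(λ = 2) and each is independently discarded with probability 0.4, the number discarded is itself Poisson with mean 2 × 0.4 = 0.8. The sketch below, assuming scipy is available, checks this against the double sum from the reply:

```python
from scipy import stats

lam, p_discard = 2.0, 0.4

# Thinning: discarded count ~ Poisson(lam * p_discard) = Poisson(0.8).
thinned = stats.poisson(lam * p_discard)
print("P(fewer than 3 discarded) =", thinned.cdf(2))   # about 0.9526
print("E[discarded] =", thinned.mean())                # 0.8

# Same probability from the double sum in the reply (truncated at k = 60,
# beyond which the Poisson(2) mass is negligible).
total = sum(stats.poisson.pmf(k, lam) * stats.binom.cdf(2, k, p_discard)
            for k in range(61))
print("double sum =", total)
```

Both routes give P(fewer than 3 discarded) ≈ 0.9526 and an expected 0.8 discards per day.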
{"url":"http://mathhelpforum.com/statistics/81034-poisson-random-variable-print.html","timestamp":"2014-04-17T01:31:17Z","content_type":null,"content_length":"6274","record_id":"<urn:uuid:7296193e-8be9-4f9b-a7a1-82216a9560b8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
2. ESTIMATES OF BRAKING

These merger hopes would all be in vain, of course, if the severe kind of dynamical friction which they seem to require proved simply to be unattainable. Fortunately, as this section will now review, such pessimism seems unwarranted for slow and deeply interpenetrating encounters. It remains quite a different story, however, with making sure that two disk systems in some more grazing passage necessary to produce impressive tails can actually stop and merge rapidly enough - i.e., leave those tails still neatly in view when the bodies sink together. No one seems yet to have made any progress worth citing on this second and more pertinent question of dynamics.

Still, to be grateful for what we have got, it is clear now that simply the splash or "violent relaxation" from mutual tidal forces can indeed halt rapidly any two comparable and already spherical stellar systems which happen to blunder head-on through each other with only about the speed developed in free fall from rest at infinity. Better yet, it appears that already the "impulsive tide" approximation of Alladin (1965; see also Sastry and Alladin 1970, 1977) - which in effect extended Chandrasekhar's friction estimates to the very nonuniform situations arising when whole galaxies interpenetrate at high speed - provides a neat and at least semi-quantitative explanation of even this near-parabolic stickiness.

The examples offered by Alladin and Sastry themselves refer mostly to (spherical) mass distributions akin to the classical n = 4 gaseous polytrope. To appreciate the gist of their reasoning, however, it seems preferable to concentrate instead on the n = 5 polytrope known either as the Schuster or the Plummer model. Its volume density is given by the well-known formula

$\rho(r) = \frac{3M}{4\pi a^3}\left(1 + \frac{r^2}{a^2}\right)^{-5/2},$

where M is the total mass, a is a scale length, and r is the spherical radius. The equally simple force law of this model greatly reduces the chore of calculating the lateral speed that a test particle would develop upon rushing past it at distance D, with an immense and nearly constant speed U. And also for this model, it is pleasant to reckon further that if instead of a single particle the passerby consisted of many different stars from an identical system traveling with a (supposedly) constant speed U along an exactly head-on straight trajectory, then the tidally-induced motions $v_\perp$ of those stars toward that orbit axis would amount to a kinetic energy $\Delta T = \sum \tfrac{1}{2} m\, v_\perp^2$ soaked up suddenly by that intruding galaxy. Alladin and Sastry stressed very properly that such a gain of internal energy can occur only at the expense of the energy of relative motion of the two galaxies. In essence, they said, any such inward splash converts some of that orbital energy into mere stellar-dynamical heat.

The only awesome thing about this reasoning is the magnitude of that expected transfer. To assess it quickly, notice that the potential energy released in bringing two undeformed and yet penetrable Plummer models together from infinity to a perfectly superposed state is $|W| = 3\pi G M^2/(16a)$. Of course this equals not only the negative sum of the potential energies of the two systems reckoned individually while still far apart, but also the "orbital" kinetic energy $M U_{\rm esc}^2/4$ developed by those two when overlapping at the very bottom of a (rigid) free fall from rest at great distance. Now suppose both models indeed to be flabby for the purposes of Equation (4), and adopt as the speed U the full relative escape speed $U_{\rm esc}$ that we just estimated.
It then follows at once that the ratio of the lost to the available kinetic energy is fully 48 per cent. For other polytropes n = 4, 3, and 2, incidentally, laborious numerical integrations (such as Alladin and Sastry were also forced to perform) yield very similar ratios of 46.3, 45.6, and 45.1 per cent.

These striking conclusions can, of course, be faulted for abusing the impulsive and constant-speed assumptions on which they were based. Strictly speaking, it is correct to treat such estimates as merely asymptotic - that is, to infer only that a head-on intrusion of equal n = 3 polytropes, for example, with relative speeds $U \gg U_{\rm esc}$, will cost that pair some definite multiple of their new and much larger peak kinetic energy. As such a former skeptic, however, I must say that I now regard Equations (5) and (6) as very adequate even when mistreated. What convinced me was not the occasional mergers found by Hohl, Miller and/or Prendergast in their planar $10^5$-body experiments; it was more the 3-D studies with 100 rings or 2000 mass points described below.

Figure 2 updates the brief report by Toomre (1974) on a numerical experiment in which Larry Cox and I simulated each of two parabolically approaching Plummer models as a beehive of randomly-moving coaxial rings, all interacting with one another via gravity forces softened modestly at close range. Our aim in using these softened rings instead of conventional point masses was to reduce greatly all inter-particle relaxation effects (such as were blamed, perhaps unduly, by Aarseth and Hills 1972 in their own experiment). We wished to concentrate more on the commotion due to the sudden onset of the collective tidal forces. The old diagram gave results for 12+12 rings. Figure 2, nearly as ancient, now repeats the exercise with 50+50. In units of the Plummer scale length a, it shows the axial coordinate z of each ring as a function of time t. One small discordant note: the densest cores in this diagram seem to separate farther than they should; rigid n = 5 polytropes like these, after a presumed 48 per cent loss of kinetic energy at their instant of overlap, should not even have reached 4a. At least half this discrepancy, however, seems due to our reduction of the near-gravity.

Figure 2. Head-on impact and merger of two equal Plummer models that arrived with escape speed. The axial coordinates z(t) of one set of 50 softened rings used to represent one such model are shown dotted, the others as solid curves.

Figure 3 was contributed very kindly by van Albada and van Gorkom (1977), as a cousin of an impressive test case already shown in their paper. As if only for variety, it refers to polytropes of index n = 3 - and these were now assigned, at infinity, a relative motion $U(\infty) = U_{\rm esc}/(2\sqrt{2})$. Figure 3 is much to be preferred technically to Figure 2. Its chief immediate value, however, lies doubtless in this explicit demonstration that even a moderately hyperbolic initial motion does not yet spoil the merger. Of course, the total energy just ceases to be negative if we double the speed at infinity from the value in Figure 3. At least such a recipe no longer promises a merger - and further experiments quite agree. In fact, van Albada (1976, private communication) reckons empirically that captures cease already when $U(\infty)$ exceeds $U_{\rm esc}$ by only a modest margin. By contrast, Equation (6) patterned upon Alladin's work places that crossover at just a shade under 1.16 $U_{\rm esc}$. Not bad for a simple formula.

Figure 3.
Head-on impact and merger of two stellar dynamical n = 3 polytropes upon arrival at center with 1.061 times the escape speed $U_{\rm esc}$. This diagram by van Albada and van Gorkom shows projected densities at seven instants.

It will not have escaped the reader that, unfortunately, this little success story has referred only to (already) spherical systems taking part in the most symmetric encounters imaginable. As regards disks and their own interplay, it is trivial, of course, to extend both this thinking and the experiments to exactly axial (= face-to-face) penetrations of two very flattened assemblies of rings. And by constraining them to remain axisymmetric, one can even ignore blithely all serious instability questions of the subject. My own experience in that tractable but unrealistic setting has been that while the immediate energy loss runs only around 20 per cent (instead of the high 40s) for a variety of disk models, soon enough they manage to merge also, and they tend to yield outlines (though hardly the full density profiles) resembling E3/E4 galaxies. But all this, I stress again, seems almost irrelevant. It is surely no substitute for the much more difficult studies of off-center impacts of stable self-gravitating disks or, more likely, disk-halo systems. Quite understandably, such studies have been very slow to emerge.

To conclude, the big worry remains that the strength of braking may drop off too rapidly with increasing impact parameter or miss distance, as one seeks circumstances that will also permit the manufacture of tails of the sort summarized in Table 1. Certainly, the off-axis studies of Alladin and Sastry convey the same warning even for encounters of spheres; paraphrasing them again, it seems that the center of one galaxy needs to impact the other system no farther out than about the 1/2 or 3/4-mass radius, lest the rapidity of their sinking cease to be impressive. Ironically, there is apt to be one logical "out" even if it emerges that disk models cannot decelerate fast enough on their own: in principle at least, one can always embed them, prior to any fateful encounter, within some appreciably larger and more massive systems like the much-discussed extensive halos. Such outer parts would by definition interpenetrate and even splatter nicely as those visible disks only graze one another. But what a strange way that would be to make ellipticals!

This work was supported in part by a generous grant from the National Science Foundation.
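As a quick cross-check of the figures quoted above, here is a short Python sketch in units G = M = a = 1. It assumes the reconstructed $|W| = 3\pi GM^2/(16a)$ and the virial relation E = W/2 for each galaxy's internal energy; it is a sanity check of the arithmetic, not a reproduction of the original calculations:

```python
# Cross-check of the merger energetics quoted in the text, in units G = M = a = 1.
import math

W = 3 * math.pi / 16            # |W| for two superposed Plummer models
U_esc = math.sqrt(4 * W)        # from M*U_esc**2 / 4 = |W|

# Figure 3 setup: speed at infinity U_inf = U_esc / (2*sqrt(2)).
U_inf = U_esc / (2 * math.sqrt(2))
arrival = math.sqrt(U_esc**2 + U_inf**2)   # speed on arrival at center
print(arrival / U_esc)                     # ~1.061, matching the Figure 3 caption

# Doubling U_inf makes the total energy vanish: the orbital kinetic energy at
# infinity, (M/4)*(2*U_inf)**2, then equals the two galaxies' combined internal
# binding energy, 2 * (W_self / 2), with W_self the self-energy of one sphere.
W_self = 3 * math.pi / 32
print((2 * U_inf)**2 / 4, 2 * (W_self / 2))   # both ~0.2945, i.e. 3*pi/32
```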
{"url":"http://ned.ipac.caltech.edu/level5/Toomre/Toomre2.html","timestamp":"2014-04-16T10:22:06Z","content_type":null,"content_length":"13195","record_id":"<urn:uuid:db313039-54ee-4bbf-ae7b-5cdcad7cc0a5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Hersh's incoherent attack on formalism and logicism
Reuben Hersh rhersh at math.math.unm.edu
Thu Oct 1 18:14:20 EDT 1998

On Thu, 1 Oct 1998, Stephen G Simpson wrote:
> Reply to Hersh 30 Sep 1998 16:19:42.
> 1. Frege and modern logic
> Hersh writes:
> > You say "you must dismiss Frege's work as a failure." Not at all.
> > I wrote, page 141 of W.I.M.R.: "Frege's introduction of quantifiers is
> > considered the birth of modern logic."
> Do *you* consider Frege's work to be the birth of modern logic?
> Do *you* think modern logic is of value for philosophy of mathematics?
> You don't seem to think so.

I'M SORRY, I SHOULDN'T HAVE USED THE ABBREVIATION W.I.M.R. IT STANDS FOR THE TITLE OF MY BOOK, WHICH FOR ALL I KNOW YOU MAY STILL HAVE A COPY OF: "WHAT IS MATHEMATICS, REALLY?" WHEN I QUOTE MY OWN PRINTED WORDS, OF COURSE I AM EXPRESSING MY OWN OPINION. THE POINT IS, IF YOU BOTHERED TO READ WHAT I SEND YOU, EITHER BOOKS OR MESSAGES, YOU WOULDN'T WASTE TIME WITH SUCH AN ABSURD QUESTION. THE ANSWER IS YES. YOUR LAST SENTENCE ABOVE IS, AS THEY SAY, "OFF THE WALL."

> 2. The axiom of infinity: Hilbert's program and formalism
> > Your research program doesn't respond to my remark.
> Which remark, and which research program?

STOP PLAYING GAMES. YOU'RE BRIGHT, YOU KNOW WHICH REMARK AND WHICH RESEARCH PROGRAM.

> One of your remarks (12 Sep 1998 18:06:45) was as follows:
> > One famous difficulty is [the] axiom of infinity. You can't do
> > modern math without it.
> One of "my" research programs (actually it involves a number of
> people) responds directly to your remark. It does so by examining
> the extent to which modern mathematics is reducible to finitism.
> This is in the context of Hilbert's program, which you have
> dismissed as a failure.

I NEVER DISMISSED IT. I SAID IT DIDN'T ACHIEVE ITS ORIGINAL GOAL. SEE THE ESSAY ON THIS SUBJECT BY JOHN VON NEUMANN IN W.I.M.R.

> Are you willing to rescind your dismissal?

THERE IS NO DISMISSAL.

> Are you willing to examine evidence against your remark?

SURE, OF COURSE.

> > You say, "If you are unwilling to study the role of infinity in
> > mathematics, then how can you expect anyone to take your comments
> > on it seriously?"
> >
> > My comment, that the axiom of infinity is not intuitively
> > plausible as an axiom of logic, is not mine. It has been made by
> > others ....
> Even if the comment has been made by others, you repeated it, so you
> must take some responsibility for it. You can't hide behind others.

RIGHT. I'M NOT HIDING. I QUOTED OTHERS TO BACK UP OR REINFORCE OR FORTIFY MY OWN OPINION.

> However, I wasn't referring to that particular comment. (More on
> that comment below, in connection with logicism.) Rather, I was
> referring to another of your comments concerning the axiom of
> infinity:
> > One famous difficulty is [the] axiom of infinity. You can't do
> > modern math without it.
> I say again: How can you expect anyone to take this comment
> seriously, if you are not willing to examine evidence for and
> against it?

I DON'T EXPECT ANYONE TO TAKE IT SERIOUSLY. WHETHER ANYONE TAKES IT SERIOUSLY IS ENTIRELY UP TO THEM. MY IMPRESSION IS THAT RUSSELL INTRODUCED THE AXIOM OF INFINITY BECAUSE HE COULDN'T DO WITHOUT IT. IF MODERN RESEARCH HAS SHOWN THAT RUSSELL WAS WRONG, THEN MY REMARK WOULD ALSO BE WRONG. I AM PUZZLED, THOUGH, BY THE FACT THAT PEOPLE CONTINUE TO REFER TO ZF AS THE FOUNDATIONAL AXIOMS OF SET THEORY. IF YOU ARE THROWING OUT THE AXIOM OF INFINITY, SHOULDN'T IT BE ZFS (ZERMELO-FRAENKEL-SIMPSON)?
> > You say, "You present a a caricature of Hilbert's work, then > > attack the caricature." No. I used the word "formalism in the > > common, colloquial sense, not in Hilbert's sense. There is no > > caricature and no attack. > Here you seem to be evading the fact that Hilbert is generally > regarded as the originator of formalism. Do you dispute this > conventional view of the history of formalism? > But, all right, let's take you at your word and assume that you > never attacked Hilbert's formalism. Let's assume that you were > attacking somebody else's formalism. > Who are these hitherto unnamed formalists? CURRY, HENLE Do you recognize a > difference between their views and those of Hilbert? YES. Or are you > merely attacking coffee-room chatter, as Martin Davis suggested? MY MAIN CONCERN IN WIMR IS THE PHILSOPHY OF MATHEMATICS IN THE SENSE OF THE PHILSOPHICAL VIEW OF MATHEAMTAICS IMPORTANT. OF COURSE, YOU ARE FREE TO SNEER AT IT AND DISMISS IT IF YOU CH;OOSE TO DO THAT. > > > You say, "You were arguing that it's OK to dismiss Hilbert's > > views without a hearing." As I keep trying to explain, I never > > referred to Hilbert's views at all. The word formalism has more > > than one meaning. I can't believe you're unaware of that. > I'm *not* aware of that. I accept the conventional view that > Hilbert is the originator of formalism. If you have some other kind > of formalism in mind, please tell me who originated it and how it > differs from Hilbert's formalism. FORMALISM IN COMMON SPEECH SAYS THAT MATHEMATICS IS JUST FORMULAS AND CALCULATIONS. MEANING IF ANY IS EXTRAMATHEMATICAL. I DON'T KNOW WHO ATTACKED. SORRRY I CAN'T GIVE YOU THE PAGE REFERENCE. HILBERT > Here are the real questions: > Do *you* think Hilbert's program is of any actual or potential value > for philosophy of mathematics? > YES. > Do *you* think the research of your other formalists (who are they?) > CURRY AND HENLE has any actual or potential value for philosophy of > 3. The axiom of infinity: set theory and logicism > Hersh writes: > > You seem incapable of dealing with this well known fact. > Here you are referring to the well known fact that the axiom of > infinity is not generally regarded as a logical axiom. I accept > that fact, and I understand the reasons for it, at least in the > context of Russell's type theory and ZF set theory. In this sense, > one could say that these theories do not represent a *total* > vindication of the logicist program. But it's going too far to say, > as you do, that the logicist program as a whole is a mistake or a > failure. I SAID IT FAILED TO ACHIEVE ITS ORIGINAL GOAL. THAT DOESN'T MEAN IT WAS A FAILURE AS A WHOLE OR A MISTAKE. I HAVE TOLD YOU OVER AND OVER THAT I RECOGNIZE THE ACHIEVEMENTS OF LOGICISM. LIKE, LOOK BACK AT THE VERY FIRST QUESTION ON Y;OUR > By the way, there is an alternative set theory known as New > Foundations (= NF), going back to Quine. I don't know too much > about it, but my impression is that it attempts to carry out the > logicist program by deriving the axiom of infinity and others from > some logical principles. Naturally there are costs to this. As I > say, I am not an expert on this. The FOM subscriber list includes > some experts on NF: Thomas Forster, Randall Holmes. > Also, there is some recent work of Harvey Friedman about motivating > the axioms of set theory in a more logical way, as an outcome of a > theory of mathematical predication. > Do you regard this kind of f.o.m. research as legitimate? 
> Do you regard it as having potential interest for philosophy of mathematics?

YES.

> 4. Demonization
> Hersh writes:
> > what do you mean, "demonize"? When you attribute such motives to
> > me, it's I who am being demonized. To criticize or even reject
> > foundationalism isn't demonizing anything. It's what people do
> > in the course of finding their philosophical beliefs.
> You have gone beyond what I regard as legitimate philosophical
> criticism. You have attacked foundationalism as anti-"humanistic",
> anti-life in a sense,

IF YOU CAN CITE WHEN AND WHERE I DID SUCH A THING, I WILL GLADLY RETRACT AND APOLOGIZE. MY GUESS IS THAT YOU ARE DAZED BY THE NAME I CHOSE FOR MY OWN IDEAS--HUMANISM. THEN, BY A MISUSE OF LOGIC, YOU CONCLUDE THAT I AM CALLING YOU AND YOUR FRIENDS "ANTI-HUMANISTIC," WHICH COULD ONLY MEAN DISAGREEING WITH HUMANISM, AS YOU CERTAINLY DO. BUT ON THE WAY YOU DROP A FEW LETTERS AND IMAGINE I CALLED YOU AND YOUR FRIENDS ANTI-HUMAN. BIG MISTAKE. NEXT TIME PAY BETTER ATTENTION.

> and you have tried to artificially link
> foundationalism with religion and with authoritarian or totalitarian
> politics. I don't think "demonize" is too strong a term to describe
> your behavior.

I PRESENTED HISTORICALLY THE CONNECTION BETWEEN RELIGION AND PHILOSOPHY OF MATHEMATICS, FROM PLATO TO LEIBNIZ. I WOULD BE INTERESTED IF YOU BOTHERED TO READ THAT, AND TELL ME IF THERE ARE ANY MISTAKES OR FALSIFICATIONS THERE. IN ANY CASE ESTABLISHING A LINK WITH RELIGION IS NOT DEMONIZING! AS FAR AS POLITICS, THIS DISCUSSION WAS A FEW PAGES OF CHAPTER 11. THERE I TABULATED THE PHILOSOPHICAL VS. POLITICAL VIEWS OF 23 PHILOSOPHERS. IT TURNED OUT THAT WHAT I CALL THE "MAVERICKS," AS OPPOSED TO THE "MAINSTREAM," WERE MOSTLY LEFT WING. THAT'S IT. NOTHING ABOUT AUTHORITARIAN OR TOTALITARIAN. IT WOULD BE NICE IF YOU LOOKED OVER THOSE FEW PAGES AND TELL ME IF I MADE ANY MISTAKES. IF NOT, THEN I HAVE TO CONCLUDE THAT YOU SIMPLY OBJECT TO ANY COMPARISON BETWEEN PHILOSOPHY AND POLITICS. FINE, IF THAT'S WHAT YOU THINK, SAY SO. BUT WHERE IS THE DEMONIZING?

IT WAS DUMMETT WHO READ FREGE'S JOURNAL AND FOUND HE WAS A NAZI. BROUWER WAS TRIED BY HIS OWN UNIVERSITY AND CONVICTED. BUT IT'S TOO LATE. IT'S IN YOUR LIBRARY BACK THERE IN

> > It's weird to tell me I regard the pursuit of certainty as "evil
> > incarnate."
> It's not at all weird to tell you this, in light of your attempt to
> demonize foundationalism, on the explicit grounds that the
> foundationalists (Frege, Brouwer, Hilbert, ...) were motivated by a
> quest for certainty.

YOU OWNED~!!!

> > to be fair, you'll have to accuse Sol of demonizing, attacking,
> > and being "so hostile" to fom!!
> Not at all. Sol Feferman has never attempted to demonize f.o.m. by
> saying that it is anti-humanistic and linking it to totalitarian
> politics, as you routinely do.
> 5. Misinterpretation
> Reuben Hersh writes:
> > I asked why you consistently persist in misinterpreting me.
> > You didn't answer, of course.
> OK, I'll answer. The answer is that I don't accept the premise of
> your question. The premise of your question is that I am
> misinterpreting you. I don't accept that premise. I don't think I am
> misinterpreting you. I think my interpretation of you is correct. To
> put it colloquially, I think I've "got your number".

GREAT! LET'S HEAR IT! WHAT IS THE NUMBER OF ME THAT YOU'VE GOT? I'M EAGER TO HEAR.

REUBEN HERSH

> -- Steve

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-October/002244.html","timestamp":"2014-04-20T01:13:31Z","content_type":null,"content_length":"16164","record_id":"<urn:uuid:f0b5d052-7676-4f19-945c-70c7a8e9d912>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Neck SAT Math Tutor ...The Computer Science program (ABET accredited) at Loyola requires Bachelors of Science to take 4 semesters (2 years) worth of the C programming language. In 2009, I received a Bachelor of Science degree (with honors) in Computer Science from Loyola College in Maryland. I graduated with an overall GPA of 3.65 and major GPA of 3.76. 53 Subjects: including SAT math, reading, algebra 1, GRE ...I am an experienced mathematics and science teacher, with a wide range of interests and an extensive understanding of physics and mathematics. I love to talk with students of all ages about these subjects, and I would like to help you to appreciate their fundamental simplicity and beauty while g... 25 Subjects: including SAT math, chemistry, physics, calculus ...I also help foreign graduate students perfect their grammar and delivery in writing. I believe in building confidence while teaching material. We all excel faster in some areas and slower in 25 Subjects: including SAT math, reading, writing, English ...I love teaching math because I push my students to understand math conceptually. When I teach math, I use a lot of concrete objects and pictures so that my students understand on a deeper level, and these techniques spark curiosity in my students. While teaching English in Korea, I learned many... 20 Subjects: including SAT math, English, reading, writing ...Knowledge is power and it transformed me in many ways. I have had the pleasure of helping hundreds and hundreds of students for the last 15 years and see them improve. I love to see students empowered, realize their own potential and master challenges that they never thought possible before. 55 Subjects: including SAT math, English, reading, calculus
{"url":"http://www.purplemath.com/little_neck_sat_math_tutors.php","timestamp":"2014-04-20T11:15:20Z","content_type":null,"content_length":"24099","record_id":"<urn:uuid:66bedb52-1caf-4ded-a827-423fc43f295a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Sustained development of a society with a renewable resource
Technology shocks and aggregate fluctuations in an estimated hybrid RBC model
Country portfolio dynamics
A game options approach to the investment problem with convertible debt financing
A dynamic model of shirking and unemployment: Private saving, public debt, and optimal taxation
Steady-state invariance in high-order Runge-Kutta discretization of optimal growth models
Firm heterogeneity, trade, and wage inequality
Is corporate control effective when managers face investment timing decisions in incomplete markets?
Heterogeneous trading strategies with adaptive fuzzy Actor-Critic reinforcement learning: A behavioral approach
Self-organized criticality in a dynamic game
Endogenous debt constraints in a life-cycle model with an application to social security
Identifying a permanent markup shock and its implications for macroeconomic dynamics
On the specification of noise in two agent-based asset pricing models
A model of debit card as a means of payment
Smooth-adjustment econometrics and inventory-theoretic money management
Financial crises and interacting heterogeneous agents
Envelope theorems for locally differentiable open-loop Stackelberg equilibria of finite horizon differential games
Dynamic predictor selection in a new Keynesian model with heterogeneous expectations
Structural shocks and the comovements between output and interest rates
Labor-market volatility in the search-and-matching model: The role of investment-specific technology shocks
Nominal vs real wage rigidities in New Keynesian models with hiring costs: A Bayesian evaluation
Patents as collateral
Discretization of highly persistent correlated AR(1) shocks
Optimal monetary rules under persistent shocks
Macroeconomic models and the yield curve: An assessment of the fit
Monthly pass-through ratios
Welfare costs of inflation when interest-bearing deposits are disregarded: A calculation of the bias
On the theory of sterilized foreign exchange intervention
On the hidden hazards of adaptive behavior
Asian and Australian options: A common perspective
Optimal tax rules and addictive consumption
A constructive geometrical approach to the uniqueness of Markov stationary equilibrium in stochastic games of intergenerational altruism
Optimal lending contracts with long run borrowing constraints
Progressive taxation and macroeconomic (in)stability with productive government spending
Price dynamics in a market with heterogeneous investment horizons and boundedly rational traders
Time consistent vs. time inconsistent dynamic asset allocation: Some utility cost calculations for mean variance preferences
Targets for global climate policy: An overview
The role of non-convex costs in firms' investment and financial dynamics
Macroeconomic (in)stability under real interest rate targeting
Robust monetary rules under unstructured model uncertainty
Adaptive learning with a unit root: An application to the current account
Solving the incomplete markets model with aggregate uncertainty using explicit aggregation
Ability-heterogeneity, entrepreneurship, and economic growth
Solving the incomplete markets model with aggregate uncertainty using the Krusell-Smith algorithm and non-stochastic simulations
Behavioural heterogeneity and shift-contagion: Evidence from the Asian crisis
Does tax competition really promote growth?
On nonrenewable resource oligopolies: The asymmetric case
Endogenous growth and adverse selection in entrepreneurship
Aging, transitional dynamics, and gains from trade
Comparison of solutions to the incomplete markets model with aggregate uncertainty
A quantitative exploration of the Golden Age of European growth
Pooling forecasts in linear rational expectations models
Estimated U.S. manufacturing production capital and technology based on an estimated dynamic structural economic model
Implied recovery
Optimal stalling when bargaining
Computational suite of models with heterogeneous agents: Incomplete markets and aggregate uncertainty
Uncertainty-driven growth
Capital-labor substitution and equilibrium indeterminacy
Can a stochastic cusp catastrophe model explain stock market crashes?
On-the-job search, sticky prices, and persistence
Delegation, time inconsistency and sustainable equilibrium
On the distributional consequences of epidemics
Dynamic investment and capital structure under manager-shareholder conflict
More hedging instruments may destabilize markets
A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models
Markov-perfect capital and labor taxes
Solving the incomplete markets model with aggregate uncertainty using parameterized cross-sectional distributions
A lattice algorithm for pricing moving average barrier options
Solving the incomplete market model with aggregate uncertainty using a perturbation method
Solving the incomplete markets model with aggregate uncertainty using the Krusell-Smith algorithm
Optimal monetary policy in economies with dual labor markets
Optimal timing of management turnover under agency problems
White discrimination in provision of black education: Plantations and towns
Preferences with frames: A new utility specification that allows for the framing of risks
Real wages over the business cycle: OECD evidence from the time and frequency domains
Behavioral heterogeneity in dynamic search situations: Theory and experimental evidence
An adverse selection model of optimal unemployment insurance
Modeling structural breaks in economic relationships using large shocks
Modelling long memory and structural breaks in conditional variances: An adaptive FIGARCH approach
Life-cycle savings, bequest, and a diminishing impact of scale on growth
Stochastic adaptation in finite games played by heterogeneous populations
Investor heterogeneity, asset pricing and volatility dynamics
Learning games
Assessing the accuracy of the aggregate law of motion in models with heterogeneous agents
Single-leader-multiple-follower games with boundedly rational agents
Comparing DSGE-VAR forecasting models: How big are the differences?
Structural changes in the US economy: Is there a role for monetary policy?
Chaos in the cobweb model with a new learning dynamic
Solving the incomplete markets model with aggregate uncertainty by backward induction
Jealousy and underconsumption in a one-sector model with wealth preference
Life-cycle portfolio choice: The role of heterogeneous under-diversification
The intrinsic comparative dynamics of infinite horizon optimal control problems with a time-varying discount rate and time-distance discounting
Deregulation shock in product market and unemployment
Oligopoly exploitation of a private property productive asset
Government education expenditures in early and late childhood
Monetary regime change and business cycles
New insights into optimal control of nonlinear dynamic econometric models: Application of a heuristic approach
Pricing Parisian and Parasian options analytically
A flexible matrix Libor model with smiles
Heterogeneous beliefs and housing-market boom-bust cycles
Characterization of a risk sharing contract with one-sided commitment
The marginal welfare cost of capital taxation: Discounting matters
Dynamic R&D with spillovers: Competition vs cooperation
R&D policy in a volatile economy
Macroeconomic implications of early retirement in the public sector: The case of Brazil
Autoregression-based estimation of the new Keynesian Phillips curve
Learning about monetary policy rules when the housing market matters
Fiscal stimulus and labor market policies in Europe
Who becomes an entrepreneur? Labor market prospects and occupational choice
Large shareholders, monitoring, and ownership dynamics: Toward pure managerial firms?
Option pricing where the underlying assets follow a Gram/Charlier density of arbitrary order
Heterogeneous expectations in monetary DSGE models
Investment, matching and persistence in a modified cash-in-advance economy
Changes in the effects of monetary policy on disaggregate price dynamics
Escaping expectation traps: How much commitment is required?
Are the representative agent's beliefs based on efficient econometric models?
Measuring high-frequency income risk from low-frequency data
The information content of capacity utilization for detrending total factor productivity
A system reduction method to efficiently solve DSGE models
Dynamically optimal R&D subsidization
On the local stability of the stationary solution to variational problems
Central bank independence and public debt policy
Estimation of an agent-based model of investor sentiment formation in financial markets
Composition of R&D and technological cycles
Equilibrium with new investment opportunities
The solution of the infinite horizon tracking problem for discrete time systems possessing an exogenous component
Decomposition of the international consequences of policies into world and difference effects: Application to the Fair multi-country model
Information technologies, embodiment and growth
Dynamic portfolio choice and asset pricing with differential information
A theory of optimal deadlines
Capital and macroeconomic instability in a discrete-time model with forward-looking interest rate rules
Are taxes too low?
A conditional extreme value volatility estimator based on high-frequency returns
Capacity utilization and market power
A comparative study of portfolio insurance
Optimal currency diversification for a class of risk-averse international investors
The report of the committee on policy optimisation -- UK
Bounded rationality, heterogeneity and market dynamics
Optimal pest control in agriculture
Social insurance and taxation under sequential majority voting and utilitarian regimes
When does coordination pay?
Economic policymaking in the United States: New procedures under Humphrey-Hawkins
Joint production of substitutable, exhaustible resources, or: Is flaring gas rational?
Limit pricing in a mature market: A dynamic game approach
Stationary uncertainty frontiers in macroeconometric models and existence and uniqueness of solutions to matrix Riccati equations
Equilibrium dynamics in two-sector models of endogenous growth
Solving for optimal simple rules in rational expectations models
A dynamic analysis of moving average rules
The effectiveness of Keynes-Tobin transaction taxes when heterogeneous agents can trade in different markets: A behavioral finance approach
Sustained endogenous growth with decreasing returns and heterogeneous capital
Financial decision models in a dynamical setting
From decay to growth: A demographic transition to economic growth
Behavior of the firm in a market for heterogeneous labor
Temporary stabilization policy: The case of flexible prices and exchange rates
Stochastic multi-agent equilibria in economies with jump-diffusion uncertainty
The American put under transactions costs
A Hotelling model with a ceiling on the stock of pollution
A geometric approach to multiperiod mean variance optimization of assets and liabilities
Stochastic optimal policies when the discount rate vanishes
Monte Carlo methods for security pricing
How big is the debt overhang problem?
Economic dynamics of reservoir sedimentation management: Optimal control with singularly perturbed equations of motion
Characterizing sustainability: The converse of Hartwick's rule
The utility of manufacturing cooperatives
Timing of investment under technological and revenue-related uncertainties
A generalized impulse control model of cash management
An analysis of fiscal policy with endogenous investment-specific technological change
The suspension of the gold standard as sustainable monetary policy
Efficient equilibria in a differential game of capitalism
On the computation of equilibria in discounted stochastic dynamic games
Nominal rigidity and monetary uncertainty in a small open economy
Optimally derived fixed rules and indicators
On the role of government in a stochastically growing open economy
Devil's staircase and chaos from macroeconomic mode interaction
Stabilization policies for United States feed grain and livestock markets
Introduction to the Journal of Economic Dynamics and Control
Qualitative reasoning in economics
Strategic asset allocation in a continuous-time VAR model
A computational scheme for optimal investment-consumption with proportional transaction costs
Drift control of international reserves
Portable random number generators
Hybrid algorithms with automatic switching for solving nonlinear equation systems
Cluster analysis for portfolio optimization
Public services, increasing returns, and equilibrium dynamics
Balanced-growth-consistent recursive utility and heterogeneous agents
Common trends, the government's budget constraint, and revenue smoothing
Credit contagion and aggregate losses
On the cyclical allocation of risk
Excess covariance and dynamic instability in a multi-asset model
Optimal growth with pollution: how to use pollution permits?
A computational scheme for the optimal strategy in an incomplete market
Optimal consumption choices for a 'large' investor
Fiscal policy in unionized labor markets
Improving the value at risk forecasts: Theory and evidence from the financial crisis
Information shocks and precautionary saving
Simulating and calibrating diversification against black swans
Nonlinear and stable perturbation-based approximations
A direct test for the mean variance efficiency of a portfolio
Dynamical systems in macroeconomics: Alternative approaches to the analysis of macroeconomic fluctuations
Optimal dynamic durability
Does productive capital affect the order of resource exploitation?
Duality, separability, and functional structure: Theory and economic applications: C. Blackorby, D. Primont and R. Russell (North-Holland, Amsterdam, 1978) pp. xx + 396, $44.00
Competitive dynamic advertising: A modification of the Case game
On the probability of chaos in large dynamical systems: A Monte Carlo study
How does learning affect market liquidity? A simulation analysis of a double-auction financial market with portfolio traders
Effects of the Hodrick-Prescott filter on trend and difference stationary time series: Implications for business cycle research
Industrial subsidies and technology adoption in general equilibrium
Impatience and long-run growth
Hartwick's rule and economic conservation laws
Devaluating projects and the investment-uncertainty relationship
Rules of thumb in macroeconomic equilibrium: A quantitative analysis
The optimal consumption function in a Brownian model of accumulation. Part A: The consumption function as solution of a boundary value problem
Optimal target zones: How an exchange rate mechanism can improve upon discretion
Optimal taxation of capital income with imperfectly competitive product markets
U.S. money demand instability: A flexible least squares approach
Staggered updating in an artificial financial market
Evolutionary game dynamics and the analysis of agent-based imitation models: The long run, the medium run and the importance of global analysis
Labor market rigidities and R&D-based growth in the global economy
Combining competing forecasts of inflation using a bivariate ARCH model
The competitive market paradox
Path-dependence in a Ramsey model with resource amenities and limited regeneration
Evolved perception and behaviour in oligopolies
Variations in risk and fluctuations in demand: A theoretical model
Dynamics of beliefs and learning under aL-processes -- the heterogeneous case
Multivariate detrending under common trend restrictions: Implications for business cycle research
Attitudes toward the timing of resolution of uncertainty and the existence of recursive utility
Inventories, market structure, and price volatility
Continuous cascade models for asset returns
Inflation dynamics and the New Keynesian Phillips Curve: An identification robust econometric analysis
Experimental evidence on money as a medium of exchange
Equilibrium stock return dynamics under alternative rules of learning about hidden states
On alternative state space representations of time series models
Bayesian learning, growth, and pollution
Sources of complex dynamics in two-sector growth models
Dynamic advertising and pricing in an oligopoly: A Nash equilibrium approach
Sustainable growth, renewable resources and pollution
Solving heterogeneous-agent models by projection and perturbation
Gains from international monetary policy coordination: Does it pay to be different?
What do `residuals' from first-order conditions reveal about DGE models?
The JEDC and computational economics
A note on cointegration and control
Accounting for global warming risks: Resource management under event uncertainty
Learning dynamics, genetic algorithms, and corporate takeovers
Rules, reputation and macroeconomic policy coordination: David Currie and Paul Levine (Cambridge University Press, Cambridge, UK, 1993) HB £45, 430 pp
Erratum
Biconvergent stochastic dynamic programming, asymptotic impatience, and 'average' growth
Inefficiency of credible strategies in oligopolistic resource markets with uncertainty
A dynamic model of occupational choice
An empirical behavioral model of liquidity and volatility
Incomplete asset markets and the cross-country consumption correlation puzzle
Export restraints in a model of trade with capital accumulation
Capital accumulation and income distribution as the outcome of a dynamic game
A simple model of Schumpeterian growth with complex dynamics
Money as a medium of exchange in an economy with artificially intelligent agents
Monetary policy cooperation and multiple equilibria
The parametric path method: an alternative to Fair-Taylor and L-B-J for solving perfect foresight models
A multisectoral general equilibrium model of Schumpeterian growth and fluctuations
Parameter estimation in commodity markets: A filtering approach
A model of learning and emulation with artificial adaptive agents
Strategic behavior and noncooperative hierarchical control
Optimal consumption and investment strategies with a perishable and an indivisible durable consumption good
Investment under uncertainty -- Does competition matter?
Consistency and cautious fictitious play
Time series properties of an artificial stock market
A dynamic new Keynesian life-cycle model: Societal aging, demographics, and monetary policy
On dynamics with time-to-build investment technology and non-time-separable leisure
Optimal harvesting under resource stock and price uncertainty
Existence of stationary equilibrium in the markets for new and used durable goods
Investment under uncertainty with price ceilings in oligopolies
Equity premium with distorted beliefs: A puzzle
Management compensation and market timing under portfolio constraints
Models and decision making in national economies: J.M.L. Janssen, L.F. Pau and A. Straszak (North-Holland, Amsterdam, 1979)
Short rate nonlinearities and regime switches
Monetary and fiscal policies under two alternative types of rules
On the investment-uncertainty relationship in a real options model
Solving higher-dimensional continuous-time stochastic control problems by value function regression
Handbook of computational economics: H.M. Amman, D.A. Kendrick, J. Rust (eds.), vol. 1. North-Holland, Amsterdam, 1996, pp. xxi + 827, $163.75/265.0 Dutch Guilders. (ISBN 0-444-89857-3)
A DNS-curve in a two-state capital accumulation model: a numerical analysis
Optimal hedging in a dynamic futures market with a nonnegativity constraint on wealth
Leverage management in a bull–bear switching market
Cyclical dynamics of industrial production and employment: Markov chain-based estimates and tests
Can social security be welfare improving when there is demographic uncertainty?
Sustainable monetary policies
Learning with bounded memory in stochastic models
The treatment of registered retirement savings plans at maturity
Productive consumption, the intertemporal consumption trade-off and growth
On the transition from local regular to global irregular fluctuations
Frequent price changes under menu costs
Imperfect transparency and shifts in the central bank's output gap target
Inflation targeting with NAIRU uncertainty and endogenous policy credibility
New perspectives from the complex plane
Necessary conditions for infinite-horizon discounted two-stage optimal control problems
A multi-agent model for describing transhumance in North Cameroon: Comparison of different rationality to develop a routine
Leverage as a predictor for real activity and volatility
Computational modelling of price formation in the electricity pool of England and Wales
Strong time-consistency in the cartel-versus-fringe model
Causal reasoning and explanation in dynamic economic systems
Supply management with intermittent trade disruptions when the probabilities are not fully known
Asymmetric outcome in a symmetric dynamic duopoly
Production experiences and market structure in R&D competition
Inter-pattern speculation: Beyond minority, majority and $-games
Adaptive strategies of the firm through a business cycle
Asymptotic distribution of power spectra and peak frequencies in the stochastic response of econometric models
Structure, behavior, and market power in an evolutionary labor market with adaptive search
A Monte Carlo approach for the American put under stochastic interest rates
Recursive macroeconomic theory, Lars Ljungqvist and Thomas J. Sargent; The MIT Press, Cambridge, MA, 2000, pp. 737, $60.
On the stability of an adjustment process for spatial price equilibrium modeled as a projected dynamical system
Optimal transition to backstop substitutes for nonrenewable resources
The dynamic analysis of continuous-time life-cycle savings growth models
Public support to innovation and imitation in a non-scale growth model
The existence and properties of a stationary distribution for unemployment when job search is sequential
Stochastic saddlepoint systems: Stabilization policy and the stock market
The optimal economic lifetime of vintage capital in the presence of operating costs, technological progress, and learning
A model of sequential investment
How many cake-eaters? Chouette, on a du monde à dîner!
Measuring business cycles with business-cycle models
The convergence of multivariate unit root distributions to their asymptotic limits: The case of money-income causality
Computing second-order-accurate solutions for rational expectation models using linear solution methods
Can money matter for interest rate policy?
Animal spirits in the foreign exchange market
Do open market operations matter? Theory and evidence from the Second Bank of the United States
Optimal social security in a dynastic model with investment externalities and endogenous fertility
Controlled stochastic differential equations under Poisson uncertainty and with unbounded utility
Government spending, endogenous labor, and capital accumulation
Do CAPM results hold in a dynamic economy? A numerical analysis
Massively parallel computation of spatial price equilibrium problems as dynamical systems
Dilemmas with infinitesimal magnitudes: The case of resource depletion problem
A general framework for predicting returns from multiple currency investments
Robust portfolio selection using linear-matrix inequalities
A differential game approach to investment in product differentiation
Parametric continuity in dynamic programming problems
On infinite-horizon minimum-cost hedging under cone constraints
On the preservation of deterministic cycles when some agents perceive them to be random fluctuations
International relocation, the real exchange rate and welfare
Dynamic asset pricing theory with uncertain time-horizon
Econometric analysis of structural systems with permanent and transitory shocks
Optimal timing of capacity expansion
The impact of a financial transaction tax on stylized facts of price returns—Evidence from the lab
Dynamic optimal taxation, rational expectations and optimal control
Endogenous growth and collective bargaining
Choice of projects and their starting dates: An extension of Pontryagin's maximum principle to a case which allows choice among different possible evolution equations
A cooperative incentive equilibrium for a resource management problem
Factor taxation and labor supply in a dynamic one-sector growth model
Stochastic macroeconomic control with non-identical control intervals
Fiscal policy rules in an overlapping generations model with endogenous labour supply
The design of decentralized auction mechanisms that coordinate continuous trade in synthetic securities
Fiscal spending shocks, endogenous government spending, and real business cycles
Optimal monetary policy in a micro-founded model with parameter uncertainty
Transfers to sustain dynamic core-theoretic cooperation in international stock pollutant control
Perturbation and robustness analysis of a closed macroeconomic model
Distributed lag analysis: The Padé z-transform method
Credibility and the value of information transmission in a model of monetary policy and inflation
Imperfect competition, general equilibrium and unemployment
Functional equivalence between intertemporal and multisectoral investment adjustment costs
Education, economic growth, and brain drain
Krylov methods for solving models with forward-looking variables
When do borrowing constraints bind? Some new results on the income fluctuation problem
Approximating payoffs and pricing formulas
Production management, output volatility, and good luck
The equity premium and the allocation of income risk
Note on Goodwin's 1951 nonlinear accelerator model with an investment delay
A simplified treatment of the theory of optimal regulation of Brownian motion
Demographic structure and capital accumulation: A quantitative assessment
Hierarchical Decision Making in Stochastic Manufacturing Systems: S.P. Sethi and Qing Zhang (Birkhauser, Boston, Cambridge, MA) ISBN 0-8176-3735-4
Spectral decomposition of optimal asset-liability management
Options with combined reset rights on strike and maturity
Bootstrap-based bias correction for dynamic panels
PDE methods for pricing barrier options
Time-domain robustness criteria for large-scale economic systems
A non-parametric test for independence based on symbolic dynamics
Irreversibility and the behavior of aggregate stochastic growth models
Individual expectations, limited rationality and aggregate outcomes
Altruism, intergenerational transfers of time and bequests
Are hyperinflation paths learnable?
Heterogeneous beliefs and routes to chaos in a simple asset pricing model
Short-term planning and the life-cycle consumption puzzle
No-trade and uniqueness of steady states
International transmission of monetary and fiscal policy: A symmetric N-country analysis with union
A response to Professor Marcotte
Forward trading and storage in a Cournot duopoly
Dynamic production teams with strategic behavior
Exploration information and AEC regulation of the domestic uranium industry
The protection of intellectual property rights and endogenous growth: Is stronger always better?
Employment and hours over the business cycle
Do we need multi-country models to explain exchange rate and interest rate and bond return dynamics?
Behavioral heterogeneity in stock prices
Stochastic equilibrium: learning by exponential smoothing
Rollover risk, network structure and systemic financial crises
Effective securities in arbitrage-free markets with bid-ask spreads at liquidation: a linear programming characterization
Testing for hysteresis against nonlinear alternatives
A Newton-type method for the optimization and control of non-linear econometric models
Simple market protocols for efficient risk sharing
Endowments, stability, and fluctuations in OG models
A Hicksian two-sector model of unemployment, cycles, and growth
Balance sheets, exchange rate policy, and welfare
Global stability of unique Nash equilibrium in Cournot oligopoly and rent-seeking game
Numerical computation of the optimal vector field: Exemplified by a fishery model
Monopoly with endogenous durability
Default risks, interest rate spreads, and business cycles: Explaining the interest rate spread as a leading indicator
Testing for sign and amplitude asymmetries using threshold autoregressions
Liaisons dangereuses: Increasing connectivity, risk sharing, and systemic risk
Transitional dynamics in a two-sector non-scale growth model
On recalls, layoffs, variable hours, and labor adjustment costs
Growth and the dynamics of trade liberalization
Financial crashes as endogenous jumps: estimation, testing and forecasting
Optimal investment in learning-curve technologies
Vector rational error correction
Simulation-based exact jump tests in models with conditional heteroskedasticity
A simple asset pricing model with social interactions and heterogeneous beliefs
Analytic solving of asset pricing models: The by force of habit case
Initial conditions at Emancipation: The long-run effect on black-white wealth and earnings inequality
Intellectual property rights protection and endogenous economic growth
Destabilizing optimal policies in the business cycle
Quantifying and understanding the economics of large financial movements
Pricing American-style securities using simulation
Optimal consumption and portfolio rules with durability and habit formation
A new look at optimal growth under uncertainty
Global patent protection: channels of north and south welfare gain
Efficiency and optimality in stochastic models with production
Optimal portfolio management with American capital guarantee
Growth effect of taxes in an endogenous growth model: to what extent do taxes affect economic growth?
Cost uncertainty and the rate of investment
A robust method for simulating forward-looking models
Lifetime investment and consumption using a defined-contribution pension scheme
The inflation aversion of the Bundesbank: A state space approach
Export promotion, learning by doing and growth
Equilibrium asset prices and exchange rates
Heterogeneous beliefs, wealth accumulation, and asset price dynamics
A dynamic factor approach to nonlinear stability analysis
Indicator variables for optimal policy under asymmetric information
An algorithm for Ramsey pricing by multiproduct public firms under incomplete information
Stock market crashes as social phase transitions
An improved algorithm to solve a discrete matrix Riccati equation
The random-time binomial model
The tree-cutting problem in a stochastic environment: The case of age-dependent growth
Job search with belated information and wage signalling: A comment
On sustainable growth and collapse: Optimal and adaptive paths
Time to complete and research joint ventures: A differential game approach
Approximate state space models of some vector-valued macroeconomic time series for cross-country comparisons
Time to implement and aggregate fluctuations
Unemployment and the business cycle in a small open economy: G.M.M. estimation and testing with French data
Nearly redundant parameters and measures of persistence in economic time series
Multiple equilibria, fiscal policy, and human capital accumulation
On the open-loop Nash equilibrium in LQ-games
Turnpikes and computation of piecewise open-loop equilibria in stochastic differential games
Time consistent side payments in a dynamic game of downstream pollution
A turnpike theorem for continuous-time optimal-control models
The empirics of growth and convergence: A selective review
Why present-oriented societies undergo cycles of drug epidemics
The transfer of human capital
Towards endogenous recombinant growth
The value of information in a storage model with open- and closed-loop controls: A numerical example
A computational general equilibrium model with vintage capital
Optimal abatement in dynamic multi-pollutant problems when pollutants can be complements or substitutes
The variability of output-inflation tradeoffs
Evaluation of American option prices in a path integral framework using Fourier-Hermite series expansions
Competitive equilibrium and public investment plans
Mood fluctuations, projection bias, and volatility of equity prices
Sticky prices, fair wages, and the co-movements of unemployment and labor productivity growth
Complementarity problems in GAMS and the PATH solver
Delaying or deterring entry: A game-theoretic analysis
Costly information transmission in continuous time with implications for credit rating announcements
Policy design in asymmetrically dependent economies
Qualitative dynamics and causality in a Keynesian model
Hierarchical information and the rate of information diffusion
Structural estimation of real options models
The composition of government expenditure and its consequences for macroeconomic performance
Optimal long-run fiscal policy: Constraints, preferences and the resolution of uncertainty
Subsidies in an R&D growth model with elastic labor
A monetary business cycle model with unemployment
A massively parallel implementation of a discrete-time algorithm for the computation of dynamic elastic demand traffic problems modeled as projected dynamical systems
A general framework for evaluating executive stock options
Corporate control and real investment in incomplete markets
Consumption and portfolio turnpike theorems in a continuous-time finance model
Minimum-cost portfolio insurance
The climate change learning curve
Computing in economics and finance
A procedure for differentiating perfect-foresight-model reduced-form coefficients
Dynamic taxes and quotas with learning
Structural stochastic volatility in asset pricing dynamics: Estimation and model contest
Education, moral hazard, and endogenous growth
Alternative bias approximations in first-order dynamic reduced form models
Monetary equilibrium and the differentiability of the value function
Inferring strategies from observed actions: a nonparametric, binary tree classification approach
Economic implications of using a mean-VaR model for portfolio selection: A comparison with mean-variance analysis
Using cross-country variances to evaluate growth theories
Endogenous fiscal policy and capital market transmissions in the presence of demographic shocks
Nonconvexities in a stochastic control problem with learning
Further results on asset pricing with incomplete information
IR&D project data and theories of R&D investment
On income fluctuations and capital gains with a convex production function
Diverging patterns with endogenous labor migration
Training, adverse selection and appropriate technology: Development and growth in a small open economy
Optimal consumption-portfolio choices and retirement planning
Heterogeneous borrowers, liquidity, and the search for credit
Equilibrium turnpike theory with time-separable utility
Using stochastic growth models to understand unit roots and breaking trends
Existence of equilibria in exhaustible resource industries: Nonconvexities and discrete vs. continuous time
Estimation and inference in the linear-quadratic inventory model
The creation of plants and firms
Monetary policy rules for an open economy
A recursive forward simulation method for solving nonlinear rational expectations models
Intergenerational human capital evolution, local public good preferences, and stratification
A moving boundary approach to American option pricing
Repeated real options: optimal investment behaviour and a good rule of thumb
Consistent high-frequency calibration
Job matching and propagation
A clarification of the Goodwin model of the growth cycle
Information structure and stochastic control performance
Cooperative and non-cooperative fiscal stabilization policies in the EMU
Forecasting volatility and volume in the Tokyo Stock Market: Long memory, fractality and regime switching
A dynamic portfolio choice model of tax evasion: Comparative statics of tax rates and its implication for economic growth
A further note on flexible least squares and Kalman filtering
Chaotic dynamics and bifurcation in a macro model
Two-stage optimal control problems with an explicit switch point dependence: Optimality criteria and an example of delivery lags and investment
A dynamic migration model with uncertainty
Investment under uncertainty: calculating the value function when the Bellman equation cannot be solved analytically
Interest rate rules, endogenous cycles, and chaotic dynamics in open economies
Endogenous growth and the welfare costs of inflation: a reconsideration
Learning the optimum as a Nash equilibrium
Cycles in nonrenewable resource prices with pollution and learning-by-doing
The cyclical behavior of household and business investment in a cash-in-advance economy
Monetary policy, exchange rate dynamics and the labour market
Fiscal policy, monopolistic competition, and finite lives
Continuous time vs. backward induction: a new approach to modelling reputation in the finite time horizon context
Efficient representation of state spaces for some dynamic models
Robustifying learnability
Expectational stability of stationary sunspot equilibria in a forward-looking linear model
Profits, markups and entry: fiscal policy in an open economy
The simple analytics of optimal growth with illegal migrants
Multi-period information markets
Globalization, polarization and cultural drift
Competitive price paths of an exhaustible resource with increasing extraction costs
The reliability of control experiments: Comparison of the sources of error
A new statistic and practical guidelines for nonparametric Granger causality testing
An evolutionary analysis of turnout with conformist citizens
Evolutionary dynamics of currency substitution
A note on a new class of solutions to dynamic programming problems arising in economic growth
The effects of incomplete insurance markets and trading costs in a consumption-based asset pricing model
Moving horizon control in dynamic games
Depreciation rules and value invariance with extractive firms
Optimal monetary policy with uncertainty
Diffusion-induced instability and pattern formation in infinite horizon recursive optimal control
An alternative approach to stochastic calculus for economic and financial models
A direct discrete-time approach to Poisson-Gaussian bond option pricing in the Heath-Jarrow-Morton model
Explaining fashion cycles: Imitators chasing innovators in product space
Trade in capital goods and investment-specific technical change
Arbitrage pricing and the stochastic inflation tax in a multisector monetary economy
Trading in exhaustible resources in the presence of conversion costs: a general equilibrium approach
An exact solution for the investment and value of a firm facing uncertainty, adjustment costs, and irreversibility
Continuous time autoregressive models with common stochastic trends
Asset allocation under multivariate regime switching
Consistent expectations equilibria and learning in a stock market
Backward dynamics in economics. The inverse limit approach
Foreign exchange trading models and market behavior
Time-to-build and cycles
Bank capital regulation with random audits
Stabilizing properties of monetary feedback rules: A representative-agent approach
Learning, regime switches, and equilibrium asset pricing dynamics
Testing macroeconometric models: Ray C. Fair (Harvard University Press, Cambridge, MA, 1994) ISBN 0-674-87503-6
Decreasing and increasing marginal impatience and the terms of trade in an interdependent world economy
Interpreting cointegrated models
Stability, chaos and multiple attractors: a single agent makes a difference
Optimal monetary policy under flexible exchange rates
Cointegration and stock prices: The random walk on Wall Street revisited
Valuation and martingale properties of shadow prices: An exposition
Testing conditional asymmetry: A residual-based approach
Stochastic dominance bounds on derivatives prices in a multiperiod economy with proportional transaction costs
Venture capital financed investments in intellectual capital
A class of asset pricing models governed by subordinate processes that signal economic shocks
Existence and transversality conditions for a general unbounded-horizon model of the mining firm
Was it real?
The exchange rate -- Interest differential relation: 1973-1984A model of strategic default of sovereign debtReceding horizon control of jump linear systems and a macroeconomic policy problemDynamic optimization and forward looking processesHow long is the firm's forecast horizon?Advances in experimental and agent-based modelling: Asset markets, economic networks, computational mechanism design and evolutionary game dynamicsOptimal tax depreciation under a progressive tax systemA network analysis of the Italian overnight money marketOn Abel's concept of doubt and pessimismFinancial markets are markets in stories: Some possible advantages of using interviews to supplement existing economic data sourcesLong-run average welfare in a pollution accumulation modelOn some computational aspects of equilibrium business cycle theoryThe role of the target saving motive in guest worker migration A theoretical studyOptimal management of an R&D budgetGlobal bifurcations, credit rationing and recurrent hyperinflationsA multiperiod binomial model for pricing options in a vague worldOptimal taxation in an RBC model: A linear-quadratic approachConditional volatility, skewness, and kurtosis: existence, persistence, and comovementsOn the fluctuations in consumption and market returns in the presence of labor and human capital: An equilibrium analysisGaining the competitive edge using internal and external spillovers: a dynamic analysisCycles and chaos in a socialist economyOptimal disinflationary pathsDynamic optimal pricing and (possibly) advertising in the face of various kinds of potential entrantsIntensity-based framework and penalty formulation of optimal stopping problemsSpecialization and non-renewable resources: Ricardo meets RicardoA portfolio approach to endogenous growth: equilibrium and optimal policyInternational policy coordination and the reduction of the US trade deficitChoosing a monetary instrument The case of supply-side shocksMaximin, viability and sustainabilityLong-term risk management of nuclear waste: a real options approachRival models in policy optimizationWelfare effects of controlling labor supply: an application of the stochastic Ramsey modelOscillations in the Rodriguez model of entry and price dynamicsInterpolation and backdating with a large information setThe formulation of robust policies for rival rational expectations models of the economyNecessity of the transversality condition for stochastic models with bounded or CRRA utilityDo institutional changes affect business cycles? 
Evidence from EuropeMonopolistic competition, dynamic inefficiency and asset bubblesEmployment cycles in search equilibriumPredictability and habit persistenceTemporal risk aversion in a phased deregulation gameShort-memory and the PPP hypothesisReal business-cycle theory : Wisdom or whimsy?Income taxes, public investment and welfare in a growing economyDynamic portfolio selection with fixed and/or proportional transaction costs using non-singular stochastic optimal control theoryA patent race in a real options setting: Investment strategy, valuation, CAPM beta, and return volatilitySimplicity versus optimality: The choice of monetary policy rules when agents must learnApplications of randomized low discrepancy sequences to the valuation of complex securitiesImport price adjustments with staggered import contractsEvolving market structure: An ACE model of price dispersion and loyaltyOptimal growth when tastes are inheritedAdaptive learning and the use of forecasts in monetary policyEquilibrium and reinforcement learning in private-information games: An experimental studyOptimal interest rate stabilization in a basic sticky-price modelBuilding up social capital in a changing worldEndogenous growth theory: An introductionOn the application and use of DSGE modelsThe persistence of inflation in the United StatesThe stochastic lake game: A numerical solutionPricing of path-dependent American options by Monte Carlo simulationCongestible public goods and local indeterminacy: A two-sector endogenous growth modelAre European business cycles close enough to be just one?The importance of the number of different agents in a heterogeneous asset-pricing modelNonlinear Phillips curves, mixing feedback rules and the distribution of inflation and outputTwo-sided intergenerational transfer policy and economic development: A politico-economic approachOptimal investment and finance in renewable resource harvestingInfectious disease and preventive behavior in an overlapping generations modelDistribution of bankruptcy time in a consumption/portfolio problemEstimation of simultaneous equation models with stochastic trend componentsPricing home mortgages and bank collateral: A rational expectations approachMatchings, covers, and Jacobian matricesSurplus analysis for overlapping generationsInvestment timing, asymmetric information, and audit structure: A real options frameworkPrices as factors: Approximate aggregation with incomplete marketsModifications to the subroutine OPALQP for dealing with large problemsUnderreaction to fundamental information and asymmetry in mispricing between bullish and bearish markets. 
An experimental studyUtility based option evaluation with proportional transaction costsOn optimal portfolio choice under stochastic interest ratesInvestment, interest rate policy, and equilibrium stabilityA method for estimating the timing interval in a linear econometric model, with an application to Taylor's model of staggered contractsThe optimal lag selection and transfer function analysis in Granger causality testsAsset returns in an endogenous growth model with incomplete marketsState space modeling of time series : A review essayAdaptive expectations coordination in an economy with heterogeneous agentsGeometric combination lags as flexible infinite distributed lag estimatorsFinancially constrained arbitrage in illiquid marketsCritical debt and debt dynamicsPensions, wage profiles, and retirement rules specific human capital approachPeriodic linear-quadratic methods for modeling seasonalityLong-run effects of unfunded social security with earnings-dependent benefitsLimit price entry prevention when complete information is lackingHeterogeneous beliefs, asset prices, and volatility in a pure exchange economyCalculating short-run adjustments: Sensitivity to non-linearities in a representative agent frameworkOn the equivalence of solutions in rational expectations modelsImperfect price adjustment and the optimal assignment of monetary and fiscal policiesOn the extrapolation method and the USA algorithmThe optimality of socialist development strategies an empirical inquiryInterpreting a stochastic monetary growth model as a modified social planner's problemNonlinear expectations in speculative markets – Evidence from the ECB survey of professional forecastersDynamic specifications in optimizing trend-deviation macro modelsThe value of information: The case of signal-dependent opportunity setsOptimal policy in a model of endogenous fluctuations and assetsProgressive services, asymptotically stagnant services, and manufacturing: Growth and structural changeFactor demand models with nonlinear short-run fluctuationsGrowth and economic development : Siro Lombardini, (Edward Elgar, Cheltenham, Glos., UK; Brookfield, Vermont, USA) ISBN 1 85898 394 0; [UK pound]49.95Limit cycles in intertemporal adjustment models : Theory and applicationsA maximum entropy approach to estimation and inference in dynamic models or Counting fish in the sea using maximum entropyStrategic dynamic interaction : Fish warsInflationary financing of public investment and economic growthCentral bank reputation in a forward-looking modelComputing continuous-time growth models with boundary conditions via waveletsGlobal dynamics in macroeconomics: an overlapping generations exampleSolving long-term financial planning problems via global optimizationFat tails and volatility clustering in experimental asset marketsSaddlepoint approximations for affine jump-diffusion modelsFactor price uncertainty, technology choice and investment delayChaotic dynamics in a cash-in-advance economyBinomial valuation of lookback optionsEuropean option pricing and hedging with both fixed and proportional transaction costsTheorists of economic growth from David Hume to the presentComputing sunspot equilibria in linear rational expectations modelsIndeterminacy in a dynamic small open economyDynamic adjustment of firms' capital structures in a varying-risk environmentExchange market intervention under multiple solutions : Should we rule out multiple solutions?Equilibrium consumption and precautionary savings in a stochastically growing economyThe 
stable non-Gaussian asset allocation: a comparison with the classical Gaussian approachRational-expectations econometric analysis of changes in regime : An investigation of the term structure of interest ratesSector-specific capital and real exchange rate dynamicsA market structure for an environment with heterogeneous job-matches, indivisible labor and persistent unemploymentA solution to the positivity problem in the state-space approach to modeling vector-valued time seriesThe U.S. Phillips curve: The case for asymmetryDetermining the optimal dimensionality of multivariate volatility models with tools from random matrix theoryDirect preferences for wealth, the risk premium puzzle, growth, and policy effectivenessOn the minimax Lyapunov stabilization of uncertain economiesVariational inequalities in the analysis and computation of multi-sector, multi-instrument financial equilibriaMitigation of the Lucas critique with stochastic control methodsOptimal delta-hedging under transactions costsLearning competitive pricing strategies by multi-agent reinforcement learningEnvironmental policy and stable collusion: The case of a dynamic polluting oligopolyOptimal monetary policy in the generalized Taylor economySecond-order approximation of dynamic models without the use of tensorsEquilibrium open interestJump and volatility risk premiums implied by VIXPricing of CDOs based on the multivariate Wang transformMaintenance and investment: Complements or substitutes? A reappraisalTechnology shocks, capital utilization and sticky pricesEKC-type transitions and environmental policy under pollutant uncertainty and cost irreversibilityShape factors and cross-sectional riskCorrigendum to "New Keynesian versus old Keynesian government spending multipliers" [J. Econ. Dynam. 
Cairns, Robert D. and Tian, Huilan (2010). Journal of Economic Dynamics and Control 34(6), 1048-1061. http://www.sciencedirect.com/science/article/B6V85-4Y95TWS-5/2/ccae18e440d88ac70e48e3870a9d9a93
A maximin program is applied to a policy of sustaining a simple society whose population is dependent on a resource subject to logistic growth. Regular and non-regular paths are characterized. There are continua of both regular and non-regular solutions, the type depending on the initial conditions. A non-regular path involves an intermediate part in which the sustainment constraint is not effective. All solutions are time consistent and Pareto optimal. Because a problem may not be regular, it is not valid to assume that sustainment implies constant utility.
Keywords: Sustainability; Intergenerational equity; Maximin; Regular path; Population

Malley, Jim and Woitek, Ulrich (2010). Journal of Economic Dynamics and Control 34(7), 1214-1232. http://www.sciencedirect.com/science/article/B6V85-4YB78SF-1/2/264a470bcdf2ab7a9f61ba0ec06574f3
This paper contributes to the ongoing empirical debate regarding the role of the RBC model, and in particular of neutral and investment-specific technology shocks, in explaining aggregate fluctuations. To achieve this, we estimate the model's posterior density using Bayesian methods. Within this framework we first extend Ireland's (2001b, 2004a) hybrid estimation approach to allow for a vector autoregressive moving average (VARMA) process to describe the movements and co-movements of the model's errors not explained by the basic RBC model. Our main findings for the model with neutral technical change are: (i) the VARMA specification of the errors significantly improves the hybrid model's fit to the historical data relative to the VAR and AR alternatives; and (ii) despite setting the RBC model a more difficult task under the VARMA specification, neutral technology shocks are still capable of explaining a significant share of the observed variation in output and its components over shorter and longer forecast horizons, as well as hours at shorter horizons. When the hybrid model is extended to incorporate investment shocks, we find that: (iii) the VAR specification is preferred to the alternatives; and (iv) the model's ability to explain fluctuations improves considerably.
Keywords: Real business cycle; Bayesian estimation; Technology shocks; Measurement errors
Devereux, Michael B. and Sutherland, Alan (2010). Journal of Economic Dynamics and Control 34(7), 1325-1342. http://www.sciencedirect.com/science/article/B6V85-4YP0MSS-1/2/20f5f083e7a3e499a7ef1ffb12d8811e
This paper presents a general approximation method for characterizing time-varying equilibrium portfolios in a two-country dynamic general equilibrium model. The method can be easily adapted to most dynamic general equilibrium models, it applies to environments in which markets are complete or incomplete, and it can be used for models of any dimension. Moreover, the approximation provides simple, easily interpretable closed-form solutions for the dynamics of equilibrium portfolios.
Keywords: Country portfolios; Solution methods

Egami, Masahiko (2010). Journal of Economic Dynamics and Control 34(8), 1456-1470. http://www.sciencedirect.com/science/article/B6V85-4YS4W6D-1/2/2549d11447e81adf9b05c5cb96214f1a
We consider a firm that operates a single plant and has an expansion option to invest in a new plant with convertible debt financing. This conversion feature introduces another complication, not only because of the added conversion timing problem (by the bond holder) but also because the equity holder needs to take future conversion into account when evaluating her expansion/financing decision. We have two main objectives here. We use game options techniques to analyze the optimal strategies involved in this convertible-debt-financed expansion problem. The first goal is to provide a comprehensive framework and procedure for solving the problem in a mathematically tractable way. Secondly, we illustrate our solution method through a concrete example with economic analysis. This includes a comparison with straight bond financing and comparative statics with respect to price volatility and conversion ratio. In this regard, we attempt to clarify how the conversion feature affects the equity holder's investment decisions. Throughout the paper, we study expansion options by viewing a firm's existing operation, bankruptcy threat, conversion decisions and financing decisions all together.
Keywords: Convertible bond; Investment decision; Optimal stopping; Game options
Brecher, Richard A., Chen, Zhiqi and Choudhri, Ehsan U. (2010). Journal of Economic Dynamics and Control 34(8), 1392-1402. http://www.sciencedirect.com/science/article/B6V85-4YP8THN-1/2/958128a27e018e67b60d111392d8392a
This paper introduces private saving and public debt into the shirking-unemployment model of Shapiro and Stiglitz (1984), while relaxing their exclusive focus on steady states. After generalizing their no-shirking constraint to accommodate asset accumulation, and demonstrating that the resulting economy's equilibrium is saddle-path stable, we use our dynamic model to obtain significant departures from the Shapiro-Stiglitz prescriptions for optimal policy. Most notably, wage income should be taxed (not subsidized) in the long run if the labor market is sufficiently distorted. Furthermore, interest income should be (exhaustively) taxed only during an initial interval of time, as in Chamley's (1986) full-employment model.
Keywords: Shirking; Unemployment; Saving; Public debt; Optimal taxation

Ragni, Stefania, Diele, Fasma and Marangi, Carmela (2010). Journal of Economic Dynamics and Control 34(7), 1248-1259. http://www.sciencedirect.com/science/article/B6V85-4YMY6WF-1/2/70ba00ce41bc65d0b019c42b817db6cd
This work deals with infinite horizon optimal growth models and uses the results in Mercenier and Michel (1994a) as a starting point. Mercenier and Michel (1994a) provide a one-stage Runge-Kutta discretization of the above-mentioned models which preserves the steady state of the theoretical solution; they call this feature the "steady-state invariance property". We generalize their result by considering discrete models arising from the adoption of s-stage Runge-Kutta schemes. We show that the steady-state invariance property requires two different Runge-Kutta schemes for approximating the state variables and the exponential term in the objective function. This kind of discretization is well known in the literature as a partitioned symplectic Runge-Kutta scheme. Its main consequence is that it is possible to rely on the well-established theory of order and consider more accurate methods which generalize the first-order Mercenier and Michel algorithm. Numerical examples show the efficiency and accuracy of the proposed methods up to the fourth order, when applied to test models.
Keywords: Optimal growth models; Steady-state invariance; Partitioned symplectic Runge-Kutta methods
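To make the steady-state invariance property in the Ragni, Diele and Marangi abstract concrete, here is a minimal sketch of a partitioned discretization of a textbook Ramsey state-costate system. It is a toy of my own under assumed Cobb-Douglas technology and CRRA utility, not the authors' s-stage scheme: the costate is updated implicitly and the state explicitly (a symplectic-Euler-type pairing), and the continuous-time steady state is an exact fixed point of the discrete map for any step size h.

```python
import numpy as np

# Toy partitioned discretization of the Ramsey state-costate system
#   dk/dt = f(k) - delta*k - c(lam),  dlam/dt = lam*(rho + delta - f'(k)),
# with f(k) = k**alpha and CRRA utility, so c(lam) = lam**(-1/sigma).
# All parameter values are assumptions.
alpha, delta, rho, sigma, h = 0.3, 0.05, 0.04, 2.0, 0.1

f = lambda k: k**alpha
fp = lambda k: alpha * k**(alpha - 1.0)
c_of = lambda lam: lam**(-1.0 / sigma)

def step(k, lam):
    lam_new = lam / (1.0 - h * (rho + delta - fp(k)))   # implicit costate update
    k_new = k + h * (f(k) - delta * k - c_of(lam_new))  # explicit state update
    return k_new, lam_new

# Continuous-time steady state: f'(k*) = rho + delta, c* = f(k*) - delta*k*.
k_star = (alpha / (rho + delta)) ** (1.0 / (1.0 - alpha))
lam_star = (f(k_star) - delta * k_star) ** (-sigma)

k1, lam1 = step(k_star, lam_star)
print(abs(k1 - k_star), abs(lam1 - lam_star))  # both ~0: steady state preserved
```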
Unel, Bulent (2010). Journal of Economic Dynamics and Control 34(8), 1369-1379. http://www.sciencedirect.com/science/article/B6V85-4YMPX35-1/2/ff7bb76fa34cb15cc223b7ccd089efa1
This paper considers a world of symmetric countries with two factors of production and two sectors. Outputs of the two sectors are imperfect substitutes and the sectors differ in relative factor intensity. Each sector contains a continuum of heterogeneous firms that produce differentiated goods within their sector. Trade is costly and there are both variable and fixed costs of exporting. The paper shows that under some plausible conditions supported by the data, trade between similar countries can increase the demand for skilled labor, which in turn increases the wage inequality between skilled and unskilled labor. The quantitative analysis suggests that such trade effects have played an important role in the increase in the US skill premium.
Keywords: Firm heterogeneity; Trade; Skill premium

Henderson, Vicky (2010). Journal of Economic Dynamics and Control 34(6), 1062-1076. http://www.sciencedirect.com/science/article/B6V85-4YHNYS0-1/2/91bfdbcbfa5719333fb22b35351c0786
This paper presents a model of investment timing by risk-averse managers facing incomplete markets and corporate control. Managers are exposed to idiosyncratic risks due to the dependence of their compensation on investment payoffs which are not spanned by other assets. We show that risk-averse managers invest earlier than well-diversified shareholders would prefer, leading to significant agency costs. This effect can be mitigated if the manager is subject to corporate control. Our main finding is that the interaction of idiosyncratic risk and control results in two regimes. When the market is sufficiently close to being complete, control has a strong disciplinary effect and agency costs can be virtually eliminated. However, when idiosyncratic risk is too large, shareholders suffer agency costs and control is ineffective. An implication is that we would expect to see different investment behavior across industries or specific investments as the degree of idiosyncratic risk varies. It also bears on how well the standard complete-markets real options model and the NPV framework can serve as proxies in describing investment timing.
Keywords: Real options; Investment timing; Incomplete markets; Corporate control
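The "standard complete-markets real options model" that the Henderson abstract uses as a reference point has a closed-form investment trigger. The sketch below computes it under textbook McDonald-Siegel/Dixit-Pindyck assumptions (geometric Brownian project value, illustrative parameter values); it is that benchmark, not the paper's incomplete-markets model.

```python
import math

# Complete-markets real-option benchmark: a project worth V (GBM with payout
# rate delta under the pricing measure) can be bought for K. The optimal
# trigger is V* = beta/(beta-1) * K, with beta the positive root of
#   0.5*sigma^2*b*(b-1) + (r - delta)*b - r = 0.
def invest_threshold(r, delta, sigma, K):
    a = 0.5 - (r - delta) / sigma**2
    beta = a + math.sqrt(a**2 + 2.0 * r / sigma**2)
    return beta / (beta - 1.0) * K, beta

V_star, beta = invest_threshold(r=0.05, delta=0.04, sigma=0.3, K=1.0)
print(beta, V_star)  # threshold sits well above the naive NPV trigger V = K
```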
Bekiros, Stelios D. (2010). Journal of Economic Dynamics and Control 34(6), 1153-1170. http://www.sciencedirect.com/science/article/B6V85-4YB5M0K-1/2/09f47c920482b94f6031320b5d0c44ea
The present study addresses the learning mechanism of boundedly rational agents in the dynamic and noisy environment of financial markets. The main objective is the development of a system that "decodes" the knowledge-acquisition strategy and the decision-making process of technical analysts called "chartists". It advances the literature on heterogeneous learning in speculative markets by introducing a trading system wherein the market environment and agent beliefs are represented by fuzzy inference rules. The resulting functionality leads to the derivation of the parameters of the fuzzy rules by means of adaptive training. In technical terms, it expands the literature that has utilized Actor-Critic reinforcement learning and fuzzy systems in agent-based applications by presenting an adaptive fuzzy reinforcement learning approach that provides accurate and prompt identification of market turning points and thus higher predictability. The purpose of this paper is to illustrate this concretely through a comparative investigation against other well-established models. The results indicate that, with the inclusion of transaction costs, the profitability of the novel system in the case of the NASDAQ Composite, FTSE100 and NIKKEI225 indices is consistently superior to that of a Recurrent Neural Network, a Markov-switching model and a Buy-and-Hold strategy. Overall, the proposed system, via the reinforcement learning mechanism, the fuzzy rule-based state space modeling and the adaptive action selection policy, leads to superior predictions of the direction-of-change of the market.
Keywords: Agent-based modeling; Technical trading; Reinforcement learning; Fuzzy inference; Bounded rationality

Blume, Andreas, Duffy, John and Temzelides, Ted (2010). Journal of Economic Dynamics and Control 34(8), 1380-1391. http://www.sciencedirect.com/science/article/B6V85-4YVJ3S2-2/2/c1811e24cf56eff7c1980ec1ec19f031
We investigate conditions under which self-organized criticality (SOC) arises in a version of a dynamic entry game. In the simplest version of the game, there is a single location (a pool), and one agent is exogenously dropped into the pool every period. Payoffs to entrants are positive as long as the number of agents in the pool is below a critical level. If an agent chooses to exit, he cannot re-enter, resulting in a future payoff of zero. Agents in the pool decide simultaneously each period whether to stay in or not. We characterize the symmetric mixed-strategy equilibrium of the resulting dynamic game. We then introduce local interactions between agents that occupy neighboring pools and demonstrate that, under our payoff structure, local interaction effects are necessary and sufficient for SOC and for an associated power law to emerge. Thus, we provide an explicit game-theoretic model of the mechanism through which SOC can arise in a social context with forward-looking agents.
Keywords: Self-organization; Criticality; Local interaction; Power law; Entry game
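As a loose illustration of how local interaction can generate avalanche dynamics, here is a Bak-Tang-Wiesenfeld-style sandpile toy of my own: agents drop into pools on a grid, and a pool that reaches its critical occupancy spills one agent to each neighbor, possibly setting off a cascade. This mimics the flavor of the mechanism only; it is not Blume, Duffy and Temzelides' equilibrium entry game with forward-looking agents.

```python
import numpy as np

# Sandpile toy: one agent drops into a random pool each period; a pool with 4
# or more occupants "topples", sending one agent to each neighbor (spills over
# the grid edge leave the system). Avalanche sizes are recorded.
rng = np.random.default_rng(0)
L, T = 20, 50_000
h = np.zeros((L, L), dtype=int)
sizes = []

for _ in range(T):
    h[rng.integers(L), rng.integers(L)] += 1
    size = 0
    while True:
        xs, ys = np.nonzero(h >= 4)
        if xs.size == 0:
            break
        for x, y in zip(xs, ys):
            h[x, y] -= 4
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    h[nx, ny] += 1
            size += 1
    if size:
        sizes.append(size)

sizes = np.array(sizes)
print(f"{sizes.size} avalanches, mean {sizes.mean():.1f}, max {sizes.max()}")
```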
Andolfatto, David and Gervais, Martin (2008). Journal of Economic Dynamics and Control 32(12), 3745-3759. http://www.sciencedirect.com/science/article/B6V85-4S80XCN-1/2/62ce7cceab1d98f0c845b1409ba0e4bb
This paper develops a simple life-cycle model that embeds a theory of debt restrictions based on the existence of inalienable property rights a la Kehoe and Levine (1993. Debt constrained asset markets. Review of Economic Studies 60(4), 865-888; 2001. Liquidity constrained markets versus debt constrained markets. Econometrica 69(3), 575-598). In our environment, net debtors have the option of defaulting on unsecured debt at the cost of being subjected to wage garnishment and/or having some or all of their future assets seized by creditors. One advantage of our framework is that it encompasses two standard versions of the life-cycle model: one with perfect capital markets and one with a non-negative net-worth restriction. We study the impact of a payroll-financed social security system to illustrate the role of endogenous debt constraints and compare our results to a model with exogenous debt constraints. Whereas the aggregate effects are similar under both types of constraints, the distributional consequences are found to be significantly different across debt regimes.
Keywords: Life-cycle; Debt constraints; Social security

Kim, Bae-Geun (2010). Journal of Economic Dynamics and Control 34(8), 1471-1491. http://www.sciencedirect.com/science/article/B6V85-4YVJ3S2-1/2/d24fe373979013dd92a57969a090ae1b
A permanent (price) markup shock is justified using an industry-based model in which an increase in market concentration raises the desired markup. Moreover, evidence in favor of the non-stationarity of the markup is presented, which in turn implies that per capita hours are also non-stationary. Structural vector autoregressions are then constructed that can identify shocks to the markup, technology and the federal funds rate. The results show that (1) inflation responds immediately to shocks to the markup and technology, whereas it displays a hump-shaped response to a monetary policy shock, and that (2) per capita hours decline in response to positive shocks to the markup and technology. These empirical findings have important implications for macroeconomic dynamics, including the issues of inflation inertia and the technology-hours debate. The paper also points out that the dynamics of the economy cannot be correctly explained without consideration of the permanent markup shock. Finally, the approach in this paper suggests several ways to identify a wage markup shock using structural vector autoregressions.
Keywords: Industrial concentration; Markup shock; Technology shock

Franke, Reiner (2010). Journal of Economic Dynamics and Control 34(6), 1140-1152. http://www.sciencedirect.com/science/article/B6V85-4YB5M0K-3/2/91bb419bf62ed264872692b8d67dfb97
The paper is concerned with two recent agent-based models of speculative dynamics from the literature, one by Gaunersdorfer and Hommes (2007) and the other by He and Li (2007). At short as well as long lags, both of them display an autocorrelation structure in absolute and squared returns that comes fairly close to that of real data at a daily frequency. The note argues that these long-memory effects are to be ascribed to the stochastic specification of the price equation, which, despite the wide fluctuations in these models, fails to normalize the price shocks. Under an appropriate respecification, the long memory completely disappears. It is subsequently shown that an alternative introduction of randomness, which may be called structural stochastic volatility, can restore the original properties and even improve upon them.
Keywords: Volatility clustering; Autocorrelations of returns; Structural stochastic volatility; Heterogeneous agents

Kim, Young Sik and Lee, Manjong (2010). Journal of Economic Dynamics and Control 34(8), 1359-1368. http://www.sciencedirect.com/science/article/B6V85-4YM7FD3-1/2/30cda61f92a73158c52ef72d3deec0fa
This paper provides an explanation for both the rapid growth in the use of debit cards over time and the cross-sectional difference in debit card use, using a search-theoretic model. The trade-off between cash and a debit card as means of payment is incorporated such that a buyer incurs a disutility cost proportional to the amount of cash holdings, while a seller accepting a debit card bears a fixed record-keeping cost regardless of transaction amount. As the record-keeping cost decreases with the development of information technology over time, the disutility cost of cash holdings required for pairwise trade eventually exceeds the record-keeping cost, so that all agents with different wealth levels choose to use a debit card as a means of payment. Also, the disutility cost of cash holdings required for pairwise trade is higher for the rich than for the poor, implying the cross-sectional payment pattern that the rich use a debit card more frequently than the poor. There are two distinct mechanisms that improve welfare as the record-keeping cost decreases: one is to reduce the deadweight loss from holding cash and the other is to reduce its distortionary effect on output produced in pairwise trade.
Keywords: Cash; Debit card; Record-keeping cost; Means of payment

Greene, Clinton A. (2010). Journal of Economic Dynamics and Control 34(6), 1031-1047. http://www.sciencedirect.com/science/article/B6V85-4Y7P6R7-1/2/f0a18bda0709fd9282dcbd6b326ae1b4
A growing number of empirical papers use Miller-Orr (S, s) money management as economic motivation for the application of non-linear smooth-adjustment models. This paper shows such models are not implied by the Miller-Orr economy. Instead, the Miller-Orr economy implies non-standard smooth adjustment, as derived in the neglected (and misinterpreted) work of Milbourne et al. (1983). Remarkably, this function includes a varying weight on the lagged dependent variable, capturing static (not dynamic) effects. Interpretations of these apparent dynamics are presented, some of which may be useful in non-monetary (S, s) contexts. Results imply a new agenda for applied smooth-adjustment modeling of money.
Keywords: Money; Miller-Orr; Smooth-adjustment; Nonlinear; Inventory
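For readers unfamiliar with the Miller-Orr policy that the Greene abstract starts from, the sketch below simulates the textbook trigger-target rule: with fixed transfer cost F, per-period opportunity cost k and cash-flow variance sigma2, the balance is returned to z = (3*F*sigma2/(4*k))**(1/3) above the lower bound whenever it hits either bound. All parameter values are illustrative assumptions.

```python
import numpy as np

# Textbook Miller-Orr (S, s) cash management: random-walk cash flows, lump-sum
# transfers back to the return point Z whenever the balance hits L or H.
rng = np.random.default_rng(1)
F, k, sigma2, L, T = 10.0, 0.0004, 4.0, 0.0, 10_000

z = (3.0 * F * sigma2 / (4.0 * k)) ** (1.0 / 3.0)
H, Z = L + 3.0 * z, L + z

cash, transfers = Z, 0
for shock in rng.normal(0.0, np.sqrt(sigma2), size=T):
    cash += shock
    if cash >= H or cash <= L:
        cash = Z          # lump-sum transfer back to the return point
        transfers += 1

print(f"return point {Z:.1f}, upper bound {H:.1f}, transfers {transfers}")
```

Between transfers the balance wanders freely, which is exactly why observed money holdings need not follow a smooth partial-adjustment rule.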
Huang, Weihong, Zheng, Huanhuan and Chia, Wai-Mun (2010). Journal of Economic Dynamics and Control 34(6), 1105-1122. http://www.sciencedirect.com/science/article/B6V85-4Y95TWS-6/2/d85d88645005ee7abb9013dd2d2ff69d
In this paper we examine various types of financial crises and conjecture their underlying mechanisms using a deterministic heterogeneous agent model (HAM). In a market-maker framework, forward-looking investors update their price expectations according to psychological trading windows and cluster themselves strategically to optimize their expected profits. The switches between trading strategies lead to price dynamics that subsequently move prices up and down and, in the extreme case, cause financial crises. The model suggests that both fundamentalists and chartists can potentially contribute to financial crises.
Keywords: Financial crisis; Chaos; Multi-phase heterogeneous beliefs; Discounted expected profits

Van Gorder, Robert A. and Caputo, Michael R. (2010). Journal of Economic Dynamics and Control 34(6), 1123-1139. http://www.sciencedirect.com/science/article/B6V85-4YB5M0K-2/2/cbf1aec0363125a3e4816ad9cc2ce9b5
Envelope theorems are established for locally differentiable Stackelberg equilibria of a general class of finite-horizon differential games with an open-loop information structure. It is shown that the follower's envelope results agree in form with those of any player in an open-loop Nash equilibrium, while those of the leader differ. An unanticipated conclusion is that the costate vector of the leader (but not that of the follower) corresponding to the state vector of the differential game may be legitimately interpreted as the shadow value of the state vector for time-inconsistent open-loop Stackelberg equilibria. Surprisingly, the same cannot be said for time-consistent open-loop Stackelberg equilibria.
Keywords: Stackelberg duopoly; Envelope theorems; Differential games; Open-loop information structure

Branch, William A. and McGough, Bruce (2010). Journal of Economic Dynamics and Control 34(8), 1492-1508. http://www.sciencedirect.com/science/article/B6V85-4YRXCXD-1/2/cba958a253aee2258c82d62c48e28d82
This paper introduces dynamic predictor selection into a New Keynesian model with heterogeneous expectations and examines its implications for monetary policy. We extend Branch and McGough (2009) by incorporating endogenous time-varying predictor proportions along the lines of Brock and Hommes (1997). We find that periodic orbits and complex dynamics may arise even if the model under rational expectations has a unique stationary solution. The qualitative nature of the non-linear dynamics turns on the interaction between the hawkishness of the government's policy and the extrapolative behavior of non-rational agents.
Keywords: Heterogeneous expectations; Complex dynamics; Determinacy; Monetary policy
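The "dynamic predictor selection along the lines of Brock and Hommes (1997)" in the Branch-McGough abstract refers to a logit map from past forecast performance to predictor shares. The sketch below implements that generic mechanism in a two-predictor asset-pricing toy (fundamentalists versus trend extrapolators); it is the textbook device, not the paper's New Keynesian model, and all parameter values are assumptions.

```python
import numpy as np

# Brock-Hommes-style discrete choice between two predictors of the price
# deviation p from its fundamental: fundamentalists forecast 0, chartists
# extrapolate g*p. Shares follow a logit map on past squared forecast errors,
# with intensity of choice beta.
R, g, beta, T = 1.05, 1.2, 2.0, 1000
rng = np.random.default_rng(2)

p = np.zeros(T)
n_f = 0.5                         # share on the fundamentalist predictor
for t in range(2, T):
    e_f = 0.0                     # fundamentalist forecast of p[t]
    e_c = g * p[t - 2]            # chartist forecast of p[t] (info up to t-2)
    p[t] = (n_f * e_f + (1.0 - n_f) * e_c) / R + rng.normal(0.0, 0.01)
    U = np.array([-(p[t] - e_f) ** 2, -(p[t] - e_c) ** 2])  # predictor fitness
    w = np.exp(beta * (U - U.max()))                        # stable softmax
    n_f = w[0] / w.sum()          # share used when forming p[t+1]

print(f"final fundamentalist share {n_f:.2f}, price std {np.std(p):.3f}")
```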
Mertens, Elmar (2010). Journal of Economic Dynamics and Control 34(6), 1171-1186. http://www.sciencedirect.com/science/article/B6V85-4YG1KTC-2/2/05e16263ed0b8f65c2a8ba51e15f3db5
Stylized facts on U.S. output and interest rates have so far proved hard to match with simple DSGE models. I estimate covariances between output, nominal and real interest rates conditional on structural shocks, since such evidence has largely been lacking in previous discussions of the output-interest rate puzzle. Conditional on shocks to technology and monetary policy, the results square with simple models. Moreover, permanent inflation shocks accounted for the counter-cyclical and inversely leading behavior of the real rate during the Great Inflation (1959-1979). Over the Great Moderation (1982-2006), technology shocks were more dominant and the real rate has been pro-cyclical.
Keywords: Interest rates; Business cycles; Bandpass filter; Structural VAR; News shocks

Faccini, Renato and Ortigueira, Salvador (2010). Journal of Economic Dynamics and Control 34(8), 1509-1527. http://www.sciencedirect.com/science/article/B6V85-4YX7K78-1/2/8b4a36ac008e0596678c2fa0d803be6b
Shocks to investment-specific technology have been identified as a main source of U.S. aggregate output volatility. In this paper, we present a model with frictions in the labor market and explore the contribution of these shocks to the volatility of labor market variables, namely unemployment, vacancies, tightness and the job-finding rate. Thus, our paper contributes to a recent body of literature assessing the ability of the search-and-matching model to account for the large volatility observed in labor market variables. To this aim, we solve a neoclassical economy with search and matching, where neutral and investment-specific technologies are subject to shocks. The three key features of our model economy are: (i) firms are large, in the sense that they employ many workers; (ii) adjusting capital and labor is costly; and (iii) wages are the outcome of an intra-firm Nash bargaining problem between the firm and its workers. In our calibrated economy, we find that shocks to investment-specific technology explain 40% of the observed volatility in U.S. labor productivity. Moreover, these shocks generate relative volatilities in vacancies and the workers' job-finding rate which match those observed in U.S. data. Relative volatilities in unemployment and labor market tightness are 55% and 75% of their empirical values, respectively.
Keywords: Search and matching; Labor market fluctuations; Investment-specific technology; Adjustment costs; Factor adjustment dynamics
Riggi, Marianna and Tancioni, Massimiliano (2010). Journal of Economic Dynamics and Control 34(7), 1305-1324. http://www.sciencedirect.com/science/article/B6V85-4YK2F01-1/2/0d63ef072bed282ac27c9f6d24ac2a91
The inclusion of labor market frictions in the New Keynesian DSGE model overcomes the main drawbacks of the baseline framework. In this paper we show that this extended model, by assuming real wage rigidities, does not replicate the correct wage dynamics and the negative conditional correlation between technology shocks and employment observed in the data, known as the "productivity-employment puzzle". We show also that these empirical limitations can be overcome by replacing real wage rigidities with nominal wage rigidities, without sacrificing other appealing features of the model. We adopt a Bayesian perspective to estimate the dynamic properties of the model with real wage rigidities and compare them with those of the model with nominal wage rigidities. We show that the evidence favors the latter construction.
Keywords: New Keynesian model; Labor market frictions; Wage rigidities; Technology shocks; Bayesian inference

Amable, Bruno, Chatelain, Jean-Bernard and Ralf, Kirsten (2010). Journal of Economic Dynamics and Control 34(6), 1092-1104. http://www.sciencedirect.com/science/article/B6V85-4YMB60S-1/2/5e7f1397db49cf4c79e4f4e95a58731e
This paper studies how the assignment of patents as collateral determines the savings of firms and magnifies the effect of innovative rents on investment in research and development (R&D). We analyse the behaviour of innovative firms that face random and lumpy investment opportunities in R&D. High growth rates of innovations, possibly higher than the real rate of interest, may be achieved despite financial constraints. There is an optimal level of publicly funded policy by the patent and trademark office that minimizes the legal uncertainty surrounding patents as collateral and maximizes the growth rate of innovations.
Keywords: Collateral; Patents; Research and development; Credit rationing; Growth; Innovation

Galindev, Ragchaasuren and Lkhagvasuren, Damba (2010). Journal of Economic Dynamics and Control 34(7), 1260-1276. http://www.sciencedirect.com/science/article/B6V85-4YDKJS5-1/2/7ecd9a20d7f3d5a9342f90edf9824787
The finite-state Markov-chain approximation methods developed by Tauchen (1986) and Tauchen and Hussey (1991) are widely used in economics, finance and econometrics to solve functional equations in which state variables follow autoregressive processes. For highly persistent processes, the methods require a large number of discrete values for the state variables to produce close approximations, which leads to an undesirable reduction in computational speed, especially in the multivariate case. This paper proposes an alternative method of discretizing multivariate autoregressive processes. This method can be treated as an extension of Rouwenhorst's (1995) method which, according to our finding, outperforms the existing methods in the scalar case for highly persistent processes. The new method works well as an approximation that is much more robust to the number of discrete values for a wide range of the parameter space.
Keywords: Finite state Markov-chain approximation; Discretization of multivariate autoregressive processes; Transition matrix; Numerical methods; Value function iteration
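Rouwenhorst's (1995) method, which the Galindev-Lkhagvasuren abstract extends to the multivariate case, is easy to state for a scalar AR(1): with p = q = (1+rho)/2 the chain's first-order autocorrelation equals 2p - 1 = rho exactly for any number of states, which is why it copes well with high persistence. A sketch of the scalar construction:

```python
import numpy as np

# Rouwenhorst (1995) discretization of y' = rho*y + eps, eps ~ N(0, sigma_eps^2).
def rouwenhorst(N, rho, sigma_eps):
    p = (1.0 + rho) / 2.0
    Pi = np.array([[p, 1.0 - p], [1.0 - p, p]])
    for n in range(3, N + 1):                 # build the N-state matrix recursively
        big = np.zeros((n, n))
        big[:-1, :-1] += p * Pi
        big[:-1, 1:] += (1.0 - p) * Pi
        big[1:, :-1] += (1.0 - p) * Pi
        big[1:, 1:] += p * Pi
        big[1:-1, :] /= 2.0                   # interior rows are double-counted
        Pi = big
    sigma_y = sigma_eps / np.sqrt(1.0 - rho**2)
    grid = np.linspace(-sigma_y * np.sqrt(N - 1), sigma_y * np.sqrt(N - 1), N)
    return grid, Pi

grid, Pi = rouwenhorst(N=7, rho=0.99, sigma_eps=0.01)
pi = np.linalg.matrix_power(Pi.T, 5000)[:, 0]  # stationary distribution
mu = pi @ grid
print(np.sqrt(pi @ (grid - mu) ** 2))          # matches sigma_y ~ 0.0709 exactly
```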
Bhattacharya, Joydeep and Singh, Rajesh (2010). Journal of Economic Dynamics and Control 34(7), 1277-1294. http://www.sciencedirect.com/science/article/B6V85-4YN5P8R-1/2/3bd99ee6f9303e1a2960a5cb69cbbc7e
The tug-of-war for supremacy between inflation targeting and monetary targeting is a classic, yet timely, topic in monetary economics. In this paper, we revisit this issue within the context of a pure-exchange, overlapping generations model in which spatial separation and random relocation create an endogenous demand for money. We study AR(1) shocks to both real output and the real interest rate. Irrespective of the nature of the shocks, the optimal inflation target is always positive. Under monetary targeting, shocks to output necessitate negative money growth rates; for shocks to real interest rates, money growth rates may be either positive or negative depending on the elasticity of consumption substitution. Also, for output shocks, monetary targeting welfare-dominates inflation targeting, but the gap between the two vanishes as the shock process approaches a random walk. In sharp contrast, for shocks to the real interest rate, we prove that monetary targeting and inflation targeting are welfare-equivalent only in the limit as the shocks become i.i.d. The upshot is that the persistence of the underlying fundamental uncertainty matters: depending on the nature of the shock, policy responses need to be either more or less aggressive as persistence increases.
Keywords: Real shocks; Persistence; Overlapping generations; Random relocation model; Monetary targeting; Inflation targeting

Chadha, Jagjit S. and Holly, Sean (2010). Journal of Economic Dynamics and Control 34(8), 1343-1358. http://www.sciencedirect.com/science/article/B6V85-4Y95TWS-2/2/1caf4c66e594d1873dee2ffb2b478d93
Many have questioned the empirical relevance of the Calvo-Yun model. This paper adds a term structure to three widely studied macroeconomic models (Calvo-Yun, hybrid and Svensson). We back out from observations on the yield curve the underlying macroeconomic model that most closely matches the level, slope and curvature of the yield curve. With each model we trace the response of the yield curve to macroeconomic shocks. We assess the fit of each model against the observed behaviour of interest rates and find limited support for the Calvo-Yun model in terms of fit with the observed yield curve; we find some support for the hybrid model, but the Svensson model performs best.
Keywords: Macromodels; Yield curve; Persistence

Amstad, Marlene and Fischer, Andreas M. (2010). Journal of Economic Dynamics and Control 34(7), 1202-1213. http://www.sciencedirect.com/science/article/B6V85-4Y9XKVB-1/2/e7358fe5a0f5f5b1471431f5d9556583
This paper estimates monthly pass-through ratios from import prices to consumer prices in real time. Conventional time series methods impose restrictions to generate exogenous shocks on exchange rates or import prices when estimating pass-through coefficients. Instead, our estimation strategy follows an event-study approach based on monthly releases of import prices. Projections from a dynamic common factor model with daily panels before and after monthly releases of import prices define the innovation for import prices. We apply our identification procedure to Swiss prices and find strong evidence that the median of the monthly pass-through ratio is around 0.3. Tests show that the standard assumptions of non-real-time data and limited information breadth are critical for the pass-through estimates.
Keywords: Common factors; Pass-through; Real-time data

Cysne, Rubens Penha and Turchick, David (2010). Journal of Economic Dynamics and Control 34(6), 1015-1030. http://www.sciencedirect.com/science/article/B6V85-4Y65S8W-2/2/b3a47af22096bb54f0671a1886183af7
Most estimates of the welfare costs of inflation are devised considering only noninterest-bearing assets, ignoring that since the 1980s technological innovations and new regulations have increased the liquidity of interest-bearing deposits. We investigate the resulting bias. Sufficient and necessary conditions on its sign are presented, along with closed-form expressions for its magnitude. Two examples dealing with bidimensional bilogarithmic money demands show that disregarding interest-bearing monies may lead to a non-negligible overestimation of the welfare costs of inflation. An intuitive explanation is that such assets may partially make up for the decreased demand for noninterest-bearing assets due to higher inflation.
Keywords: Welfare; Inflation; Money demand; Divisia index; Interest-bearing monies
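The welfare costs of inflation in the Cysne-Turchick abstract are usually computed, in the one-asset case they criticize, as Bailey's area under the inverse money demand curve. Under a bilogarithmic demand m(i) = A*i**(-eta) this has a closed form, which the sketch below checks numerically; A and eta are illustrative, roughly Lucas-style, assumptions, and this is the biased one-asset baseline rather than the paper's two-asset correction.

```python
from scipy.integrate import quad

# Bailey-style welfare cost of a nominal rate i under m(i) = A * i**(-eta):
#   w(i) = integral_0^i m(x) dx - i*m(i) = A * (eta/(1-eta)) * i**(1-eta).
A, eta = 0.05, 0.5
m = lambda i: A * i ** (-eta)

def welfare_cost(i):
    area, _ = quad(m, 0.0, i)        # quad handles the integrable singularity at 0
    return area - i * m(i)

for i in (0.03, 0.06, 0.14):
    closed = A * eta / (1.0 - eta) * i ** (1.0 - eta)
    print(f"i={i:.2f}: numeric {welfare_cost(i):.5f}, closed form {closed:.5f}")
```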
Kumhof, Michael (2010). Journal of Economic Dynamics and Control 34(8), 1403-1420. http://www.sciencedirect.com/science/article/B6V85-4YWB28P-1/2/6c4fc5a900e6928825713ecc6d86b3c7
Standard theory finds that, given uncovered interest parity, sterilized foreign exchange intervention should not affect equilibrium prices and quantities. This paper shows that when, as in the data, taxation is not sufficiently flexible in response to spending shocks, uncovered interest parity is replaced by a monotonically increasing relationship between the stock of domestic currency government debt and domestic interest rates. Sterilized intervention then becomes a second independent monetary policy instrument that affects portfolios, interest rates, exchange rates and consumption. It should be most effective in developing countries, where fiscal spending volatility is large and domestic currency government debt is small.
Keywords: Uncovered interest parity; Imperfect asset substitutability; Portfolio balance models; Sterilized foreign exchange intervention

Huang, Weihong (2010). Journal of Economic Dynamics and Control 34(8), 1442-1455. http://www.sciencedirect.com/science/article/B6V85-4YVJ3S2-3/2/304b13a92d8437ef757ec5430fde45e1
Adaptive behavior has been observed in almost all aspects of the real world. One of the main advantages of acting adaptively is its stabilizing effect on dynamic equilibrium, associated with which are three favorable features: (a) non-destabilizing characteristics, (b) low-speed effectiveness and (c) the convexity of the stabilization regime in terms of the adaptive parameter. It is shown either in theory or by counter-examples that these advantages may not be preserved if the adaptive mechanism is applied to multi-dimensional processes. The necessary and sufficient conditions for the relevant phenomena are provided for two-dimensional dynamic processes with application to duopolistic dynamics. Our findings not only help to clarify hidden misconceptions and prevent potential abuse of adaptive mechanisms, but also illustrate the possible pitfalls arising from generalizing well-known characteristics of low-dimensional and/or homogeneous-agent models to high-dimensional and heterogeneous-agent models.
Keywords: Adaptive strategy; Adaptive learning; Adaptive adjustment; Stability; Adaptive behavior; Dynamics; Heterogeneous agent models
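The scalar adaptive adjustment mechanism whose multi-dimensional limits Huang studies is x[t+1] = (1-lam)*f(x[t]) + lam*x[t]. The sketch below shows the familiar one-dimensional stabilization result on the chaotic logistic map, where the fixed point becomes locally stable once lam exceeds 1/3; it illustrates the scalar case only, not the paper's two-dimensional counter-examples.

```python
# Adaptive adjustment of the chaotic logistic map f(x) = 4x(1-x). The fixed
# point x* = 3/4 has f'(x*) = -2, so the adjusted map is locally stable iff
# |(1-lam)*(-2) + lam| < 1, i.e. 1/3 < lam < 1.
f = lambda x: 4.0 * x * (1.0 - x)

def orbit(lam, x0=0.123, T=2000):
    x = x0
    for _ in range(T):
        x = (1.0 - lam) * f(x) + lam * x
    return x

print(orbit(0.0))   # no adjustment: chaotic, essentially arbitrary in (0, 1)
print(orbit(0.6))   # cautious adjustment: converges to the fixed point 0.75
```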
Ewald, Christian-Oliver, Menkens, Olaf and Ting, Sai Hung Marten (2013). Journal of Economic Dynamics and Control 37(5), 1001-1018. http://www.sciencedirect.com/science/article/pii/S0165188913000146
We show that Australian options are equivalent to fixed or floating strike Asian options, and consequently that by studying Asian options from the Australian perspective, and vice versa, much can be gained. One specific application of this "Australian approach" leads to a natural dimension reduction for the pricing PDE of Asian options, with or without stochastic volatility, featuring time-independent coefficients. Another application lies in the improvement of Monte Carlo schemes, where the "Australian approach" results in a path-independent method. We also show how the Milevsky and Posner (1998) result on the reciprocal Γ-approximation for Asian options can be quickly obtained by using the connection to Australian options. Further, we present an analytical (exact) pricing formula for Australian options and adapt a result of Carr et al. (2008) to show that the price of an Australian call option is increasing in the volatility, thereby answering a standing question of Moreno and Navas (2008).
Keywords: Asset pricing; Derivatives; Asian options; Quanto options; Dollar cost averaging (DCA); Numerical methods. JEL: G12, G13, C63
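The baseline that the "Australian approach" improves upon is a plain path-dependent Monte Carlo pricer for an arithmetic-average Asian call. The sketch below implements that baseline under Black-Scholes dynamics with illustrative parameters; it is the standard scheme, not the paper's path-independent method.

```python
import numpy as np

# Plain Monte Carlo for an arithmetic-average, fixed-strike Asian call under
# Black-Scholes dynamics. All parameter values are illustrative.
rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_steps, n_paths = 252, 20_000
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
payoff = np.maximum(S.mean(axis=1) - K, 0.0)   # average price along each path
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Asian call ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```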
This paper studies implementation of the social optimum in a model of addictive consumption. We consider corrective taxes that address inefficiencies due to negative externalities, imperfect competition, and self-control problems. Our setup allows us to evaluate how such taxes are affected by (i) market power and (ii) a requirement for implementation to be time consistent. Together, these features can imply significantly lower taxes. We provide a general characterization of the optimal tax rule and illustrate it with two examples.
Keywords: Dynamic externalities; Internalities; Addiction; Optimal taxation; Time consistent implementation. JEL: H55, D72, D91, E62. Vol. 37(5), 2013, pp. 984–1000. Authors: Bossi, Luca; Calcott, Paul; Petkov, Vladimir. http://www.sciencedirect.com/science/article/pii/S0165188913000171

We provide sufficient conditions for existence and uniqueness of a monotone, Lipschitz continuous Markov stationary Nash equilibrium (MSNE), and characterize its associated stationary Markov equilibrium, in a class of intergenerational paternalistic altruism models with stochastic production. Our methods are constructive, emphasize both order-theoretic and geometrical properties of nonlinear fixed point operators, and relate our results to the construction of globally stable numerical schemes that compute approximate Markov equilibria in our models. Our results provide a new catalog of tools for the rigorous analysis of MSNE on minimal state spaces for OLG economies with stochastic production and limited commitment.
Keywords: Stochastic games; Constructive methods; Intergenerational altruism. JEL: C62, C73, D91. Vol. 37(5), 2013, pp. 1019–1039. Authors: Balbus, Łukasz; Reffett, Kevin; Woźny, Łukasz. http://www.sciencedirect.com/science/article/pii/S0165188913000134

This paper discusses two variations on the optimal lending contract under asymmetric information studied in Clementi and Hopenhayn (2006). One variation assumes that the entrepreneur is less patient than the bank; the other assumes the bank has limited commitment. The qualitative properties of the two modified contracts are very similar. In particular, both variations lead to borrowing constraints that are always binding, such that the firm is financially constrained throughout its life cycle and subject to a positive probability of eventual liquidation.
Keywords: Optimal lending contract; Borrowing constraints; Asymmetric information; Limited commitment; Impatient entrepreneur. JEL: G3, L2, D21. Vol. 37(5), 2013, pp. 964–983. Authors: Li, Shuyun May. http://www.sciencedirect.com/science/article/pii/S0165188913000110

This paper systematically examines the interrelations between a progressive income tax schedule and macroeconomic (in)stability in an otherwise standard one-sector real business cycle model with productive government spending. We analytically show that the economy exhibits indeterminacy and sunspots if and only if the equilibrium after-tax wage-hours locus is positively sloped and steeper than the household's labor supply curve. Unlike in the framework with useless public expenditures, a less progressive tax policy may operate like an automatic stabilizer that mitigates belief-driven cyclical fluctuations. Moreover, our quantitative analysis shows that this result provides a theoretically plausible explanation for the discernible reduction in US output volatility after the Tax Reform Act of 1986 was implemented.
Keywords: Progressive income taxation; Equilibrium (in)determinacy; Productive government spending; Business cycles. JEL: E32, E62. Vol. 37(5), 2013, pp. 951–963. Authors: Chen, Shu-Hua; Guo, Jang-Ting. http://www.sciencedirect.com/science/article/pii/S0165188913000122

This paper studies the effects of multiple investment horizons and investors' bounded rationality on price dynamics. We consider a market with one risky asset in which agents maximize expected utility of wealth over discrete investment periods. Investors' demand for the risky asset may depend on historical returns, so that our model encompasses a wide range of behaviorist patterns. Stochastic properties of the returns process are established analytically and illustrated by simulation. The links between dynamic patterns in returns and different types of investment behavior are explored in the heterogeneous agents framework. We find that conditional volatility of returns cannot be constant in many generic situations, especially if agents with different investment horizons operate in the market. In the latter case, the return process can display conditional heteroscedasticity even if all investors are so-called "fundamentalists" whose demand for the risky asset is subject to exogenous iid shocks. We show that heterogeneity of investment horizons can contribute to the explanation of different stylized patterns in stock returns, in particular mean reversion and volatility clustering.
Keywords: Asset pricing; Heterogeneous agents; Multiple investment scales; Volatility clustering. JEL: G12, G11, D84. Vol. 37(5), 2013, pp. 1040–1065. Authors: Chauveau, Th.; Subbotin, A. http://www.sciencedirect.com/science/article/pii/S0165188913000195

We solve for the time consistent dynamic asset allocation of an investor with a mean-variance objective function in a multiple-assets affine setting. We use as a benchmark the pre-commitment strategy widely used in the literature and assess the potential welfare gains from pre-commitment by comparing the time consistent strategy to the pre-commitment, time inconsistent, strategy.
The gains from pre-commitment appear considerable: in some cases, at the 5-year horizon, the yearly certainty equivalent of the pre-commitment strategy is 48%, compared with 9% for the time consistent strategy. However, these welfare gains result from huge and unrealistic positions in the risky assets; in some cases, the risky asset position of the pre-commitment strategy is more than 60 times that of the time consistent strategy. We therefore look for alternative time inconsistent strategies that improve on the time consistent strategy while still involving reasonable risky asset positions. To identify such strategies, we explore an original aspect of the time consistent mean-variance strategy: the presence of intertemporal hedging in such a strategy reflects welfare degradation. A natural candidate is therefore the time consistent strategy without its intertemporal hedging component. The second component of the time consistent strategy is the traditional myopic component, discounted. We show that this component can be seen as a standard myopic strategy that is marked to market, with the discount factor acting as a tailing factor. This marked to market myopic (MMM) strategy is shown to yield reasonable risky asset positions and substantial welfare gains at long horizons relative to the time consistent strategy. We also show that it dominates the standard myopic strategy as well as the equally weighted strategy.
Keywords: Mean-variance preferences; Dynamic asset allocation; Intertemporal hedging; Predictability; Value and growth investment. JEL: D11, D12, G11. Vol. 37(5), 2013, pp. 1066–1096. Authors: Lioui, Abraham. http://www.sciencedirect.com/science/article/pii/S0165188913000158

A survey of the economic impact of climate change and the marginal damage costs shows that carbon dioxide emissions are a negative externality. The estimated Pigou tax and its growth rate are too low to justify the climate policy targets set by political leaders. A lower discount rate or greater concern for the global distribution of income would justify more stringent climate policy, but would imply an overhaul of other public policies. Catastrophic risk justifies more stringent climate policy, but only to a limited extent.
Keywords: Climate change; Climate policy; First-best. JEL: Q54. Vol. 37(5), 2013, pp. 911–928. Authors: Tol, Richard S.J. http://www.sciencedirect.com/science/article/pii/S0165188913000092

This paper shows that non-convex costs of financial adjustment are quantitatively relevant for explaining firm dynamics. First, empirically, financial activity is lumpy, more so than investment activity. Second, non-convex costs are necessary, in the context of a dynamic investment and financing model, to rationalize this lumpiness. Two versions of the model, with and without non-convex costs, are compared. Only the version with non-convex costs replicates the dynamics in the data, generating financial lumpiness higher than investment lumpiness. Other predictions of the model with respect to investment and finance are discussed.
Keywords: Financial frictions; External financing costs; Investment; Dynamic trade-off model; Financial lumpiness. JEL: E22, E42, E44, G31, G32, G33. Vol. 37(5), 2013, pp. 929–950. Authors: Bazdresch, Santiago. http://www.sciencedirect.com/science/article/pii/S0165188913000109

We show that in a one-sector monetary endogenous growth model under real interest rate targeting, the local stability properties of the economy's balanced growth path depend crucially on the exact formulation of the cash-in-advance constraint and the degree of productive externalities. In particular, when a positive fraction (including 100%) of gross investment is subject to the liquidity constraint, the model exhibits indeterminacy and sunspots if and only if the equilibrium wage-hours locus is positively sloped and steeper than the labor supply curve. On the other hand, when real money balances are required only for the household's consumption purchases, the economy always displays saddle-path stability and equilibrium uniqueness, regardless of the strength of productive externalities.
Keywords: Real interest rate targeting; Endogenous growth; Cash-in-advance constraint; Indeterminacy. Vol. 33(9), 2009, pp. 1631–1638. Authors: Chin, Chi-Ting; Guo, Jang-Ting; Lai, Ching-Chong. http://www.sciencedirect.com/science/article/B6V85-4VW550C-1/2/0f0e7e6e0473d527d2d6e6ce1459e9f1

This paper revisits a widely adopted approach to robust decision making developed by Hansen and Sargent (2003, 2008), henceforth HS, and applies it to monetary policy design in the face of model uncertainty. We pay particular attention to two issues. First, we distinguish three possible forms of the implied game between malign nature and the policymaker in the HS procedure, each leading to different robust and approximating equilibria. Second, we impose the zero lower bound (ZLB) constraint on the nominal interest rate. We show that the ZLB constraint has serious consequences for a policymaker pursuing HS-type robustness, especially when accompanied by an inability to commit.
Keywords: Robustness; Unstructured uncertainty; Commitment; Zero lower bound interest rate constraint. Vol. 34(3), 2010, pp. 456–471. Authors: Levine, Paul; Pearlman, Joseph. http://www.sciencedirect.com/science/article/B6V85-4XFGJ7N-1/2/73f095ab90362b4e9a696931a20f252c

This paper develops a simple two-country, two-good model of international trade and borrowing that suppresses all previous sources of current account dynamics. Under rational expectations, international debt follows a random walk. Under adaptive learning, however, the model's unit root is eliminated and international debt is either a stationary or an explosive process, depending on agents' specific learning algorithm. Some stationary learning algorithms result in debt following an AR(1) process with an autoregressive coefficient less than 0.8. Because unit roots are a common and problematic feature of many international business cycle models, our results offer a new approach for generating stationarity.
Keywords: Current account; International debt movements; Expectations; Adaptive learning. Vol. 34(2), 2010, pp. 179–190. Authors: Davies, Ronald B.; Shea, Paul. http://www.sciencedirect.com/science/article/B6V85-4X6FNGV-1/2/d3d57457aa0fc0c95a31112403ec6896
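To illustrate the distinction this abstract turns on, a minimal simulation contrasting a unit-root debt process with the stationary AR(1) that adaptive learning can deliver. The coefficient 0.8 echoes the bound quoted in the abstract; the noise scale and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma = 5000, 1.0
eps = rng.normal(0.0, sigma, T)

# Unit root (rational expectations benchmark): b_t = b_{t-1} + eps_t
rw = np.cumsum(eps)

# Stationary AR(1) (one possible outcome under adaptive learning):
# b_t = rho * b_{t-1} + eps_t with rho < 1
rho = 0.8
ar1 = np.empty(T)
ar1[0] = eps[0]
for t in range(1, T):
    ar1[t] = rho * ar1[t - 1] + eps[t]

# The random walk's sample variance grows with the window; the AR(1)'s
# settles near its stationary value sigma^2 / (1 - rho^2).
for name, x in [("random walk", rw), ("AR(1)", ar1)]:
    print(f"{name:12s} var(first half) = {x[:T//2].var():9.1f}   "
          f"var(second half) = {x[T//2:].var():9.1f}")
print(f"AR(1) stationary variance: {sigma**2 / (1 - rho**2):.2f}")
```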
We propose a method to solve models with heterogeneous agents and aggregate uncertainty. The law of motion describing aggregate behavior is obtained by explicitly aggregating the individual policy rule. The algorithm is simpler and faster than existing algorithms that rely on parameterization of the cross-sectional distribution and/or a computationally intensive simulation step. Explicit aggregation establishes a link between the individual policy rule and the set of necessary aggregate state variables, an insight that can be helpful in determining which state variables to include in other algorithms as well.
Keywords: Numerical solutions; Projection methods. Vol. 34(1), 2010, pp. 69–78. Authors: Den Haan, Wouter J.; Rendahl, Pontus. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-3/2/61e5c3a39cfd60e48c9badd0d19ce63b

This paper develops an endogenous growth model of occupational choice with overlapping generations heterogeneous in entrepreneurial ability. While an increase in the number of entrepreneurs creates a growth-enhancing variety effect, the reduced overall quality of entrepreneurial ability retards growth. As a result, the number of entrepreneurs and output growth need not be positively related in response to changes in the ability distribution. While cheaper financial operation and higher manufacturing productivity are both growth-enhancing, they have different effects on equilibrium factor prices and equilibrium financial markups. Additionally, the long-run growth consequences of subsidies to entrepreneurship and credit-market imperfections are studied.
Keywords: Occupational choice; Entrepreneurial ability; Distribution and growth. Vol. 34(3), 2010, pp. 522–541. Authors: Jiang, Neville; Wang, Ping; Wu, Haibin. http://www.sciencedirect.com/science/article/B6V85-4XHCHWT-3/2/4044301ebf35be162547426e01a35c8a

This article describes an approach to computing the version of the stochastic growth model with idiosyncratic and aggregate risk that relies on collapsing the aggregate state space down to a small number of moments used to forecast future prices. One innovation relative to most of the literature is the use of a non-stochastic simulation routine.
Keywords: Idiosyncratic risk; Business cycles; Numerical methods. Vol. 34(1), 2010, pp. 36–41. Authors: Young, Eric R. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-4/2/602054d13483daea5f5da0ed606e0616

In this paper, we propose an empirical model based on the heterogeneous agents literature. Price changes are induced by fundamental, technical, and international factors. The model is estimated for Hong Kong and Thailand surrounding the Asian crisis. We find that the three sources are relevant and that their relative price impact fluctuates conditional on price impact in the previous period. The results imply that the crisis was triggered in Thailand by an increased focus on the fundamental price, followed by an increase in chartism, and finally aggravated by a focus on foreign developments. Furthermore, the crisis deepened in Hong Kong because of increased attention to foreign markets.
Keywords: Heterogeneous expectations; Contagion; Asian crisis; Dynamic models. Vol. 33(11), 2009, pp. 1929–1944. Authors: de Jong, Eelke; Verschoor, Willem F.C.; Zwinkels, Remco C.J. http://www.sciencedirect.com/science/article/B6V85-4WNB503-1/2/2e137b6b2812665ba6f235582edc0b4b
This paper considers the relationship between tax competition and growth in an endogenous growth model where there are stochastic shocks to productivity, and capital taxes fund a public good which may be for final consumption or an infrastructure input. Absent stochastic shocks, decentralized tax setting (two or more jurisdictions) maximizes the rate of growth, as the constant returns to scale present with endogenous growth imply "extreme" tax competition. Stochastic shocks imply that households face a portfolio choice problem, which dampens tax competition and may raise taxes above the centralized level. Growth can be lower with decentralization. Our results also predict a negative relationship between output volatility and growth under decentralization.
Keywords: Tax competition; Uncertainty; Stochastic growth. Vol. 34(2), 2010, pp. 191–206. Authors: Koethenbuerger, Marko; Lockwood, Ben. http://www.sciencedirect.com/science/article/B6V85-4X7GMB3-1/2/23bbefce6332052aac301f77c7c8752d

We give a full characterization of the open-loop Nash equilibrium of a nonrenewable resource game between two types of firms differing in extraction costs. We show that (i) there almost always exists a phase where both types of firms supply simultaneously, (ii) when the high-cost mines are exploited by a number of firms that goes to infinity, the equilibrium approaches the cartel-versus-fringe equilibrium with the fringe firms acting as price takers, and (iii) the cheaper resource may not be exhausted first, a violation of the Herfindahl rule that may be detrimental to social welfare.
Keywords: Nonrenewable resources; Nash equilibrium; Cartel versus fringe; Open loop. Vol. 33(11), 2009, pp. 1867–1879. Authors: Benchekroun, Hassan; Halsema, Alex; Withagen, Cees. http://www.sciencedirect.com/science/article/B6V85-4WDNKR5-1/2/360feabfcf9a9a71501c196d566f04a2

This paper proposes a model of Schumpeterian endogenous growth incorporating the role of market imperfections that exist due to adverse selection between investors that finance R&D and entrepreneurs that perform R&D. There is a distribution of agents indexed by a skill factor that determines one's average productivity at performing research. An entrepreneur starts up a research venture by borrowing from an investor that funds R&D so as to invent new goods. Skill is private information, creating an adverse selection problem for the investor, who designs a truth-telling mechanism. We show that an increase in mean skill enhances growth, as it leads to greater R&D productivity and investment, while an increase in the dispersion of the skill distribution dampens growth, as it makes the adverse selection problem between investors and entrepreneurs more severe. The growth rate would double in the absence of adverse selection. The R&D investment of the average-size firm must be subsidized threefold for the negative adverse selection effect to be nullified. We provide U.S. industry-level and European sector-level evidence in favor of the positive scale effect and negative adverse selection effect, using the firm size distribution (FSD) to proxy for the entrepreneurial skill distribution.
Keywords: Asymmetric information; Mechanism design; Innovation; Technological change. Vol. 33(7), 2009, pp. 1419–1436. Authors: Plehn-Dujowich, Jose M. http://www.sciencedirect.com/science/article/B6V85-4VNKGR3-1/2/ab25afe6c1ac287349b4380402d13861
We formulate a two-country, two-good, two-factor, two-period-lived overlapping generations model to examine how population aging determines the pattern of and gains from trade. Two main results are obtained. First, in the free trade steady state, the aging country endogenously becomes a small country exporting the capital-intensive good, whereas the younger country endogenously dominates the world economy, determining world prices. Second, although uncompensated free trade cannot be Pareto superior to autarky, there exists a compensation scheme applied within each country such that free trade is Pareto superior to autarky.
Keywords: Aging and trade; Gains from trade; Overlapping generations model; Transitional dynamics; Compensation scheme. Vol. 33(8), 2009, pp. 1531–1542. Authors: Naito, Takumi; Zhao, Laixun. http://www.sciencedirect.com/science/article/B6V85-4VRX638-1/2/8e4710ccc10cd1cba1671016d97da09f

This paper compares numerical solutions to the model of Krusell and Smith [1998. Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy 106, 867-896] generated by different algorithms. The algorithms have very similar implications for the correlations between different variables. Larger differences are observed for (i) the unconditional means and standard deviations of individual variables, (ii) the behavior of individual agents during particularly bad times, (iii) the volatility of the per capita capital stock, and (iv) the behavior of the higher-order moments of the cross-sectional distribution. For example, the two algorithms that differ the most from each other generate individual consumption series that have an average (maximum) difference of 1.63% (11.4%).
Keywords: Numerical solutions; Approximations. Vol. 34(1), 2010, pp. 4–27. Authors: Den Haan, Wouter J. http://www.sciencedirect.com/science/article/B6V85-4X01P6T-2/2/944569cdddfbca9b2444dde01e71e390

Income per capita in some Western European countries more than tripled in the two and a half decades that followed World War II. The literature has identified several factors behind this outstanding growth episode, specifically: structural change, the Marshall Plan combined with the public provision of infrastructure, the surge of intra-European trade, and the reconstruction process that followed the war. This paper is an attempt to formalize and quantify the contribution of each one of these factors to post-war growth. Our results highlight the importance of reconstruction growth and structural change, and point to the limited role of the Marshall Plan and the late contribution of intra-European trade.
Keywords: Economic growth; European economic history 1913–; CGE models. Vol. 33(7), 2009, pp. 1437–1450. Authors: Alvarez-Cuadrado, Francisco; Pintea, Mihaela I. http://www.sciencedirect.com/science/article/B6V85-4VP665P-2/2/67c83d232e8275ebc7575840d7fa40b5
Estimating linear rational expectations models in a limited-information setting requires replacing the expectations of future, endogenous variables either with instrumented, actual values or with forecast survey data. Applying the method of Gottfries and Persson [Empirical examinations of the information sets of economic agents. Quarterly Journal of Economics 103, 251-259], I show how to augment these methods with actual, future values of the endogenous variables to improve statistical efficiency. The method is illustrated with an application to the US hybrid new Keynesian Phillips curve, where traditional, lagged instruments and the median forecast from the Survey of Professional Forecasters both appear to miss significant information used by price-setters, so that forecast pooling with actual values improves the statistical fit to inflation.
Keywords: Forecast pooling; Recursive projection; New Keynesian Phillips curve. Vol. 33(11), 2009, pp. 1858–1866. Authors: Smith, Gregor W. http://www.sciencedirect.com/science/article/B6V85-4W8VVYD-2/2/c4e35256dfc6c71ae26ea44f79b14de8

Production capital and total factor productivity or technology are fundamental to understanding output and productivity growth, but are unobserved except at disaggregated levels and must be estimated before being used in empirical analysis. In this paper, we develop estimates of production capital and technology for U.S. total manufacturing based on an estimated dynamic structural economic model. First, using annual U.S. total manufacturing data for 1947-1997, we estimate by maximum likelihood a dynamic structural economic model of a representative production firm. In the estimation, capital and technology are completely unobserved or latent variables. Then, we apply the Kalman filter to the estimated model and the data to compute estimates of model-based capital and technology for the sample. Finally, we describe and evaluate similarities and differences between the model-based and standard estimates of capital and technology reported by the Bureau of Labor Statistics.
Keywords: Kalman filter estimation of latent variables. Vol. 33(7), 2009, pp. 1398–1418. Authors: Chen, Baoline; Zadrozny, Peter A. http://www.sciencedirect.com/science/article/B6V85-4VJ4WJJ-1/2/9da7fdc2df3fec7c680a43654bfb67d8
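A minimal sketch of the filtering step this abstract relies on: a scalar linear-Gaussian state space model with a latent state (standing in for unobserved technology) filtered from noisy observations. The model, its coefficients, and the data are invented for illustration; the paper's actual state space is multivariate and estimated by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent state x_t (e.g., unobserved technology): x_t = a x_{t-1} + w_t
# Observation y_t (e.g., measured output):        y_t = c x_t + v_t
a, c, q, r, T = 0.95, 1.0, 0.1, 0.5, 200
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = c * x[t] + rng.normal(0, np.sqrt(r))

# Standard Kalman filter recursion for the latent state.
xhat, P = 0.0, 1.0
xhats = np.zeros(T)
for t in range(T):
    # Predict one step ahead
    xpred, Ppred = a * xhat, a * P * a + q
    # Update with observation y_t
    K = Ppred * c / (c * Ppred * c + r)        # Kalman gain
    xhat = xpred + K * (y[t] - c * xpred)
    P = (1 - K * c) * Ppred
    xhats[t] = xhat

rmse = np.sqrt(np.mean((xhats - x) ** 2))
print(f"RMSE of filtered state vs truth: {rmse:.3f} "
      f"(observation noise sd: {np.sqrt(r):.3f})")
```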
In the absence of forward-looking models for recovery rates, market participants tend to use exogenously assumed constant recovery rates in pricing models. We develop a flexible jump-to-default model that uses observables (the stock price and stock volatility, in conjunction with credit spreads) to identify implied, endogenous, dynamic functions of the recovery rate and default probability. The model is parsimonious and requires the calibration of only three parameters, enabling the identification of the risk-neutral term structures of forward default probabilities and recovery rates. Empirical application of the model shows that it is consistent with stylized features of recovery rates in the literature. The model is flexible: it may be used with different state variables and alternative recovery functional forms, and calibrated to multiple debt tranches of the same issuer. The model is robust: it evidences parameter stability over time, is stable to changes in inputs, and provides similar recovery term structures for different functional specifications. Given that the model is easy to understand and calibrate, it may be used to further the development of credit derivatives indexed to recovery rates, such as recovery swaps and digital default swaps, as well as to provide recovery rate inputs for the implementation of Basel II.
Keywords: Credit default swaps; Recovery; Default probability; Reduced form. Vol. 33(11), 2009, pp. 1837–1857. Authors: Das, Sanjiv R.; Hanouna, Paul. http://www.sciencedirect.com/science/article/B6V85-4W8VVYD-1/2/480e4555dc0e9b9ef298e2c0911c7787

This paper analyzes an alternating offer model of bargaining over the sale of an asset in a market, such as that for housing, in which another agent may arrive and compete for the right to strike a deal. The analysis allows the buyer and seller to hold possibly differing views as to how likely such competition is; hence the buyer and the seller disagree about their respective bargaining powers. These views adjust to market realizations as the parties learn. It is shown that there exists a unique subgame perfect equilibrium which can be explicitly constructed; hence, conditional on market conditions, equilibrium prices and optimal stall lengths (that is, delay) can be found. Bargaining delay can occur only if there is optimism (not pessimism) and only if the parties are open to learning as time elapses. This delay can occur even for very small levels of optimism, and the delay can last for economically significant periods.
Keywords: Optimism; Bargaining delay; Asset sales; House sales; Bargaining power. Vol. 34(2), 2010, pp. 101–120. Authors: Thanassoulis, John. http://www.sciencedirect.com/science/article/B6V85-4WD7B2K-1/2/53887f93e92bb8fbebed52c08c3dc3f2

This paper describes the first model considered in the computational suite project that compares different numerical algorithms. It is an incomplete markets economy with a continuum of agents and an inequality (borrowing) constraint.
Keywords: Numerical solutions; Simulations; Approximations. Vol. 34(1), 2010, pp. 1–3. Authors: Den Haan, Wouter J.; Judd, Kenneth L.; Juillard, Michel. http://www.sciencedirect.com/science/article/B6V85-4X01P6T-1/2/6c88c795995fed3eba523854830290f4

In this paper, I present a model in which firm-level uncertainty raises aggregate productivity growth. The mechanism is learning-by-doing in the research sector: firms undertake research to reduce uncertainty, which results in social knowledge accumulation that improves the productivity of future research. The model explains the positive correlation between TFP growth and dispersion in manufacturing industries.
Keywords: Firm-level uncertainty; Knowledge accumulation; TFP growth and dispersion; Bayesian updating. Vol. 34(5), 2010, pp. 897–912. Authors: Oikawa, Koki. http://www.sciencedirect.com/science/article/B6V85-4Y05DF9-1/2/1e491377fc0274d0c3285466e47a76de

This paper examines the quantitative relationship between the elasticity of capital-labor substitution in production and the conditions needed for equilibrium indeterminacy (and belief-driven fluctuations) in a one-sector growth model. With variable capital utilization, the substitution elasticity has little quantitative impact on the minimum degree of increasing returns needed for indeterminacy. However, when capital utilization is constant, a below-unity substitution elasticity sharply raises the minimum degree of increasing returns because it imposes a higher effective adjustment cost on labor hours. Overall, our results show that empirically plausible departures from the Cobb-Douglas production specification can make indeterminacy more difficult to achieve.
Keywords: Capital-labor substitution; Equilibrium indeterminacy; Capital utilization; Real business cycles; Sunspots. Vol. 33(12), 2009, pp. 1991–2000. Authors: Guo, Jang-Ting; Lansing, Kevin J. http://www.sciencedirect.com/science/article/B6V85-4WP47KN-1/2/7664cfb1f18c988304296f3bf2176852
This paper is the first attempt to fit a stochastic cusp catastrophe model to stock market data. We show that the cusp catastrophe model explains the crash of stock exchanges much better than other models. Using data on U.S. stock markets, we demonstrate that the crash of October 19, 1987, may be better explained by cusp catastrophe theory, which is not true for the crash of September 11, 2001. With the help of sentiment measures, such as the index put/call options ratio and trading volume (the former modeling the chartists, the latter the fundamentalists), we find that the 1987 returns are bimodal, and the cusp catastrophe model fits these data better than alternative models. We may therefore say that the crash was led by internal forces. The causes of the 2001 crash, however, are external, which is also evident in the much weaker presence of bifurcations in the data. In this case, alternative models explain the crash of stock exchanges better than the cusp catastrophe model.
Keywords: Stochastic cusp catastrophe; Bifurcations; Singularity; Nonlinear dynamics; Stock market crash. Vol. 33(10), 2009, pp. 1824–1836. Authors: Barunik, J.; Vosvrda, M. http://www.sciencedirect.com/science/article/B6V85-4W8VVYD-3/2/2eea78b69386e10727edc7d8d4ddca49
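A minimal sketch of the dynamics behind a stochastic cusp catastrophe model, assuming the canonical cusp potential V(x) = x^4/4 - b x^2/2 - a x, whose gradient flow has one or two stable equilibria depending on the control parameters. The Euler-Maruyama discretization and all parameter values are illustrative, not taken from the paper, which fits the model to returns data.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cusp(a, b, sigma=0.4, dt=0.01, T=100_000, x0=0.0):
    """Euler-Maruyama for dx = -(x^3 - b*x - a) dt + sigma dW."""
    x = np.empty(T)
    x[0] = x0
    for t in range(1, T):
        drift = -(x[t - 1] ** 3 - b * x[t - 1] - a)
        x[t] = (x[t - 1] + drift * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Splitting factor b: for b > 0 (and small |a|) the potential has two wells,
# so the stationary density is bimodal, the signature the paper looks for.
for b in [-1.0, 1.5]:
    x = simulate_cusp(a=0.0, b=b)
    print(f"b={b:+.1f}  mean={x.mean():+.3f}  time with x>0: "
          f"{(x > 0).mean():.2f}  "
          f"({'bimodal' if b > 0 else 'unimodal'} stationary density)")
```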
Models of the monetary transmission mechanism often generate empirically implausible business fluctuations. This paper analyzes the role of on-the-job search in the propagation of monetary shocks in a sticky price model with labor market search frictions. Such frictions induce long-term employment relationships, such that the real marginal cost is determined by real wages and the cost of an employment relationship. On-the-job search opens up an extra channel of employment growth that dampens the response of these two components. Because real marginal cost rigidity induces small price adjustments, on-the-job search gives rise to a strong propagation of monetary shocks that increases output persistence.
Keywords: On-the-job search; Cost of an employment relationship; Sticky prices; Business fluctuations. Vol. 34(3), 2010, pp. 437–455. Authors: Van Zandweghe, Willem. http://www.sciencedirect.com/science/article/B6V85-4XDCHJC-1/2/5fd7c33aa55e0aa768f4002fb6ecfe75

This paper analyzes the effectiveness of delegation in solving the time inconsistency problem of monetary policy, using a microfounded general equilibrium model where delegation and reappointment are explicitly included in the government's strategy. The method of Chari and Kehoe [1990. Sustainable plans. Journal of Political Economy 98 (4), 783-802] is applied to characterize the entire set of sustainable outcomes. Countering McCallum's [1995. Two fallacies concerning central-bank independence. American Economic Review 85 (2), 207-211] second fallacy, delegation is able to eliminate the time inconsistency problem, with the commitment policy being sustained under discretion for any intertemporal discount rate.
Keywords: Central bank; Monetary policy; Institutional design. Vol. 33(8), 2009, pp. 1617–1629. Authors: Basso, Henrique S. http://www.sciencedirect.com/science/article/B6V85-4VTVPSW-1/2/2a930d27022c986721cb7efe91ee400e

We develop a tractable general theory for the study of the economic and demographic impact of epidemics, notably its distributional consequences. To this end, we build a three-period overlapping generations model where altruistic parents choose optimal health expenditures for their children and themselves. The survival probability of adults and children depends on such investments. Agents can be skilled or unskilled. In this paper, epidemics are modeled as one-period exogenous shocks to the adults' survival rates. We first show that such epidemics have permanent effects on the size of the population and on the level of output; the income distribution, however, is unaltered in the long run. Second, we show that this distribution may be significantly altered in the medium term: in particular, the proportion of the unskilled will necessarily increase at that horizon if orphans are too penalized in the access to education.
Keywords: Epidemics; Orphans; Income distribution; Endogenous survival; Medium-term dynamics. Vol. 34(2), 2010, pp. 231–245. Authors: Boucekkine, Raouf; Laffargue, Jean-Pierre. http://www.sciencedirect.com/science/article/B6V85-4X7GMB3-3/2/813474d388aa67015a6d312bb61f82bd

This paper investigates the interactions between the investment and financing decisions of a firm under manager-shareholder conflicts arising from asymmetric information. In particular, we extend the manager-shareholder conflict problem in a real options model by incorporating debt financing. We show that manager-shareholder conflicts over investment policy increase not only the investment and default triggers but also coupon payments, which leads to a decrease in the equity value. Moreover, given the presence of manager-shareholder conflicts, debt financing increases investment and decreases total social welfare. As a result, there is a trade-off between the efficiency of investment and total social welfare with debt financing. These results fit well with the findings of previous empirical work in this area.
Keywords: Real options; Debt financing; Agency problem; Asymmetric information. Vol. 34(2), 2010, pp. 158–178. Authors: Shibata, Takashi; Nishihara, Michi. http://www.sciencedirect.com/science/article/B6V85-4X49Y9V-1/2/e682069b17c95754acb4213864612245

This paper formalizes the idea that more hedging instruments may destabilize markets when traders have heterogeneous expectations and adapt their behavior according to performance-based reinforcement learning. In a simple asset pricing model with heterogeneous beliefs, the introduction of additional Arrow securities may destabilize markets, and thus increase price volatility, while at the same time decreasing average welfare. We also investigate whether a fully rational agent can employ additional hedging instruments to stabilize markets. It turns out that the answer depends on the composition of the population of non-rational traders and on the information-gathering costs of rationality.
Keywords: Financial innovation; Asset pricing; Hedging; Reinforcement learning; Bifurcations. Vol. 33(11), 2009, pp. 1912–1928. Authors: Brock, W.A.; Hommes, C.H.; Wagener, F.O.O. http://www.sciencedirect.com/science/article/B6V85-4WJBBPJ-1/2/40e5b1ee44776e3b879835e8696d1739
This paper describes a set of algorithms for quickly and reliably solving linear rational expectations models. The utility, reliability and speed of these algorithms are a consequence of (1) the algorithm for computing the minimal-dimension state space transition matrix for models with arbitrary numbers of lags or leads, (2) the availability of a simple modeling language for characterizing a linear model and (3) the use of the QR decomposition and Arnoldi-type eigenspace calculations. The paper also presents new formulae for computing and manipulating solutions for arbitrary exogenous processes.
Keywords: Linear rational expectations; Blanchard-Kahn; Saddle point solution. Vol. 34(3), 2010, pp. 472–489. Authors: Anderson, Gary S. http://www.sciencedirect.com/science/article/B6V85-4XJP3VB-1/2/2970fcba269144417b87a6fc4b368cd4

This paper analyzes the Markov-perfect equilibrium of an economy where a benevolent government, lacking the ability to commit to future policy choices, uses taxes on capital and labor income to finance the provision of a public good. The main finding is that the government taxes capital and subsidizes labor, so that only the dynamic inefficiency of future capital taxes remains. If agents' preference for the public good is sufficiently high, then capital is confiscated. Setting bounds on taxes alleviates the dynamic inefficiency inherent in capital taxation, but some implementations carry a high welfare cost. Allowing for endogenous capital utilization makes the current capital tax distortionary and implies capital and labor tax rates that are relatively close to those measured for the U.S. economy.
Keywords: Time-consistency; Markov-perfect equilibrium; Optimal taxation; Capital tax. Vol. 34(3), 2010, pp. 503–521. Authors: Martin, Fernando M. http://www.sciencedirect.com/science/article/B6V85-4XHCHWT-2/2/f132e8737850625c5ac850743c1a6fee

This note describes how the incomplete markets model with aggregate uncertainty in Den Haan et al. [Comparison of solutions to the incomplete markets model with aggregate uncertainty. Journal of Economic Dynamics and Control, this issue] is solved using standard quadrature and projection methods. This is made possible by linking the aggregate state variables to a parameterized density that describes the cross-sectional distribution. A simulation procedure is used to find the best shape of the density within the class of approximating densities considered. This note compares several simulation procedures in which there is, as in the model, no cross-sectional sampling variation.
Keywords: Numerical solutions; Projection methods; Simulations. Vol. 34(1), 2010, pp. 59–68. Authors: Algan, Yann; Allais, Olivier; Den Haan, Wouter J. http://www.sciencedirect.com/science/article/B6V85-4X0F3P5-1/2/e9e49738b507ae98d607809c22b17b6a
This paper presents a lattice algorithm for pricing both European- and American-style moving average barrier options (MABOs). We develop a finite-dimensional partial differential equation (PDE) model for discretely monitored MABOs and solve it numerically by using a forward shooting grid method. The modeling PDE for continuously monitored MABOs has infinite dimensions and cannot be solved directly by any existing numerical method. We find their approximate values indirectly by using an extrapolation technique with the prices of discretely monitored MABOs. Numerical experiments show that our algorithm is very efficient.
Keywords: Barrier option; Moving average; Lattice algorithm; Forward shooting grid method; Extrapolation. Vol. 34(3), 2010, pp. 542–554. Authors: Dai, Min; Li, Peifan; Zhang, Jin E. http://www.sciencedirect.com/science/article/B6V85-4XH5MF1-1/2/bab10d21859a7d34354377f78e510e02

We use a perturbation method to solve the incomplete markets model with aggregate uncertainty described in Den Haan et al. [Computational suite of models with heterogeneous agents: incomplete markets and model uncertainty. Journal of Economic Dynamics & Control, this issue]. To apply that method, we use a "barrier method" to replace the original problem, with its occasionally binding inequality constraints, by one with only equality constraints. We replace the structure with a continuum of agents by a setting in which a single infinitesimal agent faces prices generated by a representative-agent economy. We also solve a model variant with a large (but finite) number of agents. Our perturbation-based method is much simpler and faster than other methods.
Keywords: Heterogeneous agents; Occasionally binding inequality constraints; Barrier method. Vol. 34(1), 2010, pp. 50–58. Authors: Kim, Sunghyun Henry; Kollmann, Robert; Kim, Jinill. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-2/2/585b7dd21ea624d99d73b60f8a867a57

This paper studies the properties of the solution to the heterogeneous agents model in Den Haan et al. [2009. Computational suite of models with heterogeneous agents: incomplete markets and aggregate uncertainty. Journal of Economic Dynamics and Control, this issue]. To solve for the individual policy rules, we use an Euler-equation method iterating on a grid of pre-specified points. To compute the aggregate law of motion, we use the stochastic-simulation approach of Krusell and Smith [1998. Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy 106, 868-896]. We also compare the stochastic- and non-stochastic-simulation versions of the Krusell-Smith algorithm, and we find that the two versions are similar in terms of their speed and accuracy.
Keywords: Dynamic stochastic models; Heterogeneous agents; Aggregate uncertainty; Euler-equation methods; Simulations; Numerical solutions. Vol. 34(1), 2010, pp. 42–49. Authors: Maliar, Lilia; Maliar, Serguei; Valli, Fernando. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-1/2/d342f8cb89af29c3432db5cca8df2a88
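A sketch of the single step that the stochastic-simulation version of the Krusell-Smith algorithm iterates on: regressing next-period aggregate capital on current aggregate capital, state by state, to update the perceived aggregate law of motion. The simulated series below is synthetic (in a real implementation it would come from aggregating the individual policy rules), and all coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a simulated aggregate capital series, split by the
# aggregate productivity state z in {0 (bad), 1 (good)}.
T = 10_000
z = (rng.random(T) > 0.5).astype(int)
logK = np.empty(T)
logK[0] = np.log(38.0)
true_coef = {0: (0.10, 0.96), 1: (0.14, 0.96)}   # illustrative (b0, b1)
for t in range(T - 1):
    b0, b1 = true_coef[z[t]]
    logK[t + 1] = b0 + b1 * logK[t] + rng.normal(0, 0.001)

# The updating step: per aggregate state, regress log K_{t+1} on log K_t.
# Iterating this regression against fresh simulations until the coefficients
# converge is the fixed point the algorithm looks for.
for state in (0, 1):
    mask = z[:-1] == state
    X = np.column_stack([np.ones(mask.sum()), logK[:-1][mask]])
    y = logK[1:][mask]
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"state z={state}: log K' = {b[0]:.3f} + {b[1]:.3f} log K")
```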
We present a dynamic stochastic general equilibrium (DSGE) New Keynesian model with indivisible labor and a dual labor market: a Walrasian one where wages are fully flexible, and a unionized one characterized by real wage rigidity. We show that the negative effect of a productivity shock on inflation and the positive effect of a cost-push shock are crucially determined by the proportion of firms that belong to the unionized sector: the larger this proportion, the larger are these effects. Consequently, the larger the union coverage, the larger should be the optimal response of the nominal interest rate to exogenous productivity and cost-push shocks. The optimal inflation and output gap volatility increases as the number of unionized firms in the economy increases.
Keywords: Optimal monetary policy; Trade-unions; Real wage rigidity; Taylor rules. Vol. 33(7), 2009, pp. 1469–1489. Authors: Mattesini, Fabrizio; Rossi, Lorenza. http://www.sciencedirect.com/science/article/B6V85-4VR242T-1/2/5bba62e4fe7f9a7326d34e8629a7707f

We explore the timing of the replacement of a manager as an important incentive mechanism, using a real options approach in a situation where the timing of the decision to replace the manager is related to a major change in a firm's strategies that involves spending large amounts of various sunk adjustment costs. Using a continuous-time agency setting, we show that when renegotiation is not possible, the early replacement of the manager of a lower-quality project (prior to the first-best trigger level) occurs only if a moral hazard or an adverse selection problem exists. We also show that the possibility of renegotiation drastically changes the results.
Keywords: Agency; CEO turnover; Executive compensation; Real options; Renegotiation. Vol. 33(12), 2009, pp. 1962–1980. Authors: Hori, Keiichi; Osano, Hiroshi. http://www.sciencedirect.com/science/article/B6V85-4WMDHDY-1/2/3dfa23d0b9cf18873e137d4878e07810

We present a model of public provision of education for blacks in two discriminatory regimes: white plantation controlled, and white yeoman-town controlled. We show that the ability to migrate to a non-discriminating district constrains the ability of both types of regimes to discriminate. The model produces time series of educational outcomes for whites and blacks that mimic the behavior seen in post-Reconstruction South Carolina up to the onset of the Civil Rights Act. It also fits the post-World War II black-white income differentials.
Keywords: Discrimination; Education; Development; Income convergence. Vol. 33(7), 2009, pp. 1490–1530. Authors: Canaday, Neil; Tamura, Robert. http://www.sciencedirect.com/science/article/B6V85-4VV2NFC-4/2/13b3ea6cf4d406d2ba3fe4d1c7dd9e44

Experiments on decision-making show that, when people evaluate risk, they often engage in "narrow framing": that is, in contrast to the prediction of traditional utility functions defined over wealth or consumption, they often evaluate risks in isolation, separately from other risks they are already facing. While narrow framing has many potential real-world applications, there are almost no tractable preference specifications that incorporate it into the standard framework used by economists. In this paper, we propose such a specification and demonstrate its tractability in both portfolio choice and equilibrium settings.
Keywords: Framing; Stock market participation; Diversification; Equity premium. Vol. 33(8), 2009, pp. 1555–1576. Authors: Barberis, Nicholas; Huang, Ming. http://www.sciencedirect.com/science/article/B6V85-4VV2NFC-1/2/0bbc52d6767a2a5dd7d22fcf877c17e7
We study differences in the adjustment of aggregate real wages in the manufacturing sector over the business cycle across OECD countries, combining results from different data and dynamic methods. Summary measures of cyclicality show genuine cross-country heterogeneity even after controlling for the impact of data and methods. We find that more open economies and countries with stronger unions tend to have less pro-cyclical (or more counter-cyclical) wages. We also find a positive correlation between the cyclicality of real wages and employment, suggesting that policy complementarities may influence the adjustment of both quantities and prices in the labor market.
Keywords: Real wages; Business cycle; Dynamic correlation; Labor market institutions. Vol. 33(6), 2009, pp. 1183–1200. Authors: Messina, Julian; Strozzi, Chiara; Turunen, Jarkko. http://www.sciencedirect.com/science/article/B6V85-4V761Y9-1/2/c5b6f897d26f8921d975aafd873429a2

This paper presents models of search behavior and provides experimental evidence that behavioral heterogeneity in search is linked to heterogeneity in individual preferences. Observed search behavior is more consistent with a new model that assumes dynamic updating of utility reference points than with models based on expected-utility maximization. Specifically, reference point updating and loss aversion play a role for more than a third of the population. The findings are of practical relevance as well as of interest for researchers who incorporate behavioral heterogeneity into models of dynamic choice behavior in, for example, consumer economics, labor economics, finance, and decision theory.
Keywords: Dynamic choice; Behavioral heterogeneity; Reference points; Individual differences; Loss aversion. Vol. 33(9), 2009, pp. 1719–1738. Authors: Schunk, Daniel. http://www.sciencedirect.com/science/article/B6V85-4W1SRKP-1/2/93cd4c3f2cbd46f05005e8735488ec27

We ask whether offering a menu of unemployment insurance contracts is welfare-improving in a heterogeneous population. We adopt a repeated moral hazard framework as in Shavell and Weiss (1979), supplemented by unobserved heterogeneity in agents' job opportunities. Our main theoretical contribution is a quasi-recursive formulation of our adverse selection problem, including a geometric characterization of the state space. Our main economic result is that optimal contracts for "bad" searchers tend to be upward-sloping due to an adverse selection effect. This is in contrast to the well-known optimal decreasing time profile of benefits in pure moral hazard environments, which continues to be optimal for "good" searchers in our model.
Keywords: Unemployment insurance; Recursive contracts; Adverse selection; Repeated moral hazard. Vol. 34(3), 2010, pp. 490–502. Authors: Hagedorn, Marcus; Kaul, Ashok; Mennel, Tim. http://www.sciencedirect.com/science/article/B6V85-4XHCHWT-1/2/f86fc5e8653020c812ebb16ce9780b59

This paper introduces a new model of structural breaks in the coefficients of economic relationships in which the breaks are driven by large past economic shocks. The breaks generated by these shocks can be taken to reflect stochastic changes in agents' decisions or beliefs triggered by extraordinary economic events. Our model specifies both the timing and the size of breaks as stochastic; the latter property enables us to investigate the qualitative effects that large shocks can have on economic relationships. As an empirical application of our model, the paper investigates the stability of the oil-economy relationship since the early sixties. Of the six large oil shocks identified by our data, the paper shows that only the first oil shock, at the end of 1973, caused a major long-term adverse effect on economic activity. The large oil price shocks that have happened since then did not have any significant negative effects on the slope of the oil-economy relationship.
Keywords: Structural breaks; State space model; Oil shocks. Vol. 34(3), 2010, pp. 417–436. Authors: Kapetanios, G.; Tzavalis, E. http://www.sciencedirect.com/science/article/B6V85-4XF83N9-1/2/e94e059c18e704d8ed06f927d111130b
This paper introduces a new long memory volatility process, denoted adaptive FIGARCH, or A-FIGARCH, which is designed to account for both long memory and structural change in the conditional variance process. Structural change is modeled by allowing the intercept to follow the smooth flexible functional form due to Gallant (1984. The Fourier flexible form. American Journal of Agricultural Economics 66, 204-208). A Monte Carlo study finds that the A-FIGARCH model outperforms the standard FIGARCH model when structural change is present, and performs at least as well in the absence of structural instability. An empirical application to stock market volatility is also included to illustrate the usefulness of the technique.
Keywords: FIGARCH; Long memory; Structural change; Stock market volatility. Vol. 33(8), 2009, pp. 1577–1592. Authors: Baillie, Richard T.; Morana, Claudio. http://www.sciencedirect.com/science/article/B6V85-4VV2NFC-5/2/efe25129db96605fa85f0112792ac052
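A deliberately simplified stand-in for the A-FIGARCH idea: a GARCH(1,1) whose intercept follows a Fourier flexible form, capturing smooth structural change in the unconditional variance level. The paper's actual model is fractionally integrated (FIGARCH), and all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# GARCH(1,1) with a time-varying intercept omega_t built from low-frequency
# sine/cosine terms (the Fourier flexible form of Gallant 1984).
T, alpha, beta = 3000, 0.05, 0.90
t = np.arange(T)
omega_t = 0.05 * (1.0
                  + 0.5 * np.sin(2 * np.pi * t / T)
                  + 0.3 * np.cos(4 * np.pi * t / T))   # strictly positive here

h = np.empty(T); ret = np.empty(T)
h[0] = omega_t[0] / (1 - alpha - beta)
ret[0] = np.sqrt(h[0]) * rng.standard_normal()
for s in range(1, T):
    h[s] = omega_t[s] + alpha * ret[s - 1] ** 2 + beta * h[s - 1]
    ret[s] = np.sqrt(h[s]) * rng.standard_normal()

# A constant-omega GARCH fitted to such data would misread the slow drift in
# omega_t as extra persistence, the kind of spurious long memory the
# adaptive intercept is meant to absorb.
print(f"sd of returns, first third: {ret[:T//3].std():.3f}")
print(f"sd of returns, last third:  {ret[2*T//3:].std():.3f}")
```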
The present paper shows that the savings motive critically affects the size and sign of scale effects in standard endogenous growth models. If the bequest motive dominates, the scale effect is positive. If the life-cycle motive dominates, the scale effect is ambiguous and may even be negative.
Keywords: Overlapping generations; Endogenous growth; Scale effects. Vol. 33(9), 2009, pp. 1639–1647. Authors: Dalgaard, Carl-Johan; Jensen, Martin Kaae. http://www.sciencedirect.com/science/article/B6V85-4VWHVX8-1/2/9a8927b3b4cabeb2b95cf9aa52b67312

We analyze stochastic adaptation in finite n-player games played by heterogeneous populations containing best repliers, better repliers, and imitators. Individuals select strategies by applying a personal learning rule to a sample from a finite history of past play. We give sufficient conditions for convergence to minimal closed sets under better replies and for selection of a Pareto dominant such set. Finally, we demonstrate that the stochastically stable states are sensitive to the sample size, by showing convergence to the risk-dominant equilibrium for sufficiently small sample sizes and to the Pareto-dominant equilibrium for sufficiently large sample sizes in 2x2 coordination games.
Keywords: Heterogeneous agents; Markov chain; Stochastic stability; Pareto dominance; Risk dominance. Vol. 33(8), 2009, pp. 1543–1554. Authors: Josephson, Jens. http://www.sciencedirect.com/science/article/B6V85-4VT14CV-1/2/8b77e83f1c35788ff28345422a1226ef

We provide an explicit characterization of the equilibrium when investors have heterogeneous risk preferences. Given market completeness, investors can achieve full risk sharing; thus a representative agent can be constructed, though this agent's risk aversion changes over time as the relative wealths of the individual investors change. We show that volatility depends on the covariance of aggregate risk aversion and stock returns. We find that heterogeneity increases volatility, and produces volatility clustering (ARCH effects) and "leverage"-like effects. Option prices exhibit implied volatility skews. There is predictability, and we assess the magnitude of investors' hedging demands and trading volume. Further, diversity is beneficial to all agents and entails welfare gains that can be substantial.
Keywords: Asset pricing; Preference heterogeneity; Volatility. Vol. 33(7), 2009, pp. 1379–1397. Authors: Weinbaum, David. http://www.sciencedirect.com/science/article/B6V85-4VC7DTK-2/2/34aad70622eba664c17d40113543d2af

This paper presents a model of learning about a game. Players initially have little knowledge about the game. Through playing the same game repeatedly, each player not only learns which action to choose but also constructs a personal view of the game. The model is studied using a hybrid payoff matrix of the prisoner's dilemma and coordination games. Results of computer simulations show that (1) when all players are slow at learning the game, they have only a partial understanding of the game, but might enjoy higher payoffs than in cases with full or no understanding of the game; and (2) when one player is quick in learning the game, that player obtains a higher payoff than the others. However, all can receive lower payoffs than in the case in which all players are slow learners.
Keywords: Learning; Subjective views; Computer simulation. Vol. 33(10), 2009, pp. 1739–1756. Authors: Hanaki, Nobuyuki; Ishikawa, Ryuichiro; Akiyama, Eizo. http://www.sciencedirect.com/science/article/B6V85-4W1SRKP-2/2/0e351e1c4a4ac019be7e885a048c2780

This paper shows that the R2 and the standard error have fatal flaws and are inadequate accuracy tests. Using data from a Krusell-Smith economy, I show that approximations for the law of motion of aggregate capital, for which the true standard deviation of aggregate capital is up to 14% (119%) higher than the implied value, and which are thus clearly inaccurate, can have an R2 as high as 0.9999 (0.99). Key to generating a more powerful test is that predictions of the aggregate law of motion are not updated with the aggregated simulated individual data.
Keywords: Numerical solutions; Simulations; Approximations. Vol. 34(1), 2010, pp. 79–99. Authors: Den Haan, Wouter J. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-5/2/c10d77bb52ecee2c4b29d7180d949266
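A minimal sketch of the point this abstract makes: a slightly mis-specified law of motion earns a near-perfect one-step R2 when its predictions are re-anchored on the true data each period, yet implies a badly wrong unconditional standard deviation once simulated on its own. The AR(1) setup and coefficients are illustrative, not the paper's Krusell-Smith economy.

```python
import numpy as np

rng = np.random.default_rng(6)

# "True" process and a slightly mis-specified approximation.
T = 200_000
rho_true, rho_approx, sigma = 0.995, 0.985, 0.01
x = np.empty(T); x[0] = 0.0
for t in range(1, T):
    x[t] = rho_true * x[t - 1] + sigma * rng.standard_normal()

# Static test: one-step-ahead predictions, re-anchored on the true series
# each period. The fit looks essentially perfect.
pred = rho_approx * x[:-1]
resid = x[1:] - pred
r2 = 1.0 - resid.var() / x[1:].var()

# Dynamic test: compare the unconditional standard deviations implied by
# each coefficient; the error no longer averages out.
sd_true = sigma / np.sqrt(1 - rho_true**2)
sd_approx = sigma / np.sqrt(1 - rho_approx**2)
print(f"one-step R2 of the wrong law: {r2:.4f}")
print(f"true sd vs implied sd: {sd_true:.4f} vs {sd_approx:.4f} "
      f"({100 * (sd_true / sd_approx - 1):.0f}% higher)")
```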
This paper studies a class of hierarchical games called single-leader-multiple-follower games (SLMFGs) that have important applications in economics and engineering. We consider such games in the context of boundedly rational agents that are limited in the information and computational power they may possess. Agents in our SLMFG are modeled as adaptive learners that use simple reinforcement learning schemes to learn their optimal behavior. The proposed learning approach is illustrated using a well-studied problem in economics. It is shown that, with a patiently learning leader, repeated plays of the game result in approximate equilibrium outcomes.
Keywords: Leader-follower games; Bounded rationality; Reinforcement learning. Vol. 33(8), 2009, pp. 1593–1603. Authors: Tharakunnel, Kurian; Bhattacharyya, Siddhartha. http://www.sciencedirect.com/science/article/B6V85-4VV2NFC-3/2/8fa15c45d6a3e3886a66bb6fa7ba297f

I generate priors for a vector autoregression (VAR) from a standard real business cycle (RBC) model, an RBC model with capital-adjustment costs and habit formation, and a sticky-price model with an unaccommodating monetary authority. The response of hours worked to a TFP shock differs sharply across these models. I compare the accuracy of forecasts made from each of the resulting dynamic stochastic general equilibrium vector autoregression (DSGE-VAR) models. Despite having different structural characteristics, the DSGE-VARs are comparable in terms of forecasting performance. As in previous work, DSGE-VARs compare favorably with atheoretical VARs.
Keywords: Model evaluation; Priors from DSGE models; Economic fluctuations; Hours debate; Business cycles. Vol. 33(4), 2009, pp. 864–882. Authors: Ghent, Andra C. http://www.sciencedirect.com/science/article/B6V85-4TY9MJW-4/2/915160dda69b7fd6339b7434d25ece9b

This paper investigates the contribution of monetary policy to the changes in output growth and inflation dynamics in the US. We identify a policy shock and a policy rule in a time-varying coefficients VAR using robust sign restrictions. The transmission of policy shocks has been relatively stable. The variance of the policy shock has decreased over time, but policy shocks account for a small fraction of the level and the variations in inflation and output growth volatility and persistence. Finally, we find little evidence of a significant increase in the long-run response of the interest rate to inflation.
Keywords: Monetary policy; Inflation persistence; Transmission of shocks; Time-varying coefficients structural VARs. Vol. 33(2), 2009, pp. 477–490. Authors: Canova, Fabio; Gambetti, Luca. http://www.sciencedirect.com/science/article/B6V85-4TCHKFX-1/2/245515b8c8731a4099ea84533fe25109

The new learning dynamic of Brown et al. [(1950). Solutions of games by differential equation. In: Kuhn, H.W., Tucker, A.W. (Eds.), Contributions to the Theory of Games I. Annals of Mathematics Studies, vol. 24. Princeton University Press, Princeton] is introduced to macroeconomic dynamics via the cobweb model with rational and naive forecasting strategies. This dynamic has appealing properties such as positive correlation and inventiveness. There is persistent heterogeneity in the forecasts, and chaotic behavior with bifurcations between periodic orbits and strange attractors for the same range of parameter values as in previous studies. Unlike Brock and Hommes [(1997). A rational route to randomness. Econometrica (65), 1059-1095], however, there exist intuitively appealing steady states where one strategy dominates, and there are qualitative differences in the resulting dynamics of the two approaches. Similar bifurcations arise in a parameter that represents how aggressively agents switch to better-performing strategies.
Keywords: Chaos; Cobweb model; Learning; BNN. Vol. 33(6), 2009, pp. 1201–1216. Authors: Waters, George A. http://www.sciencedirect.com/science/article/B6V85-4V70R6N-2/2/43b6f52709e46681b232370e39b9e4ed
Ghent, Andra C. I generate priors for a vector autoregression (VAR) from a standard real business cycle (RBC) model, an RBC model with capital-adjustment costs and habit formation, and a sticky-price model with an unaccommodating monetary authority. The response of hours worked to a TFP shock differs sharply across these models. I compare the accuracy of forecasts made from each of the resulting dynamic stochastic general equilibrium vector autoregression (DSGE-VAR) models. Despite having different structural characteristics, the DSGE-VARs are comparable in terms of forecasting performance. As in previous work, DSGE-VARs compare favorably with atheoretical VARs. Keywords: Model evaluation; Priors from DSGE models; Economic fluctuations; Hours debate; Business cycles. Vol. 33(4), 2009, pp. 864-882. http://www.sciencedirect.com/science/article/B6V85-4TY9MJW-4/2/915160dda69b7fd6339b7434d25ece9b

Canova, Fabio; Gambetti, Luca. This paper investigates the contribution of monetary policy to the changes in output growth and inflation dynamics in the US. We identify a policy shock and a policy rule in a time-varying coefficients VAR using robust sign restrictions. The transmission of policy shocks has been relatively stable. The variance of the policy shock has decreased over time, but policy shocks account for a small fraction of the level and the variations in inflation and output growth volatility and persistence. Finally, we find little evidence of a significant increase in the long-run response of the interest rate to inflation. Keywords: Monetary policy; Inflation persistence; Transmission of shocks; Time-varying coefficients structural VARs. Vol. 33(2), 2009, pp. 477-490. http://www.sciencedirect.com/science/article/B6V85-4TCHKFX-1/2/245515b8c8731a4099ea84533fe25109

Waters, George A. The new learning dynamic of Brown et al. [(1950). Solutions of games by differential equation. In: Kuhn, H.W., Tucker, A.W. (Eds.), Contributions to the Theory of Games I. Annals of Mathematics Studies, vol. 24. Princeton University Press, Princeton] is introduced to macroeconomic dynamics via the cobweb model with rational and naive forecasting strategies. This dynamic has appealing properties such as positive correlation and inventiveness. There is persistent heterogeneity in the forecasts and chaotic behavior with bifurcations between periodic orbits and strange attractors for the same range of parameter values as in previous studies. Unlike Brock and Hommes [(1997). A rational route to randomness. Econometrica 65, 1059-1095], however, there exist intuitively appealing steady states where one strategy dominates, and there are qualitative differences in the resulting dynamics of the two approaches. There are similar bifurcations in a parameter that represents how aggressively agents switch to better performing strategies. Keywords: Chaos; Cobweb model; Learning; BNN. Vol. 33(6), 2009, pp. 1201-1216. http://www.sciencedirect.com/science/article/B6V85-4V70R6N-2/2/43b6f52709e46681b232370e39b9e4ed

Reiter, Michael. This paper describes a method to solve models with a continuum of agents, incomplete markets and aggregate uncertainty. I use backward induction on a finite grid of points in the aggregate state space. The aggregate state includes a small number of statistics (moments) of the cross-sectional distribution of capital. For any given set of moments, agents use a specific cross-sectional distribution, called "proxy distribution", to compute the equilibrium. Information from the steady state distribution as well as from simulations can be used to choose a suitable proxy distribution. Keywords: Heterogeneous agents; Backward induction. Vol. 34(1), 2010, pp. 28-35. http://www.sciencedirect.com/science/article/B6V85-4WYDMW5-6/2/9b9d617936db9cba636eaca008aeb221

Nakamoto, Yasuhiro. The present paper examines the effects of consumption externalities on economic performance in a one-sector model with wealth preference. The presence of the wealth preference generates a wealth effect in consumption growth, which plays a crucial role for consumption externalities to have impacts on the economy. Our main findings are: (i) regardless of the assumption of inelastic labor supply, the distortionary effect of consumption externalities stays in the long run; (ii) the income tax as well as the consumption tax can modify the efficiency; and (iii) the numerical simulations supplement the theoretical findings. Keywords: Consumption externalities; Wealth preference; Wealth effect; Optimal tax policy; Intertemporal welfare. Vol. 33(12), 2009, pp. 2015-2029. http://www.sciencedirect.com/science/article/B6V85-4X1J705-1/2/3aafeb7b6ff9301a71d7c146839e0e11
Campanale, Claudio. In life-cycle portfolio choice models it is standard to assume that all agents invest in a diversified stock market index. In contrast, recent empirical evidence, summarized in Campbell [2006. Household finance. Journal of Finance 61, 1553-1604], suggests that households' financial portfolios are under-diversified and that there is substantial heterogeneity in diversification. In the present paper I examine the effects of heterogeneous under-diversification in a life-cycle portfolio choice model with uninsurable uncertain earnings and fixed per-period participation costs. The analysis of the model shows that realistically calibrated under-diversification gives an important contribution to the explanation of two key facts of households' portfolio allocation: the moderate stock market participation rate and the moderate stock share for participants. Keywords: Portfolio choice; Life-cycle; Under-diversification; Retirement wealth. Vol. 33(9), 2009, pp. 1682-1698. http://www.sciencedirect.com/science/article/B6V85-4W0WJ2J-2/2/2ca74a510acfd4dd712cb88b8de7acd8

Caputo, Michael R. The intrinsic comparative dynamics of a ubiquitous class of optimal control problems with a time-varying discount rate and time-distance discounting are derived and shown to be characterized by a positive semidefinite matrix. It is also shown that the said comparative dynamics are invariant to the functional form of the discount rate function and the type of agent. Consequently, if one limits econometric testing to the basic comparative dynamics of the given class of control problems, one cannot determine (i) the functional form of the discount rate function used by an agent, and thus whether an agent is a time-consistent or time-inconsistent decision maker, or (ii) whether an agent commits to a plan of action or takes into account the changing nature of his preferences when choosing a plan. Keywords: Comparative dynamics; Optimal control; Precommitment solution; Sophisticated solution; Time-distance discounting; Time inconsistency; Time-varying discount rate. Vol. 37(4), 2013, pp. 810-820. http://www.sciencedirect.com/science/article/pii/S0165188912002321

Bertinelli, Luisito; Cardi, Olivier; Sen, Partha. In a dynamic general equilibrium model with endogenous markups and labor market frictions, we investigate the effects of increased product market competition. Unlike most macroeconomic models of search, we endogenize the labor supply along the extensive margin. We find numerically that a model with an endogenous labor force participation decision produces a decline in the unemployment rate which is almost three times larger than that in a model with a fixed labor force. For a calibration capturing alternatively the European and the US labor markets, a deregulation episode, which lowers the markup by 3 percentage points, results in a fall in the unemployment rate by 0.17 and 0.05 percentage point, respectively, while the labor share is almost unaffected in the long run. The sensitivity analysis reveals that product market deregulation is more effective in countries where product and labor market regulations are high, unemployment benefits are small and the labor force is more responsive. Keywords: Imperfect competition; Endogenous markup; Search theory; Unemployment; Deregulation. Vol. 37(4), 2013, pp. 711-734. JEL: E24, J63, L16. http://www.sciencedirect.com/science/article/pii/S0165188912002187
Colombo, Luca; Labrecciosa, Paola. In this paper, we build a Closed-Loop Nash Equilibrium of a private property productive asset oligopoly. We compare and contrast private with common property in terms of exploitation rates and social welfare, and provide a comparative dynamic analysis with respect to the number of firms in the industry. Contrary to previous studies on oligopolistic exploitation of productive assets, before exploitation begins, the resource is parcelled out: each firm privately owns and manages the assigned parcel over the entire planning horizon. Compared with the common property regime, we find a new set of results, both in the short- and in the long-run. As for social welfare, we provide conditions on the implicit growth rate and the initial asset stock under which the socially optimal allocation of the resource implies a natural monopoly. Keywords: Closed-Loop Nash Equilibrium; Productive assets; Private property; Common property; Oligopoly. Vol. 37(4), 2013, pp. 838-853. JEL: D43, L13, Q20, C73. http://www.sciencedirect.com/science/article/pii/S0165188912002308

Abington, Casey; Blankenau, William. Human capital investment in early childhood can lead to large and persistent gains. Beyond this window of opportunity, human capital accumulation is more costly. Despite compelling evidence in support of this notion, government education spending is allocated disproportionately toward late childhood and young adulthood. We consider the consequences of a reallocation using an overlapping generations model with private and public spending on early and late childhood education. Taking as given the higher returns to early childhood investment, we find that the current allocation may nonetheless be appropriate. When we consider a homogeneous population, this can hold for moderate levels of government spending. With heterogeneity, this can hold for middle income workers. Lower income workers, by contrast, may benefit from a reallocation. Keywords: Government education expenditures; Human capital; Heterogeneous agents; Life-cycle model. Vol. 37(4), 2013, pp. 854-874. JEL: E62, I22, H52, J24. http://www.sciencedirect.com/science/article/pii/S016518891200231X

Cúrdia, Vasco; Finocchiaro, Daria. This paper proposes a method to structurally estimate a model with a regime shift and evaluates the importance of acknowledging the break in the estimation. We estimate a DSGE model on Swedish data taking into account the regime change in 1993, from exchange rate targeting to inflation targeting. Ignoring the break leads to spurious estimates. Accounting for the break suggests that monetary policy reacted strongly to exchange rate movements in the first regime, and mostly to inflation in the second. The sources of business cycles and their transmission mechanism are significantly affected by the exchange rate regime. Keywords: Bayesian estimation; DSGE models; Target zone; Inflation targeting; Regime change. Vol. 37(4), 2013, pp. 756-773. JEL: C1, C5, E5, F4. http://www.sciencedirect.com/science/article/pii/S0165188912002412
Blueschke, D.; Blueschke-Nikolaeva, V.; Savin, I. Optimal control of dynamic econometric models has a wide variety of applications, including economic-policy-relevant issues. There are several algorithms extending the basic case of linear-quadratic optimization and taking nonlinearity and stochastics into account, but they remain limited in a variety of ways, e.g., symmetry of the objective function and identical data frequencies of control variables. To overcome these problems, an alternative approach based on heuristics is suggested. To this end, we apply a 'classical' algorithm (OPTCON) and a heuristic approach (Differential Evolution) to three different econometric models and compare their performance. In this paper we consider scenarios of symmetric and asymmetric quadratic objective functions. Results provide strong support for the heuristic approach, encouraging its further application to optimum control problems. Keywords: Differential evolution; Dynamic programming; Nonlinear optimization; Optimal control. Vol. 37(4), 2013, pp. 821-837. JEL: C54, C61, E27, E61, E62. http://www.sciencedirect.com/science/article/pii/S0165188912002400
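As an illustration of the heuristic route, here is a minimal sketch using SciPy's differential_evolution on an invented finite-horizon control problem with an asymmetric quadratic objective, the feature that breaks classical linear-quadratic algorithms. It is not the OPTCON comparison from the paper; the model and penalty weights are made up.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical control problem: choose controls u_0..u_{T-1} for the state
# x_{t+1} = 0.9*x_t + u_t, keeping (x, u) near targets under an ASYMMETRIC
# quadratic loss: deviations of x above target are penalised four times more.
T, x0 = 10, 2.0
x_star, u_star = 0.0, 0.0

def loss(u):
    x, J = x0, 0.0
    for t in range(T):
        dx, du = x - x_star, u[t] - u_star
        J += (4.0 if dx > 0 else 1.0) * dx**2 + du**2   # asymmetry in the state penalty
        x = 0.9 * x + u[t]
    return J

res = differential_evolution(loss, bounds=[(-2.0, 2.0)] * T, seed=0, tol=1e-8)
print("controls :", np.round(res.x, 3))
print("objective:", round(res.fun, 4))
```

Because Differential Evolution only evaluates the objective, the asymmetry (or any other nonsmooth feature) costs nothing beyond extra function evaluations, which is the tractability argument the paper makes.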
Zhu, Song-Ping; Chen, Wen-Ting. In this paper, two analytic solutions for the valuation of European-style Parisian and Parasian options under the Black-Scholes framework are, respectively, presented. A key feature of our solution procedure is the reduction of a three-dimensional problem to a two-dimensional problem through a coordinate transform designed to combine the two time derivatives into one. Compared with some previous analytical solutions, which still require a numerical inversion of the Laplace transform, our solutions, written in terms of a double integral for the case of Parisian options and multiple integrals for the case of Parasian options, are both of explicit form; numerical evaluation of these integrals is straightforward. Numerical examples are also provided to demonstrate the correctness of our newly derived analytical solutions from the numerical point of view, through comparing the results obtained from our solutions with those obtained from adopting other standard finite difference approaches. Keywords: Parisian options; Parasian options; Analytical solution; Laplace transform. Vol. 37(4), 2013, pp. 875-896. JEL: G13, C02. http://www.sciencedirect.com/science/article/pii/S0165188912002424

Da Fonseca, José; Gnoatto, Alessandro; Grasselli, Martino. We present a flexible approach for the valuation of interest rate derivatives based on affine processes. We extend the methodology proposed in Keller-Ressel et al. (in press) by changing the choice of the state space. We provide semi-closed-form solutions for the pricing of caps and floors. We then show that it is possible to price swaptions in this multifactor setting with a good degree of analytical tractability. This is done via the Edgeworth expansion approach developed in Collin-Dufresne and Goldstein (2002). A numerical exercise illustrates the flexibility of the Wishart Libor model in describing the movements of the implied volatility surface. Keywords: Affine processes; Wishart process; Libor market model; Fast Fourier transform; Caps; Floors; Swaptions. Vol. 37(4), 2013, pp. 774-793. JEL: G13, C51. http://www.sciencedirect.com/science/article/pii/S0165188912002291

Tomura, Hajime. This paper presents a business cycle model capturing the stylized features of housing-market boom-bust cycles in developed countries. The model implies that over-optimism of mortgage borrowers generates housing-market boom-bust cycles, if mortgage borrowers are credit-constrained and savers do not share their optimism. This result holds without price stickiness. If price stickiness is introduced into the model, then the model replicates a low policy interest rate during a housing boom as an endogenous reaction to a low inflation rate, given a Taylor rule. Thus, the monetary easing observed during housing booms is consistent with the presence of over-optimism causing boom-bust cycles. Keywords: Asset price bubbles; Monetary policy; Financial liberalization; House prices; Credit constraints. Vol. 37(4), 2013, pp. 735-755. JEL: E44, E52. http://www.sciencedirect.com/science/article/pii/S0165188912002163

Zhang, Yuzhe. In this paper I provide a stopping-time-based solution to a long-term contracting problem between a risk-neutral principal and a risk-averse agent. The agent faces a stochastic income stream and cannot commit to the long-term contracting relationship. To compute the optimal contract, I also design an algorithm that is more efficient than value-function iteration. Keywords: Limited commitment; Risk sharing; Stopping time; Value-function iteration. Vol. 37(4), 2013, pp. 794-809. JEL: C63, D82, D86. http://www.sciencedirect.com/science/article/pii/S0165188912002266

Santoro, Marika; Wei, Chao. We interpret the marginal welfare cost of capital income taxes as the present discounted value of consumption distortions. Such an asset market interpretation emphasizes the importance of the interest rate used to value future distortions, especially in the presence of uncertainty. We find that the interest rate decreases as the tax rate increases, thus increasing the welfare cost. The variations in the interest rate are caused by amplified responses of consumption to exogenous shocks as a result of capital taxation. The welfare cost may be underestimated if variations in interest rates are ignored, especially when tax rates are high. Keywords: Welfare cost; Capital income taxes; Asset market. Vol. 37(4), 2013, pp. 897-909. JEL: E22, E62, E44, H25. http://www.sciencedirect.com/science/article/pii/S0165188912002436

Cellini, Roberto; Lambertini, Luca. We investigate dynamic R&D for process innovation in a Cournot duopoly where firms may either undertake independent ventures or form a cartel for cost-reducing R&D investments. By comparing the profit and welfare performances of the two settings in steady state, we show that private and social incentives towards R&D cooperation coincide for all admissible levels of the technological spillovers characterising innovative activity. We also evaluate the whole history of the dynamic system along the transition to the steady state, showing that the conflict between private and social incentives does not necessarily emerge. Keywords: Differential games; Process innovation; R&D cooperation; Spillovers. Vol. 33(3), 2009, pp. 568-582. http://www.sciencedirect.com/science/article/B6V85-4TCYCD8-1/2/740e238465bd3822a5aa0fba8e574b3a
Haruyama, Tetsugen. The literature on R&D-based growth establishes that market equilibrium is inefficient and derives optimal R&D policy. Normative analyses of this type use the assumption of steady state, largely motivated by analytical convenience. This paper questions this steady-state approach by introducing endogenous cycles as long-run equilibria. We show that the government fails to maximize welfare if policy which is optimal in steady state is myopically applied in cyclical equilibria. More specifically, we demonstrate that (i) cycles arise in the (very) standard R&D-based model of Grossman and Helpman [1991. Innovation and Growth in the Global Economy. MIT Press, Cambridge, MA (Chapter 3)] once the model is framed in discrete time, (ii) these cycles are inefficient in the sense that they prevent welfare maximization, (iii) optimal steady-state R&D policy fails to eliminate cycles, and can even create inefficient cycles, (iv) the application of R&D subsidies leads to a trade-off between growth and macroeconomic stability, and (v) optimal R&D policy in a fluctuating economy is state-dependent, which generalizes optimal steady-state R&D policy. Keywords: R&D; Cycles; Policy. Vol. 33(10), 2009, pp. 1761-1778. http://www.sciencedirect.com/science/article/B6V85-4W1SRKP-3/2/169744ee3066fac2e86205fe6e737535

Glomm, Gerhard; Jung, Juergen; Tran, Chung. Keywords: Early retirement; Pension reform; Public sector retirement; Capital accumulation. Vol. 33(4), 2009, pp. 777-797. http://www.sciencedirect.com/science/article/B6V85-4TN82D1-2/2/f99b53cd5dd1203d5c84b58692111bb2

Lanne, Markku; Luoto, Jani. We propose an estimation method for the new Keynesian Phillips curve (NKPC) based on a univariate noncausal autoregressive model for the inflation rate. By construction, our approach avoids a number of problems related to the GMM estimation of the NKPC. We estimate the hybrid NKPC with quarterly U.S. data (1955:1-2010:3), and both expected future inflation and lagged inflation are found important in determining the inflation rate, with the former clearly dominating. Moreover, inflation persistence turns out to be intrinsic rather than inherited from a persistent driving process. Keywords: Noncausal time series; Non-Gaussian time series; Inflation; Phillips curve. Vol. 37(3), 2013, pp. 561-570. JEL: C22, C51, E31. http://www.sciencedirect.com/science/article/pii/S0165188912001923

Xiao, Wei. In this paper we study a general equilibrium model with a housing market, and use stability under adaptive learning as a criterion to evaluate monetary policy rules. An important feature of the model is that there exist credit-constrained borrowers who use their housing assets as collateral to finance purchases. We evaluate both conventional Taylor rules and rules that incorporate other targets such as housing prices. We find that the effect of responding to housing prices, in addition to output and inflation, depends critically on the assumed information structure of the economy. Keywords: Adaptive learning; Taylor rule; Housing market; Credit channel; Monetary policy. Vol. 37(3), 2013, pp. 500-515. JEL: E3, E4, E5. http://www.sciencedirect.com/science/article/pii/S0165188912002138
Faia, Ester; Lechthaler, Wolfgang; Merkl, Christian. Several contributions have recently assessed the size of fiscal multipliers both in RBC models and in New Keynesian models. This paper computes fiscal multipliers within a labor selection model with turnover costs and Nash-bargained wages. We find that demand stimuli yield small multipliers, as they have little impact on hiring and firing decisions. By contrast, hiring subsidies and short-time work (German "Kurzarbeit") deliver large multipliers, as they stimulate job creation and employment. Keywords: Fiscal multipliers; Fiscal packages; Labor markets; Short-time work; Unemployment. Vol. 37(3), 2013, pp. 483-499. JEL: E62, H30, J20, H20. http://www.sciencedirect.com/science/article/pii/S0165188912001881

Poschke, Markus. This paper provides new theory and evidence on the relationship between ability and entrepreneurship. I show that there is a U-shaped relationship between the probability of entrepreneurship and both a person's schooling and wage when employed. This pattern can be explained in a model of occupational choice between wage work and entrepreneurship where a firm's productivity is uncertain before entry, potential wages are heterogeneous, and expected productivity is positively related to an entrepreneur's potential wage. Search, or the ability to keep good projects and reject bad ones, attracts low-ability agents into entrepreneurship. The model also explains why low-profit firms do not always exit. Keywords: Occupational choice; Entrepreneurship; Firm entry; Selection; Search. Vol. 37(3), 2013, pp. 693-710. JEL: E20, J23, L11, L16. http://www.sciencedirect.com/science/article/pii/S0165188912002175

Hilli, Amal; Laussel, Didier; Van Long, Ngo. We study ownership dynamics when the manager and the large shareholder, both risk neutral, simultaneously choose effort and monitoring level respectively to serve their non-congruent interests. We show that there is a wedge between the valuation of shares by atomistic shareholders and the large shareholder's valuation. At the Markov-perfect equilibrium, the large shareholder divests her shares. If the incongruence of their interests is mild, divestment is drastic: all her shares are sold immediately. If their interests diverge sharply, the divestment is gradual in order to prevent a sharp fall in the share price. In the limit the firm becomes purely managerial. Keywords: Ownership dynamics; Managerial firms. Vol. 37(3), 2013, pp. 666-679. JEL: G3. http://www.sciencedirect.com/science/article/pii/S0165188912002047
Schlögl, Erik. If a probability distribution is sufficiently close to a normal distribution, its density can be approximated by a Gram/Charlier Series A expansion. In option pricing, this has been used to fit risk-neutral asset price distributions to the implied volatility smile, ensuring an arbitrage-free interpolation of implied volatilities across exercise prices. However, the existing literature is restricted to truncating the series expansion after the fourth moment. This paper presents an option pricing formula in terms of the full (untruncated) series and discusses a fitting algorithm, which ensures that a series truncated at a moment of arbitrary order represents a valid probability density. While it is well known that valid densities resulting from truncated Gram/Charlier Series A expansions do not always have sufficient flexibility to fit all market-observed option prices perfectly, this paper demonstrates that option pricing in a model based on these densities is as tractable as the (far less flexible) original model of Black and Scholes (1973), allowing non-trivial higher moments such as skewness and excess kurtosis to be incorporated into the pricing of exotic options: Generalising the Gram/Charlier Series A approach to the multiperiod, multivariate case, a model calibrated to standard option prices is developed, in which a large class of exotic payoffs can be priced in closed form. Furthermore, this approach, when applied to a foreign exchange option market involving several currencies, can be used to ensure that the volatility smiles for options on the cross exchange rate are constructed in a consistent, arbitrage-free manner. Keywords: Hermite expansion; Semi-nonparametric estimation; Risk-neutral density; Option-implied distribution; Exotic option; Currency option. Vol. 37(3), 2013, pp. 611-632. JEL: C40, C63, G13, F31. http://www.sciencedirect.com/science/article/pii/S0165188912001996
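A minimal sketch of the fourth-moment truncation that the paper generalizes: the Gram/Charlier Series A density and a grid check of the positivity constraint that any fitting algorithm has to enforce. This is my own illustration with invented parameter values, not the paper's fitting algorithm.

```python
import numpy as np

def gram_charlier_pdf(x, skew=0.0, exkurt=0.0):
    """Gram/Charlier Series A density truncated at the fourth moment."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # standard normal density
    he3 = x**3 - 3 * x                               # probabilists' Hermite He3
    he4 = x**4 - 6 * x**2 + 3                        # probabilists' Hermite He4
    return phi * (1 + skew / 6 * he3 + exkurt / 24 * he4)

x = np.linspace(-6, 6, 2001)
for s, k in [(-0.3, 0.5), (-1.2, 0.5)]:              # illustrative moment choices
    f = gram_charlier_pdf(x, s, k)
    ok = bool(np.all(f >= 0))                        # truncation can turn negative
    print(f"skew={s:+.1f}, exkurt={k:.1f}: nonnegative on grid -> {ok}")
```

For the truncated fourth-moment case the set of (skewness, excess kurtosis) pairs yielding a nonnegative density is a bounded region, which is why fitted coefficients must be constrained; the paper's contribution is to make that constraint work at arbitrary truncation order.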
Massaro, Domenico. This paper derives a general New Keynesian framework with heterogeneous expectations by explicitly solving the micro-foundations underpinning the model. The resulting reduced form is analytically tractable and encompasses the representative rational agent benchmark as a special case. We specify a setup in which some agents, as a result of cognitive limitations, make mistakes when forecasting future macroeconomic variables and update their beliefs as new information becomes available, while other agents have rational expectations. We then address determinacy issues related to the use of different interest rate rules and derive policy implications for a monetary authority aiming at stabilizing the economy in a dynamic feedback system in which macroeconomic variables and heterogeneous expectations co-evolve over time. Keywords: Heterogeneous expectations; Monetary policy; Determinacy; Evolutionary dynamics. Vol. 37(3), 2013, pp. 680-692. JEL: E52, D83, D84, C62. http://www.sciencedirect.com/science/article/pii/S0165188912002151

Auray, Stéphane; de Blas, Beatriz. We simulate and estimate a new Keynesian search and matching model with sticky wages in which capital has to be financed with cash, at least partially. Our objective is to assess the ability of this framework to account for the persistence of output and inflation observed in the data. We find that our setup generates enough output and inflation persistence with standard stickiness parameters. The key factor driving these results is the inclusion of investment in the CIA constraint, rather than any other nominal or real rigidity. The model reproduces labor market dynamics after a positive increase in productivity: hours fall, nominal wages hardly react, and real wages go up with some delay. Regarding money supply shocks, we investigate the conditions under which our model specification generates the liquidity effect, a fact which is absent in most sticky price models. Keywords: Persistence; Sticky prices; Staggered bargaining wages; Monetary facts; Labor market facts; Cash-in-advance. Vol. 37(3), 2013, pp. 591-610. JEL: E32, E41, E52. http://www.sciencedirect.com/science/article/pii/S0165188912002011

Baumeister, Christiane; Liu, Philip; Mumtaz, Haroon. Based on a time-varying factor-augmented vector autoregression, we demonstrate that the propagation mechanism of monetary policy disturbances differs across disaggregate components of personal consumption expenditures. While many disaggregate prices rise temporarily in response to a monetary tightening in the early part of the sample, there is no evidence of a price puzzle at the aggregate level. The share of disaggregate prices that exhibit the price puzzle diminishes from the early 1980s onwards. There is also evidence of a substantial decline in the dispersion of disaggregate price responses over time. This gradual decrease in cross-sectional heterogeneity of disaggregate price responses is associated with a dampening effect on aggregate real economic activity and a stronger effect on the aggregate price level. We illustrate by means of a multi-sector sticky-price model augmented by a cost channel how key structural parameters would have had to change to match this evolution of sectoral price dynamics. Keywords: Structural FAVAR; Time variation; Monetary transmission; Disaggregate prices; Heterogeneous pricing decisions. Vol. 37(3), 2013, pp. 543-560. JEL: E30, E32. http://www.sciencedirect.com/science/article/pii/S0165188912001935

Himmels, Christoph; Kirsanova, Tatiana. We study the degree of precommitment that is required to eliminate the multiplicity of policy equilibria which arise if the policy maker acts under pure discretion. We apply a framework developed by Schaumburg and Tambalotti (2007) and Debortoli and Nunes (2010) to a standard New Keynesian model with government debt. We demonstrate the existence of expectation traps under limited commitment and identify the minimum degree of commitment which is needed to escape from these traps. We find that the degree of precommitment which is sufficient to generate uniqueness of the Pareto-preferred equilibrium requires the policy maker to stay in office for a period of two to five years. This is consistent with monetary policy arrangements in many developed countries. Keywords: Limited commitment; Commitment; Discretion; Multiple equilibria; Monetary and fiscal policy interactions. Vol. 37(3), 2013, pp. 649-665. JEL: E31, E52, E58, E61, C61. http://www.sciencedirect.com/science/article/pii/S016518891200214X
Bovi, Maurizio. No, they are not; at least not in the UK. By examining GDP dynamics we find that, over a time-span of two decades, an easy-to-perform adaptive expectations model systematically outperforms other standard predictors in terms of squared forecasting errors. This should reduce model uncertainty and thereby lead to increased homogeneity in expectations. However, data collected in surveys show that great variety in expectations persists even in this situation. Moreover, Granger tests indicate that the forecasting fitness of the best predictor can be further enhanced by the use of information provided by survey expectations. These results, based on real-time data and robust to both several predictors and nonlinearities, weaken the general validity of approaches assuming predictions based on efficient econometric models. Keywords: Survey expectations; Heterogeneous expectations; Forecasting models; Bounded rationality. Vol. 37(3), 2013, pp. 633-648. JEL: C53, D83, D84, E27. http://www.sciencedirect.com/science/article/pii/S0165188912002035

Klein, Paul; Telyukova, Irina A. We estimate a monthly income process using annual longitudinal household-level income data, in order to understand the nature of income risk faced by households at high frequency, and to provide an input for models that wish to study household decision-making at a higher frequency than available data. At both frequencies, idiosyncratic earnings shocks have a highly persistent component. At monthly frequency, transitory shocks account for most of the earnings variance; at annual frequency, the persistent component is dominant. We apply our estimates in the context of a standard incomplete-market model, and show that decision-making frequency per se makes a small difference. Keywords: Idiosyncratic income uncertainty; Frequency; Estimation. Vol. 37(3), 2013, pp. 535-542. JEL: E21, E24. http://www.sciencedirect.com/science/article/pii/S016518891200200X

Planas, C.; Roeger, W.; Rossi, A. In the production function approach, an accurate output gap assessment requires a careful evaluation of the total factor productivity (TFP) cycle. We build a common cycle model that links TFP to capacity utilization and we show that, in almost all of the pre-enlargement EU countries, using information about capacity utilization reduces both the total estimation error and the revisions in real-time estimates of the concurrent TFP cycle compared to a univariate decomposition. We also argue that relaxing the constant drift hypothesis in favour of a non-linear specification helps to offset a general tendency to underestimate the TFP cycle in the last decade. Keywords: Cobb-Douglas production function; Markov-switching and mixture innovation models; Real-time; Revisions. Vol. 37(3), 2013, pp. 577-590. JEL: C32, C51, D24, E32. http://www.sciencedirect.com/science/article/pii/S0165188912001893

Hernandez, Kolver. The paper presents a system reduction method (SRM) to improve the computational time to solve a large class of dynamic stochastic general equilibrium (DSGE) models with the methods of Anderson and Moore (1985), Klein (2000), Sims (2002) or Uhlig (1995). I measure the efficiency gains with seven models ranging from 47 to 333 equations. The time reduction for the Anderson-Moore algorithm (aim) ranges from 10% to 71%; Klein's function solab reduces its time between 51% and 79%; the time reduction for Sims' function gensys ranges from 25% to 59%; Uhlig's function solve reduces its time between 31% and 87%. The time reduction can be crucial for Bayesian estimation of medium to large scale models. Keywords: Solution of DSGE models; System reduction algorithm; Solution of linear rational expectation models; Bayesian estimation. Vol. 37(3), 2013, pp. 571-576. JEL: C63. http://www.sciencedirect.com/science/article/pii/S0165188912001972
Grossmann, Volker; Steger, Thomas; Trimborn, Timo. This paper characterizes the optimal time path of R&D and capital subsidization. Starting from the steady state under current R&D subsidization in the US, the R&D subsidy should significantly jump upwards and then slightly decrease over time. There is a small loss in welfare, however, from immediately setting the R&D subsidy to its optimal long-run level, compared to a time-varying R&D subsidy. The results do not depend on the financing scheme, namely lump sum taxation or factor income taxation. The optimal capital subsidy is time-varying under factor income taxation, but time-invariant when subsidies are financed by lump sum taxes. Keywords: R&D subsidy; Transitional dynamics; Semi-endogenous growth; Welfare. Vol. 37(3), 2013, pp. 516-534. JEL: H20, O30, O40. http://www.sciencedirect.com/science/article/pii/S0165188912002059

Lux, Thomas. We use weekly survey data on short-term and medium-term sentiment of German investors to estimate the parameters of a stochastic model of opinion formation governed by social interactions. The bivariate nature of our data set also allows us to explore the interaction between the two hypothesized opinion formation processes, while consideration of the simultaneous weekly changes of the stock index DAX enables us to study the influence of sentiment on returns. Technically, we extend the maximum likelihood framework for parameter estimation in agent-based models introduced by Lux (2009a) by generalizing it to bivariate and tri-variate settings. As it turns out, our results are consistent with strong social interaction in short-run sentiment. While one observes abrupt changes of mood in short-run sentiment, medium-term sentiment is a more slowly moving process in which the influence of social interaction seems to be less pronounced. The tri-variate model entails a significant effect from short-run sentiment on prices in-sample, but its out-of-sample predictive performance does not beat the random walk benchmark. Keywords: Opinion formation; Social interaction; Investor sentiment. Vol. 36(8), 2012, pp. 1284-1302. JEL: G12, G17. http://www.sciencedirect.com/science/article/pii/S016518891200084X
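The class of opinion-formation models Lux estimates can be simulated in a few lines. The sketch below is a hypothetical discretized version of a herding-type opinion index, with all parameters invented; it only illustrates the "abrupt changes of mood" the abstract refers to, not the paper's likelihood-based estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# N agents hold '+' or '-' opinions; switching probabilities rise with the
# prevailing mood x in [-1, 1] (herding strength a > 1 makes x = 0 unstable,
# producing persistent optimistic/pessimistic phases with abrupt reversals).
N, a, nu, dt, T = 100, 1.2, 1.0, 0.01, 50_000
x = 0.0
path = np.empty(T)
for t in range(T):
    p_up = min(nu * np.exp(a * x) * dt, 1.0)   # prob. a '-' agent flips to '+'
    p_dn = min(nu * np.exp(-a * x) * dt, 1.0)  # prob. a '+' agent flips to '-'
    n_plus = int(N * (1 + x) / 2)
    up = rng.binomial(N - n_plus, p_up)
    dn = rng.binomial(n_plus, p_dn)
    x = float(np.clip(x + 2 * (up - dn) / N, -1.0, 1.0))
    path[t] = x

print("mean opinion index:", round(path.mean(), 3), " std:", round(path.std(), 3))
```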
Huffman, Gregory W. The effects of distortional fiscal policies are studied within a model in which there is endogenous investment-specific technological change. Labor is used in the production of output and also for research purposes. Labor or capital taxes then distort the trade-off between developing new technologies and investing in existing types of capital. It is shown that if there is an externality in the research activity, then it may be socially optimal to impose both a capital tax and an investment tax credit. The growth rate is shown to be increasing in the rate of capital taxation and decreasing in the rate of labor taxation, although the effect of taxation on the growth rate is modest. This supports the observation that there is relatively little relationship between the growth rates of economies and their rates of taxation. Keywords: Investment-specific technological change; Investment tax credit; Optimal taxation; Capital taxation; Endogenous growth; Externalities. Vol. 32(11), 2008, pp. 3441-3458. http://www.sciencedirect.com/science/article/B6V85-4S21THB-1/2/ca6492b2cdc88b6b7a8e7d53e8327862

Newby, Elisa. This paper models the gold standard as a state-contingent commitment technology that is only feasible during peace. Monetary policy during war, when the gold convertibility rule is suspended, can still be credible if the policymaker's plan is to resume the gold standard in the future. The DSGE model developed in this paper suggests that the resumption of the gold standard was a sustainable plan, which replaced the gold standard as a commitment technology and made monetary policy time consistent. Trigger strategies support the equilibrium: private agents retaliate if a policymaker defaults on its plan to resume the gold standard. Keywords: Time consistency; Monetary policy; Monetary regimes; Gold standard. Vol. 36(10), 2012, pp. 1498-1519. JEL: C61, E31, E4, E5, N13. http://www.sciencedirect.com/science/article/pii/S0165188912000978
Anufriev, Mikhail; Bottazzi, Giulio; Marsili, Matteo; Pin, Paolo. The presence of excess covariance in financial price returns is an accepted empirical fact: the price dynamics of financial assets tend to be more correlated than their fundamentals would justify. We advance an explanation of this fact based on an intertemporal equilibrium multi-assets model of financial markets with an explicit and endogenous price dynamics. The market is driven by an exogenous stochastic process of dividend yields paid by the assets that we identify as market fundamentals. The model is rather flexible and allows for the coexistence of different trading strategies. The evolution of assets price and traders' wealth is described by a high-dimensional stochastic dynamical system. We identify the equilibria of the model consistent with a baseline assumption of procedural rationality. We show that these equilibria are characterized by excess covariance in prices with respect to the dividend process. Moreover, we show that in equilibrium there is a positive expected marginal profit in choosing more risky portfolios. As a consequence, the evolutionary pressure generates a trend towards more remunerative strategies, which, in turn, increase the variance of prices and the dynamic instability of the system. Keywords: Excess covariance; Capital asset pricing model; Efficient market hypothesis; Heterogeneous agents; Procedurally consistent equilibrium. Vol. 36(8), 2012, pp. 1142-1161. JEL: D81, G11, G12. http://www.sciencedirect.com/science/article/pii/S0165188912000875

Halbleib, Roxana; Pohlmeier, Winfried. The recent financial crisis has raised numerous questions about the accuracy of value-at-risk (VaR) as a tool to quantify extreme losses. In this paper we develop data-driven VaR approaches that are based on the principle of optimal combination and that provide robust and precise VaR forecasts for periods when they are needed most, such as the recent financial crisis. Within a comprehensive comparative study we provide the latest piece of empirical evidence on the performance of a wide range of standard VaR approaches and highlight the overall outperformance of the newly developed methods. Keywords: Value-at-risk; Optimal forecast combination; Quantile regression; Method of moments; Financial crisis. Vol. 36(8), 2012, pp. 1212-1228. JEL: C21, C5, G01, G17, G28, G32. http://www.sciencedirect.com/science/article/pii/S0165188912000887
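The paper's combination methods build on quantile regression and the method of moments; the sketch below is a simplified stand-in that combines two hypothetical VaR forecasts by minimizing the quantile ("pinball") loss over a grid of weights. The return process and the two candidate forecasts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha = 0.01
r = rng.standard_t(df=4, size=1000) * 0.01           # fat-tailed returns (made up)
var1 = np.full_like(r, np.quantile(r, alpha))        # e.g. a historical-simulation VaR
var2 = np.full_like(r, -2.326 * r.std())             # e.g. a normal-distribution VaR

def pinball(q, y, a):
    """Quantile loss: small when q is a good a-quantile forecast of y."""
    u = y - q
    return np.mean(np.where(u < 0, (a - 1) * u, a * u))

grid = np.linspace(0.0, 1.0, 101)
losses = [pinball(w * var1 + (1 - w) * var2, r, alpha) for w in grid]
w_opt = grid[int(np.argmin(losses))]
print(f"optimal weight on forecast 1: {w_opt:.2f}")
```

In practice the weight would be chosen on a validation sample and evaluated out of sample; minimizing the pinball loss is what makes the combined forecast target the alpha-quantile rather than the mean.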
Feigenbaum, James. Skinner's [1988. Risky income, life cycle consumption, and precautionary savings. Journal of Monetary Economics 22, 237-255] second-order approximation to the consumption function under CRRA utility is generalized to accommodate any structure of uninsurable income risk. To second order, a future income shock will induce precautionary saving in the present that depends on the variance of the expectation of the income shock at each intervening period. However, the expected rate of consumption growth depends only on the currently perceived variance of the expected present value of future income. In a finite-horizon model, precautionary saving produces a hump-shaped lifecycle profile of mean consumption primarily because the variance of future income decreases with age, but the lifecycle dynamics of total wealth also affect the shape of the profile. For a Markov income process with autocorrelations on the order of 0.9 or less, the second-order approximation performs surprisingly well for common parameter choices from the literature, but it does poorly as the autocorrelation approaches 1. Keywords: Precautionary saving; Timing of revelation of information; Euler equation; Consumption growth; Consumption hump; Lifecycle model. Vol. 32(12), 2008, pp. 3917-3938. http://www.sciencedirect.com/science/article/B6V85-4SJ2WRD-1/2/d432b3e1e80f3c145c4ce97bf6039d96

Hyung, Namwon; de Vries, Casper G. An investor concerned with the downside risk of a black swan only needs a small portfolio to reap the benefits from diversification. This matches actual portfolio sizes, but does contrast with received wisdom from mean-variance analysis and intuition regarding fat-tailed distributed returns. The concern for downside risk and the fat-tail property of the distribution of returns can explain the low portfolio diversification. A simulation and calibration study is used to demonstrate the relevance of the theory and to disentangle the relative importance of the different effects. Keywords: Portfolio diversification; Downside risk; Heavy tails; Calibration. Vol. 36(8), 2012, pp. 1162-1175. JEL: G0, G1, C2, C6. http://www.sciencedirect.com/science/article/pii/S0165188912000784

Den Haan, Wouter J.; De Wind, Joris. Users of regular higher-order perturbation approximations can face two problems: policy functions with odd oscillations and simulated data that explode. We propose a perturbation-based approximation that (i) does not have odd shapes, (ii) generates stable time paths, and (iii) avoids the drawbacks that hamper the pruned perturbation approach of Kim et al. (2008). For models with nontrivial nonlinearities, we find that our alternative and the pruned perturbation approximations give a good qualitative insight into the nonlinear aspects of the true solution, but can differ from the true solution in some quantitative aspects, especially during severe peaks and troughs. Keywords: Accuracy; Nonlinear numerical solutions. Vol. 36(10), 2012, pp. 1477-1497. JEL: C63, E21. http://www.sciencedirect.com/science/article/pii/S0165188912001078
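The pruning idea of Kim et al. (2008), which the last entry improves upon, is easy to see in one dimension. In this made-up example, naively iterating a second-order law of motion explodes once the state escapes the basin of attraction, while the pruned version, which keeps the first- and second-order parts in separate states, stays stable. The recursion is a sketch, not the authors' proposed alternative, and the coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
np.seterr(over="ignore")                  # the naive path may overflow to inf

a1, a2, sigma, T = 0.98, 0.30, 0.02, 20_000
eps = sigma * rng.standard_normal(T)

x = 0.0                                   # naive second-order iteration
xf = xs = 0.0                             # pruned: separate first/second-order states
naive = np.empty(T); pruned = np.empty(T)
for t in range(T):
    x = a1 * x + a2 * x**2 + eps[t]       # feeds the squared TOTAL state back in
    xf = a1 * xf + eps[t]                 # first-order state
    xs = a1 * xs + a2 * xf**2             # second-order correction uses xf only
    naive[t] = x
    pruned[t] = xf + xs

print("naive path finite :", bool(np.isfinite(naive).all()))
print("pruned max |x|    :", round(float(np.max(np.abs(pruned))), 4))
```

Pruning buys stability because the auxiliary states inherit the first-order dynamics, which are stable by construction; the quantitative distortions this introduces during large shocks are exactly what the entry above targets.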
De Wind, Joris oai:RePEc:eee:dyncon:v:26:y:2002:i:7-8:p:1195-12152012-12-25RePEc:eee:dyncon article 7-8 2002 26 7 1195 1215 http://www.sciencedirect.com/science/article/B6V85-459HNNF-7/2/ 2c23f9d496b4f7e2d95e76cf52a4673f Basak, Gopal Jagannathan, Ravi Sun, Guoqiang oai:RePEc:eee:dyncon:v:10:y:1986:i:1-2:p:261-2672012-12-25RePEc:eee:dyncon article 1-2 1986 10 6 261 267 http:// www.sciencedirect.com/science/article/B6V85-4D8W2RC-1F/2/23c2f24c5237c7ff3604d6ac82d356fc Sordi, Serena oai:RePEc:eee:dyncon:v:14:y:1990:i:3-4:p:709-7192012-12-25RePEc:eee:dyncon article 3-4 1990 14 10 709 719 http://www.sciencedirect.com/science/article/B6V85-45MFRX7-1B/2/116896a23a26a781621e7c3f72d0fabd Muller, Eitan Peles, Yoram C. oai:RePEc:eee:dyncon:v:26:y:2002:i:6:p:911-9182012-12-25RePEc:eee:dyncon article 6 2002 26 6 911 918 http://www.sciencedirect.com/science/article/B6V85-44KV265-2/2/ee4f6eb9c6a54c8b77b86207b554f02f Favard, Pascal oai:RePEc:eee:dyncon:v:3:y:1981:i:1:p:385-3872012-12-25RePEc:eee:dyncon article 1 1981 3 11 385 387 http://www.sciencedirect.com/science/article/B6V85-4D9X39C-X/2/ feff7598620e8629e427865ccdee5778 Deaton, Angus oai:RePEc:eee:dyncon:v:13:y:1989:i:1:p:55-802012-12-25RePEc:eee:dyncon article 1 1989 13 1 55 80 http://www.sciencedirect.com/science/article/ B6V85-45GNWGJ-C/2/f749a7bcccd7a2604466a6e31204e4d2 Sorger, Gerhard oai:RePEc:eee:dyncon:v:23:y:1999:i:8:p:1197-12062012-12-25RePEc:eee:dyncon article 8 1999 23 8 1197 1206 http:// www.sciencedirect.com/science/article/B6V85-3X6B5WV-6/2/1e152742759f4959f5b8a863b57b5289 Dechert, W. Davis Sprott, Julien C. Albers, David J. oai:RePEc:eee:dyncon:v:31:y:2007:i:6:p:1910-19372012-12-25RePEc:eee:dyncon article 6 2007 31 6 1910 1937 http://www.sciencedirect.com/science/article/B6V85-4N4S625-1/2/ 35c77d5551d6999b9bfa78f4c53b0ad6 Consiglio, Andrea Russino, Annalisa oai:RePEc:eee:dyncon:v:19:y:1995:i:1-2:p:253-2782012-12-25RePEc:eee:dyncon article 1-2 1995 19 253 278 http:// www.sciencedirect.com/science/article/B6V85-3YB56MM-21/2/ad5ce02671afd98aa5bd3595310a94e7 Cogley, Timothy Nason, James M. oai:RePEc:eee:dyncon:v:30:y:2006:i:9-10:p:1589-16142012-12-25RePEc:eee:dyncon article 9-10 2006 30 1589 1614 http://www.sciencedirect.com/science/article/B6V85-4JVT1J3-3/2/05a886f29007a1516833c4a84d20ab58 Samaniego, Roberto M. oai:RePEc:eee:dyncon:v:20:y:1996:i:1-3:p:281-3132012-12-25RePEc:eee:dyncon article 1-3 1996 20 281 313 http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-S/2/33f0786b7a40fc11fc76207630678564 Drugeon, Jean-Pierre oai:RePEc:eee:dyncon:v:26:y:2002:i:3:p:437-4492012-12-25RePEc:eee:dyncon article 3 2002 26 3 437 449 http://www.sciencedirect.com/science/article/B6V85-4494XGC-5/2/ fb72cebdad02934853dfeacaa2d959b1 Sato, Ryuzo Kim, Youngduk oai:RePEc:eee:dyncon:v:31:y:2007:i:12:p:3881-38882012-12-25RePEc:eee:dyncon article 12 2007 31 12 3881 3888 http://www.sciencedirect.com/ science/article/B6V85-4NB388S-1/2/2754addfe1f4750d6e79106201fb30f4 Gutierrez, Oscar oai:RePEc:eee:dyncon:v:20:y:1996:i:4:p:527-5582012-12-25RePEc:eee:dyncon article 4 1996 20 4 527 558 http:// www.sciencedirect.com/science/article/B6V85-3VWPNNV-1/2/ee65551233446d3cf55c5dadf84a37da Krusell, Per Smith, Anthony Jr. 
Foldes, Lucien (2001). Journal of Economic Dynamics and Control 25(12), 1951-1971. http://www.sciencedirect.com/science/article/B6V85-43HBY8X-7/2/1810f4dd1f395632f94fa9888d87cce8
Miller, Marcus; Zhang, Lei (1996). Journal of Economic Dynamics and Control 20(9-10), 1641-1660. http://www.sciencedirect.com/science/article/B6V85-3VV430P-7/2/5d747f994f91ce18ca3030b305c15d73
Guo, Jang-Ting; Lansing, Kevin J. (1999). Journal of Economic Dynamics and Control 23(7), 967-995. http://www.sciencedirect.com/science/article/B6V85-3WRBPDW-3/2/db9f898d87ddd641b7eb96326a4b83bf
Tesfatsion, Leigh; Veitch, John M. (1990). Journal of Economic Dynamics and Control 14(1), 151-173. http://www.sciencedirect.com/science/article/B6V85-45F8Y2D-11/2/88c3cab69fd73d6deaa36b86a6b48e46
Georges, Christophre (2008). Journal of Economic Dynamics and Control 32(9), 2809-2825. http://www.sciencedirect.com/science/article/B6V85-4R98K27-1/2/2c1d312921dbd5e8d6abd4ae078a20bc
Dawid, Herbert (2007). Journal of Economic Dynamics and Control 31(6), 2108-2133. http://www.sciencedirect.com/science/article/B6V85-4N5KXMH-1/2/4479815227e4387b6ea043d830af6e4e
Sener, Fuat (2006). Journal of Economic Dynamics and Control 30(5), 769-805. http://www.sciencedirect.com/science/article/B6V85-4GG2HMR-1/2/2fdc03fdbd16471c4e6e0559151e24c9
Engle, Robert F.; Granger, C. W. J.; Kraft, Dennis (1984). Journal of Economic Dynamics and Control 8(2), 151-165. http://www.sciencedirect.com/science/article/B6V85-4C9BX3V-X/2/798c427eea71d44a424ff8de0ea8f26c
Gjerstad, Steven (2007). Journal of Economic Dynamics and Control 31(5), 1753-1780. http://www.sciencedirect.com/science/article/B6V85-4KXDR1X-1/2/c324ebb447eb23575d14722c44b521cf
Gerlagh, Reyer; Keyzer, Michiel A. (2004). Journal of Economic Dynamics and Control 28(6), 1159-1184. http://www.sciencedirect.com/science/article/B6V85-48V83DF-1/2/2b94fdd2f413d94113f49a21318c5ed7
Marks, Robert (1998). Journal of Economic Dynamics and Control 22(8-9), 1209-1233. http://www.sciencedirect.com/science/article/B6V85-3TMR2BM-4/2/450ea5cacc7c5c16ffc5c16146450139
Hassler, John A. A. (1996). Journal of Economic Dynamics and Control 20(6-7), 1115-1143. http://www.sciencedirect.com/science/article/B6V85-3VW1T3H-8/2/39c395441031dbd6d22fee4d2279831b
Chiarella, Carl; He, Xue-Zhong (2003). Journal of Economic Dynamics and Control 27(3), 503-531. http://www.sciencedirect.com/science/article/B6V85-46YVCK2-8/2/00cb432fb116d591d0ede71ac5b11f13
Kozicki, Sharon (1999). Journal of Economic Dynamics and Control 23(7), 997-1028. http://www.sciencedirect.com/science/article/B6V85-3WRBPDW-4/2/b5d116c5bcd099a254dd59f64d7cbf29
Ma, Chenghu (1998). Journal of Economic Dynamics and Control 23(1), 97-112. http://www.sciencedirect.com/science/article/B6V85-3V7JBM1-6/2/49adebed2ea9c6701ac2e078fb88924f
Thille, Henry (2006). Journal of Economic Dynamics and Control 30(7), 1081-1104. http://www.sciencedirect.com/science/article/B6V85-4GR8MTW-1/2/a9c6ab52fee96543f81852486a534680
Bacry, E.; Kozhemyak, A.; Muzy, Jean-Francois (2008). Journal of Economic Dynamics and Control 32(1), 156-199. http://www.sciencedirect.com/science/article/B6V85-4P5RVPH-1/2/94b3bf65495c93a6632da627461c2898
Dufour, Jean-Marie; Khalaf, Lynda; Kichian, Maral (2006). Journal of Economic Dynamics and Control 30(9-10), 1707-1727. http://www.sciencedirect.com/science/article/B6V85-4K3D307-1/2/9559ce36c2664b88ded7ba581d696183
Brown, Paul M. (1996). Journal of Economic Dynamics and Control 20(4), 583-600. http://www.sciencedirect.com/science/article/B6V85-3VWPNNV-3/2/8295b584cacb3de35d8de0bd00d83f9b
Brandt, Michael W.; Zeng, Qi; Zhang, Lu (2004). Journal of Economic Dynamics and Control 28(10), 1925-1954. http://www.sciencedirect.com/science/article/B6V85-4BN0JCF-2/2/cf52ffd80929e37b75a1ee8efc470cdf
Aoki, Masanao (1988). Journal of Economic Dynamics and Control 12(2-3), 595-607. http://www.sciencedirect.com/science/article/B6V85-45MFRW4-11/2/8185738be467a3901b445bc03dc73d25
Kelly, David L.; Kolstad, Charles D. (1999). Journal of Economic Dynamics and Control 23(4), 491-518. http://www.sciencedirect.com/science/article/B6V85-3VF9C8K-1/2/4004a4c9d794dde8bf9d9aeb00efaf81
Boldrin, Michele; Deneckere, Raymond J. (1990). Journal of Economic Dynamics and Control 14(3-4), 627-653. http://www.sciencedirect.com/science/article/B6V85-45MFRX7-17/2/4480069833cb86c7232fac635dd03290
Dockner, Engelbert; Feichtinger, Gustav (1986). Journal of Economic Dynamics and Control 10(1-2), 37-39. http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-7/2/96f66c387c83f690dda247adceab2c6d
Ayong Le Kama, Alain D. (2001). Journal of Economic Dynamics and Control 25(12), 1911-1918. http://www.sciencedirect.com/science/article/B6V85-43HBY8X-4/2/5c301da52bd80a7b1c10caf40582c5a5
Reiter, Michael (2009). Journal of Economic Dynamics and Control 33(3), 649-665. Abstract: The paper proposes a numerical solution method for general equilibrium models with a continuum of heterogeneous agents that combines elements of projection and of perturbation methods. The basic idea is to solve first for the stationary solution of the model, without aggregate shocks but with fully specified idiosyncratic shocks. Afterwards one computes a first-order perturbation of the solution in the aggregate shocks. This approach allows one to include a high-dimensional representation of the cross-sectional distribution in the state vector. The method is applied to a model of household saving with uninsurable income risk and liquidity constraints. Techniques are discussed to reduce the dimension of the state space such that higher-order perturbations are feasible. Keywords: Heterogeneous agents; Projection methods; Perturbation methods; Invariant distribution. http://www.sciencedirect.com/science/article/B6V85-4TG9HNN-3/2/e514ab7a1688979bd9418ef900622317
Liu, Zheng; Pappa, Evi (2008). Journal of Economic Dynamics and Control 32(7), 2085-2117. Abstract: In a two-country world where each country has a traded and a non-traded sector and each sector has sticky prices, optimal independent policy in general cannot replicate the natural-rate allocations. There are potential welfare gains from coordination since the planner under a cooperating regime internalizes a terms-of-trade externality that independent policymakers overlook. If the countries have symmetric trading structures, however, the gains from coordination are quantitatively small. With asymmetric trading structures, the gains can be sizable since, in addition to internalizing the terms-of-trade externality, the planner optimally engineers a terms-of-trade bias that favors the country with a larger traded sector. http://www.sciencedirect.com/science/article/B6V85-4PN05HJ-1/1/058fe1c3e8c14e2da74fb1ec4fc1baaa
Johri, Alok; Letendre, Marc-Andre (2007). Journal of Economic Dynamics and Control 31(8), 2744-2773. http://www.sciencedirect.com/science/article/B6V85-4MCWM9K-1/2/95ebeb5170f7be376dbe22c543153ddc
Amman, Hans M. (1997). Journal of Economic Dynamics and Control 21(6), 905-906. http://www.sciencedirect.com/science/article/B6V85-3SWYBJD-F/2/20d5a8feb4b6fd3d8b67f9fd3538f001
Ferreira, Eva; Regulez, Marta (1996). Journal of Economic Dynamics and Control 20(5), 963-966. http://www.sciencedirect.com/science/article/B6V85-3VVVR8J-D/2/8da8d51fd890a6cc7d444e270543d346
Tsur, Yacov; Zemel, Amos (1996). Journal of Economic Dynamics and Control 20(6-7), 1289-1305. http://www.sciencedirect.com/science/article/B6V85-3VW1T3H-H/2/6be4c2f2c1bad9586fee7270214968e5
Noe, Thomas H.; Pi, Lynn (2000). Journal of Economic Dynamics and Control 24(2), 189-217. http://www.sciencedirect.com/science/article/B6V85-3YJY5V9-2/2/18b7392543d6020f6b9768aae227ddce
Cripps, M. W. (1996). Journal of Economic Dynamics and Control 20(1-3), 521-523. http://www.sciencedirect.com/science/article/B6V85-488R78P-2/2/60e3d8be35d257d201d23534670dec4c
Sethi, Suresh P.; Taksar, Michael I.; Presman, Ernst L. (1995). Journal of Economic Dynamics and Control 19(5-7), 1297-1298. http://www.sciencedirect.com/science/article/B6V85-3YB56JR-11/2/3d7e9186064e0530ea3a4aed5ee57982
Streufert, Peter A. (1996). Journal of Economic Dynamics and Control 20(1-3), 385-413. http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-X/2/cb7591134a304e8597e65ed2d8915708
Van Der Ploeg, F. (1987). Journal of Economic Dynamics and Control 11(1), 123-145. http://www.sciencedirect.com/science/article/B6V85-4C7WMJR-7/2/70a15e1fd469671d6c24ab53647e45ab
McCall, Brian P. (1991). Journal of Economic Dynamics and Control 15(2), 387-408. http://www.sciencedirect.com/science/article/B6V85-45FCJHP-8/2/3a6e5ad08b4efcbd1e156b10245c2509
Mike, Szabolcs; Farmer, J. Doyne (2008). Journal of Economic Dynamics and Control 32(1), 200-234. http://www.sciencedirect.com/science/article/B6V85-4PK8MF5-1/2/fe13694a952183e12a79fa98ef8ef1d3
Kollmann, Robert (1996). Journal of Economic Dynamics and Control 20(5), 945-961. http://www.sciencedirect.com/science/article/B6V85-3VVVR8J-C/2/5e7e3a9e5b78dba318a85ad11ae91e72
Calzolari, Giacomo; Lambertini, Luca (2007). Journal of Economic Dynamics and Control 31(12), 3822-3842. http://www.sciencedirect.com/science/article/B6V85-4N9MYFR-1/2/9bfe8984cc418c617f77792a9d16181e
Coloma, German (1999). Journal of Economic Dynamics and Control 23(8), 1177-1196. http://www.sciencedirect.com/science/article/B6V85-3X6B5WV-5/2/098ebd3db3052dbeb901cc46fa7a58e8
Deissenberg, Christophe; Nyssen, Jules (1998). Journal of Economic Dynamics and Control 22(2), 247-266. http://www.sciencedirect.com/science/article/B6V85-3SX6H28-D/2/1955d060496d0adcfa658275f18a335a
Marimon, Ramon; McGrattan, Ellen; Sargent, Thomas J. (1990). Journal of Economic Dynamics and Control 14(2), 329-373. http://www.sciencedirect.com/science/article/B6V85-45KNJWV-B/2/721de0d30eb3997080e69e656093021b
Jensen, Henrik (1999). Journal of Economic Dynamics and Control 23(8), 1133-1153. http://www.sciencedirect.com/science/article/B6V85-3X6B5WV-3/2/0ad2e76d1e3babd60e63431a7681f811
Judd, Kenneth L. (2002). Journal of Economic Dynamics and Control 26(9-10), 1557-1583. http://www.sciencedirect.com/science/article/B6V85-44VG4D4-5/2/3f1870b3df89a221566f9021f6c75f46
Cheng, Leonard K.; Dinopoulos, Elias (1996). Journal of Economic Dynamics and Control 20(5), 905-923. http://www.sciencedirect.com/science/article/B6V85-3VVVR8J-9/2/ae45596c18373165acc26a403dc6a13a
Elliott, Robert J.; Hyndman, Cody B. (2007). Journal of Economic Dynamics and Control 31(7), 2350-2373. http://www.sciencedirect.com/science/article/B6V85-4M04DVJ-1/2/fb1c2ae1835a47c5f1109fdef5c2f9b0
Bullard, James; Duffy, John (1998). Journal of Economic Dynamics and Control 22(2), 179-207. http://www.sciencedirect.com/science/article/B6V85-3SX6H28-9/2/477d4b078c6dd32805cf9d1dd4cb9091
Weeren, A. J. T. M.; Schumacher, J. M.; Engwerda, J. C. (1999). Journal of Economic Dynamics and Control 23(4), 641-669. http://www.sciencedirect.com/science/article/B6V85-3VF9C8K-7/2/8b0aabcf25a579b2368a3d58627b8ceb
Damgaard, Anders; Fuglsbjerg, Brian; Munk, Claus (2003). Journal of Economic Dynamics and Control 28(2), 209-253. http://www.sciencedirect.com/science/article/B6V85-475BBY6-2/2/baf732769bdc998355a938c05a28c0da
Odening, Martin; Mußhoff, Oliver; Hirschauer, Norbert; Balmann, Alfons (2007). Journal of Economic Dynamics and Control 31(3), 994-1014. http://www.sciencedirect.com/science/article/B6V85-4K1HDMB-1/2/867b5c7f64105226e39771ed05038800
Fudenberg, Drew; Levine, David K. (1995). Journal of Economic Dynamics and Control 19(5-7), 1065-1089. http://www.sciencedirect.com/science/article/B6V85-3YB56JR-N/2/e919e5ce70f1112b070ccf8185d6acb6
LeBaron, Blake; Arthur, W. Brian; Palmer, Richard (1999). Journal of Economic Dynamics and Control 23(9-10), 1487-1516. http://www.sciencedirect.com/science/article/B6V85-3Y9RKX5-B/2/d2a747beda3bf29f6ddf1abd2a699afc
Fujiwara, Ippei; Teranishi, Yuki (2008). Journal of Economic Dynamics and Control 32(8), 2398-2427. http://www.sciencedirect.com/science/article/B6V85-4PR3G6X-1/2/02a3ad49fc0a4ab48da7dba0ad4bf331
Ioannides, Yannis M.; Taub, Bart (1992). Journal of Economic Dynamics and Control 16(2), 225-241. http://www.sciencedirect.com/science/article/B6V85-45NHVY2-3/2/d184cd216ede2f7d6566b6a325bffa91
Alvarez, Luis H.R.; Koskela, Erkki (2007). Journal of Economic Dynamics and Control 31(7), 2461-2485. http://www.sciencedirect.com/science/article/B6V85-4M69JC0-1/2/1166a7f6b1c2f05b320b67f64351de8e
Konishi, Hideo; Sandfort, Michael T. (2002). Journal of Economic Dynamics and Control 26(6), 1029-1052. http://www.sciencedirect.com/science/article/B6V85-44KV265-8/2/3cd57b36a8f85e76fc27017c822599d2
Roques, Fabien A.; Savva, Nicos (2009). Journal of Economic Dynamics and Control 33(2), 507-524. Abstract: We study the impact of price cap regulation on the level and timing of investment in an oligopolistic (Cournot) industry facing stochastic demand. We find that a price ceiling affects investment decisions in two mutually competing ways: it makes the option to defer investment more valuable, but at the same time it reduces the incentive for firms to strategically underinvest in order to raise prices. We show that while sensible price cap regulation speeds up investment, a low price cap can be a disincentive for investment. There exists an optimal price cap independent of market concentration - the competitive investment price trigger - that maximizes investment incentives and in the long term increases industry installed capacity. This optimal price cap becomes less effective and less robust as the market becomes more competitive and as demand volatility increases. Errors in estimation of the optimal price cap have asymmetric effects: underestimation has more dire consequences than overestimation. Keywords: Real options; Stochastic games; Price cap regulation; Demand uncertainty; Utility industries. http://www.sciencedirect.com/science/article/B6V85-4TCR1KF-1/2/781aa72be2bf55c29d704b4579771fbe
Misina, Miroslav (2006). Journal of Economic Dynamics and Control 30(8), 1431-1440. http://www.sciencedirect.com/science/article/B6V85-4H27BVX-1/2/0a45d2e3be7fa4ce33161d69cbdc1cbb
Agarwal, Vikas; Gómez, Juan-Pedro; Priestley, Richard (2012). Journal of Economic Dynamics and Control 36(10), 1600-1625. Abstract: This paper shows that portfolio constraints have important implications for management compensation and performance evaluation. In particular, in the presence of portfolio constraints, allowing for benchmarking can be beneficial. Benchmark design arises as an alternative effort inducement mechanism vis-a-vis relaxing portfolio constraints. Numerically, we solve jointly for the manager's linear incentive fee and the optimal benchmark. The size of the incentive fee and the risk adjustment in the benchmark composition are increasing in the investor's risk tolerance and the manager's ability to acquire and process private information. Keywords: Market timing; Incentive fee; Benchmarking; Portfolio constraints. JEL: D81, D82, J33. http://www.sciencedirect.com/science/article/pii/S0165188912001133
Kalman, R. E. (1980). Journal of Economic Dynamics and Control 2(1), 395-396. http://www.sciencedirect.com/science/article/B6V85-4D9X3KF-3X/2/616941bb0353b8ea73d6399d2f05aeca
Ang, Andrew; Bekaert, Geert (2002). Journal of Economic Dynamics and Control 26(7-8), 1243-1274. http://www.sciencedirect.com/science/article/B6V85-459HNNF-9/2/fb826e2f371a130283053024262671b0
Fukuda, Shin-ichi (1989). Journal of Economic Dynamics and Control 13(3), 401-420. http://www.sciencedirect.com/science/article/B6V85-46X3RNB-4/2/dd900545a13bc5bbbb53425177c2e42d
Sarkar, Sudipto (2000). Journal of Economic Dynamics and Control 24(2), 219-225. http://www.sciencedirect.com/science/article/B6V85-3YJY5V9-3/2/3fbe4d7b87e3343e1aea1af4186afc62
Reiter, Michael (1999). Journal of Economic Dynamics and Control 23(9-10), 1329-1353. http://www.sciencedirect.com/science/article/B6V85-3Y9RKX5-5/2/2b5d2a15a6784e319c3c07958e3688e4
McAdam, Peter (1998). Journal of Economic Dynamics and Control 22(3), 483-487. http://www.sciencedirect.com/science/article/B6V85-3SX82KJ-8/2/7329f706cd51a679c0cbc80f1ddcc2c6
Haunschmied, Josef L.; Kort, Peter M.; Hartl, Richard F.; Feichtinger, Gustav (2003). Journal of Economic Dynamics and Control 27(4), 701-716. http://www.sciencedirect.com/science/article/B6V85-44HWSHX-1/2/692bf6e9b81dd83c5a86a1ac1657c749
Lioui, Abraham; Poncet, Patrice (1996). Journal of Economic Dynamics and Control 20(6-7), 1101-1113. http://www.sciencedirect.com/science/article/B6V85-3VW1T3H-7/2/dbc493e77d59623db0dfcbd0f6f09b15
Dai, Min; Wang, Hefei; Yang, Zhou (2012). Journal of Economic Dynamics and Control 36(10), 1585-1599. Abstract: Should an investor unwind his portfolio in the face of changing economic conditions? We study an investor's optimal trading strategy with finite horizon and transaction costs in an economy that switches stochastically between two market conditions. We fully characterize the investor's time-dependent investment strategy in a “bull” market and a “bear” market. We show that when the market switches from the “bull” market to the “bear” market, complete deleveraging, reducing the degree of leverage, or keeping leverage unchanged may all be optimal strategies, subject to underlying market conditions. We further show that the investor may optimally keep leverage unchanged in the “bear” market, particularly so for illiquid assets. On the other hand, a lower borrowing cost in the “bear” market would prevent sell-offs. Keywords: Leverage; Portfolio selection; Bull–bear switching market; Transaction costs. JEL: D11, D91, G11, C61. http://www.sciencedirect.com/science/article/pii/S0165188912000930
Altug, Sumru; Tan, Barış; Gencer, Gözde (2012). Journal of Economic Dynamics and Control 36(10), 1534-1550. Abstract: The purpose of this paper is to understand differences in cyclical phenomena across a broad range of developed and emerging countries based on the behavior of two key economic time series: industrial production and employment. The paper characterizes the series in question as a recurring Markov chain. Univariate processes are estimated for each series individually, and a composite indicator is constructed by using information on both series. Based on tests of equality of the estimated Markov chains across countries as well as the expected times to switch between different states, we find evidence that (i) the developed and emerging economies are “de-coupled” from each other in terms of their cyclical dynamics, and (ii) the behavior of industrial production and employment growth are “de-coupled” for the emerging economies. Our results suggest new directions for the analysis of emerging economy cyclical fluctuations. Keywords: Markov chain; Tests of time homogeneity and time dependence; Composite indicator. JEL: C14, E32. http://www.sciencedirect.com/science/article/pii/S0165188912000917
Sanchez-Marcos, Virginia; Sanchez-Martin, Alfonso R. (2006). Journal of Economic Dynamics and Control 30(9-10), 1615-1646. http://www.sciencedirect.com/science/article/B6V85-4JXPS1T-1/2/652bb373ab1b01292694fcc52c65a814
Ireland, Peter N. (1997). Journal of Economic Dynamics and Control 22(1), 87-108. http://www.sciencedirect.com/science/article/B6V85-3SX6H28-5/2/dec33fd66f7a2e4c0bea6c7a55f25f55
Honkapohja, Seppo; Mitra, Kaushik (2003). Journal of Economic Dynamics and Control 27(8), 1437-1457. http://www.sciencedirect.com/science/article/B6V85-45V6V3W-4/2/ab21e9dc778873b201e1c6e24c240df4
Daly, Michael J.; Naqib, Fadle (1984). Journal of Economic Dynamics and Control 7(3), 233-240. http://www.sciencedirect.com/science/article/B6V85-4C9BX2K-3/2/dcf3aa95cfcfc2e7587c1121a9b0a837
Steger, Thomas M. (2002). Journal of Economic Dynamics and Control 26(6), 1053-1068. http://www.sciencedirect.com/science/article/B6V85-44KV265-9/2/4ea3ea7d95a8f6a81069853cfe2b22ca
Pintus, Patrick; Sands, Duncan; de Vilder, Robin (2000). Journal of Economic Dynamics and Control 24(2), 247-272. http://www.sciencedirect.com/science/article/B6V85-3YJY5V9-5/2/1204b48b2135ede2ba0a18eb4bf52eab
Hansen, Per Svejstrup (1999). Journal of Economic Dynamics and Control 23(7), 1065-1076. http://www.sciencedirect.com/science/article/B6V85-3WRBPDW-6/2/e92f717e0584024a990a2ddafd4c0502
Westelius, Niklas J. (2009). Journal of Economic Dynamics and Control 33(4), 985-996. Abstract: In the New Keynesian framework, the public's expectation about the future path of monetary policy is an important determinant of current economic conditions. This paper examines the impact of unobservable shifts in the central bank's output gap target on inflation and output dynamics. I show that when the degree of persistence of a shock is private information of the central bank, and policy is discretionary in nature, it is optimal for the central bank not to reveal the future expected path of the output gap target. Perfect transparency unambiguously increases inflation and output volatility and thus lowers welfare. Keywords: Discretionary monetary policy; New Keynesian Phillips curve; Transparency; Kalman filter; Learning. http://www.sciencedirect.com/science/article/B6V85-4V34D3J-1/2/cb59a3994ad3b655244a06e7f06a82f5
Isard, Peter; Laxton, Douglas; Eliasson, Ann-Charlotte (2001). Journal of Economic Dynamics and Control 25(1-2), 115-148. http://www.sciencedirect.com/science/article/B6V85-418PPNR-4/2/b636864c40fb38602a477c66baefcd8e
Livesey, D. A. (1986). Journal of Economic Dynamics and Control 10(1-2), 99-107. http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-M/2/a2c0c0201488c74e4913130382660ec4
Makris, Miltiadis (2001). Journal of Economic Dynamics and Control 25(12), 1935-1950. http://www.sciencedirect.com/science/article/B6V85-43HBY8X-6/2/2493041356837df6fab4d011696d9a9c
Rouchier, Juliette; Bousquet, Francois; Requier-Desjardins, Melanie; Antona, Martine (2001). Journal of Economic Dynamics and Control 25(3-4), 527-559. http://www.sciencedirect.com/science/article/B6V85-419JHMW-9/2/6993fe45a072ba867142dccad684ab42
Kollmann, Robert; Zeugner, Stefan (2012). Journal of Economic Dynamics and Control 36(8), 1267-1283. Abstract: This paper explores the link between the leverage of the US financial sector, of households and of non-financial businesses, and real activity. We document that leverage is negatively correlated with the future growth of real activity, and positively linked to the conditional volatility of future real activity and of equity returns. The joint information in sectoral leverage series is more relevant for predicting future real activity than the information contained in any individual leverage series. Using in-sample regressions and out-of-sample forecasts, we show that the predictive power of leverage is roughly comparable to that of macro and financial predictors commonly used by forecasters. Leverage information would not have allowed one to predict the ‘Great Recession’ of 2008–2009 any better than conventional macro/financial predictors. Keywords: Leverage; Financial crisis; Forecasts; Real activity; Volatility. JEL: E32, E37, C53, G20. http://www.sciencedirect.com/science/article/pii/S0165188912000826
Bunn, Derek W.; Day, Christopher J. (2009). Journal of Economic Dynamics and Control 33(2), 363-376. Abstract: This paper develops a detailed computational model of price formation in the England and Wales electricity pool, as it operated for 11 years from 1990 to 2001. It is clear that during this period, the repeated nature of the daily auction, between a small number of generators, with a substantial amount of information in common, gave rise to a continuous evolution of learning and gaming in practice with no evidence of convergence to a stationary Nash solution. In terms of representing reality, a computational approach inspired by evolutionary economics can succeed in reflecting well the type of behaviour observed, to an extent that cannot be matched by alternative analytical models. Cycles of pricing appear in the model, as they seem to do in practice, yet average behaviour has been validated against the theoretical supply function results for the more stylised circumstances where analytical results are possible. The paper therefore makes a methodological contribution in the development of a model of competitive electricity markets inspired by computational learning and gaming. It also makes an applied contribution by providing a more realistic basis for identifying whether high market prices can be ascribed to problems of market structure or market conduct. Keywords: Auctions; Electricity; Computational learning; Market power. http://www.sciencedirect.com/science/article/B6V85-4T13CF8-1/2/81c81adc9d9a74a936cbcf1342498aab
Groot, Fons; Withagen, Cees; de Zeeuw, Aart (2003). Journal of Economic Dynamics and Control 28(2), 287-306. http://www.sciencedirect.com/science/article/B6V85-479TM0V-1/2/d20cf092c9022cee3b08550024eaeb6c
Berndsen, Ron; Daniels, Hennie (1994). Journal of Economic Dynamics and Control 18(1), 251-271. http://www.sciencedirect.com/science/article/B6V85-46MMW30-10/2/fb302f3ee70949a5cf0eb648ad48faf4
Lindsey, Robin (1990). Journal of Economic Dynamics and Control 14(1), 73-95. http://www.sciencedirect.com/science/article/B6V85-45F8Y2D-X/2/4c2182e8d5b98f2660d6b9629f18c05d
Joshi, Sumit (2007). Journal of Economic Dynamics and Control 31(2), 531-555. http://www.sciencedirect.com/science/article/B6V85-4JGBF1Y-1/2/09b209a25206251fc09291c2e402331f
Chang, Shun-Chiao; Wu, Ho-Mou (2006). Journal of Economic Dynamics and Control 30(2), 163-183. http://www.sciencedirect.com/science/article/B6V85-4FFX9CT-2/2/2cdede7e3764f490b3c58fde35c937fd
Challet, Damien (2008). Journal of Economic Dynamics and Control 32(1), 85-100. http://www.sciencedirect.com/science/article/B6V85-4N9MYFR-4/2/ec663b864b6b4ba9fc35f86b3e88df3c
Leban, Raymond; Lesourne, Jacques (1983). Journal of Economic Dynamics and Control 5(1), 201-234. http://www.sciencedirect.com/science/article/B6V85-4C47HD0-D/2/ff53e1b8999cb42aa3b8d96b051802f0
Calzolari, Giorgio (1983). Journal of Economic Dynamics and Control 5(1), 235-247. http://www.sciencedirect.com/science/article/B6V85-4C47HD0-F/2/cb9dcad439b86e97c60163c9e72112be
Tesfatsion, Leigh (2001). Journal of Economic Dynamics and Control 25(3-4), 419-457. http://www.sciencedirect.com/science/article/B6V85-419JHMW-6/2/fbae4596d745bc6f03c888152d5cfc3e
Lindset, Snorre; Lund, Arne-Christian (2007). Journal of Economic Dynamics and Control 31(4), 1081-1105. http://www.sciencedirect.com/science/article/B6V85-4K7FJ7B-1/2/5985ca3c8b527160923af255190b6ceb
Den Haan, Wouter J. (2001). Journal of Economic Dynamics and Control 25(9), 1451-1456. http://www.sciencedirect.com/science/article/B6V85-435CH46-7/2/df1bd2fc5f7c37486366a1699127f2bf
Nagurney, Anna; Zhang, Ding (1996). Journal of Economic Dynamics and Control 20(1-3), 43-62. http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-F/2/0516d81b37759e4e8b2a2b834b26b022
Tsur, Yacov; Zemel, Amos (2003). Journal of Economic Dynamics and Control 27(4), 551-572. http://www.sciencedirect.com/science/article/B6V85-4724Y30-3/2/6471b0f5e075f7024699f427f6c6a535
Laitner, John (1987). Journal of Economic Dynamics and Control 11(3), 331-357. http://www.sciencedirect.com/science/article/B6V85-4GP1TWK-4/2/689819038e2b8721f33109e38f841603
Perez-Sebastian, Fidel (2007). Journal of Economic Dynamics and Control 31(12), 3791-3821. http://www.sciencedirect.com/science/article/B6V85-4NB2SCW-1/2/d4084283d8e0e2708d39d4691fc4f161
Feigin, Paul; Landsberger, Michael (1981). Journal of Economic Dynamics and Control 3(1), 329-341. http://www.sciencedirect.com/science/article/B6V85-4D9X39C-T/2/891034aea279aebcb6b83a76f419545a
Miller, Marcus; Weller, Paul (1995). Journal of Economic Dynamics and Control 19(1-2), 279-302. http://www.sciencedirect.com/science/article/B6V85-3YB56MM-22/2/de228bff2922c5619a58d37c3e182ff4
Goetz, Renan-Ulrich; Hritonenko, Natali; Yatsenko, Yuri (2008). Journal of Economic Dynamics and Control 32(9), 3032-3053. http://www.sciencedirect.com/science/article/B6V85-4RKMHVW-1/2/4b1d0a50b6dd6c693a205049fa6f8104
Bar-Ilan, Avner; Strange, William C. (1998). Journal of Economic Dynamics and Control 22(3), 437-463. http://www.sciencedirect.com/science/article/B6V85-3SX82KJ-6/2/ce280f2479bbcf1ca1fddde1eee81a4d
Favard, Pascal; Karp, Larry (2004). Journal of Economic Dynamics and Control 28(4), 801-815. http://www.sciencedirect.com/science/article/B6V85-48BC1PP-1/2/c73a06b247cf50c82ececef047e94a8d
Gregory, Allan W.; Smith, Gregor W. (1996). Journal of Economic Dynamics and Control 20(6-7), 1007-1025. http://www.sciencedirect.com/science/article/B6V85-3VW1T3H-3/2/328552b3dce1b73824793c4bc3d5d70b
Ljungqvist, Lars; Park, Myungsoo; Stock, James H.; Watson, Mark W. (1988). Journal of Economic Dynamics and Control 12(2-3), 489-502. http://www.sciencedirect.com/science/article/B6V85-45MFRW4-T/2/5b49940eebf4c332bbabaa408c8eccf5
Lombardo, Giovanni; Sutherland, Alan (2007). Journal of Economic Dynamics and Control 31(2), 515-530. http://www.sciencedirect.com/science/article/B6V85-4JFGF55-3/2/c342f0ce59a828bcb48e90ae36208b01
Bruckner, Matthias; Schabert, Andreas (2006). Journal of Economic Dynamics and Control 30(12), 2823-2857. http://www.sciencedirect.com/science/article/B6V85-4HRMTTY-1/2/528bf8ceef53c320562127b341f9ae45
De Grauwe, Paul; Rovira Kaltwasser, Pablo (2012). Journal of Economic Dynamics and Control 36(8), 1176-1192. Abstract: It is traditionally assumed in finance models that the fundamental value of an asset is known with certainty. In this paper we depart from that assumption. We propose a simple model of the exchange rate in which agents have biased and unbiased beliefs about the fundamental rate. We show that such a model produces waves of optimism and pessimism unrelated to the underlying fundamental value. In addition, the model shows that in a world characterized by the existence of heterogeneous beliefs about the fundamental, exchange rate movements can be remarkably complex even if only fundamentalist traders operate in the market. Keywords: Foreign exchange market; Behavioral finance; Uncertainty about fundamentals. JEL: F31, C62. http://www.sciencedirect.com/science/article/pii/S0165188912000796
Highfield, Richard A.; O'Hara, Maureen; Smith, Bruce (1996). Journal of Economic Dynamics and Control 20(1-3), 479-519. http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-13/2/5925a7be806a7adcbc1add9b007ccbd0
Zhang, Jie; Zhang, Junsen (2007). Journal of Economic Dynamics and Control 31(11), 3545-3567. http://www.sciencedirect.com/science/article/B6V85-4MY6MYR-1/2/97bd4b0fdcf15ea86d0f54122b20ae5c
Sennewald, Ken (2007). Journal of Economic Dynamics and Control 31(4), 1106-1131. http://www.sciencedirect.com/science/article/B6V85-4KBX4DJ-1/2/acca1d725eb90a5565d00e3def049adf
Chang, Wen-ya (1999). Journal of Economic Dynamics and Control 23(8), 1225-1242. http://www.sciencedirect.com/science/article/B6V85-3X6B5WV-8/2/dd1041b40ef505a57046c29dd93835ff
Akdeniz, Levent; Dechert, W. Davis (1997). Journal of Economic Dynamics and Control 21(6), 981-1003. http://www.sciencedirect.com/science/article/B6V85-3SWYBJD-K/2/6ed0ca5065fb186e7f32e677b228fd31
Nagurney, Anna; Takayama, Takashi; Zhang, Ding (1995). Journal of Economic Dynamics and Control 19(1-2), 3-37. http://www.sciencedirect.com/science/article/B6V85-3YB56MM-1N/2/c2776118cd977eed40099344f17fa4ac
Kasanen, Eero (1982). Journal of Economic Dynamics and Control 4(1), 295-301. http://www.sciencedirect.com/science/article/B6V85-4D9X3DB-1S/2/920013917a9ace33a953540c73551175
Christou, Costas; Swamy, P. A. V. B.; Tavlas, George S. (1998). Journal of Economic Dynamics and Control 22(7), 977-1000. http://www.sciencedirect.com/science/article/B6V85-3V5MB4X-1/2/d401821f1aae4cc9df10dfa1fe17794b
Costa, O. L. V.; Paiva, A. C. (2002). Journal of Economic Dynamics and Control 26(6), 889-909. http://www.sciencedirect.com/science/article/B6V85-44KV265-1/2/3afc8fef3ccbf8d3baaf8dc22adbdb2b
Cellini, Roberto; Lambertini, Luca (2002). Journal of Economic Dynamics and Control 27(1), 51-62. http://www.sciencedirect.com/science/article/B6V85-46H21HP-4/2/c0f1a77b516eca870c8a6c87113b5a03
Dutta, Prajit K.; Majumdar, Mukul K.; Sundaram, Rangarajan K. (1994). Journal of Economic Dynamics and Control 18(6), 1069-1092. http://www.sciencedirect.com/science/article/B6V85-45MFRVH-2/2/0b2e3147e41dec5eacebb03f23906c74
Huang, Kevin X. D. (2002). Journal of Economic Dynamics and Control 27(2), 283-301. http://www.sciencedirect.com/science/article/B6V85-46SVXBT-6/2/30fe25a0feb58f3c572e90cb428f8079
Evans, George W.; Honkapohja, Seppo; Sargent, Thomas J. (1993). Journal of Economic Dynamics and Control 17(5-6), 705-721. http://www.sciencedirect.com/science/article/B6V85-45JK57J-1R/2/796f96ec61b424a903dfe1b4c44f79bc
Johdo, Wataru; Hashimoto, Ken-ichi (2005). Journal of Economic Dynamics and Control 29(8), 1449-1469. http://www.sciencedirect.com/science/article/B6V85-4DR1SGX-1/2/9d5e1240d782262120cb7a8930b0893f
Blanchet-Scalliet, Christophette; El Karoui, Nicole; Martellini, Lionel (2005). Journal of Economic Dynamics and Control 29(10), 1737-1764. http://www.sciencedirect.com/science/article/B6V85-4F924JC-1/2/f8accb11c343243e1be60c6c8fb5b36e
Pagan, A.R.; Pesaran, M. Hashem (2008). Journal of Economic Dynamics and Control 32(10), 3376-3395. http://www.sciencedirect.com/science/article/B6V85-4RX06XX-1/2/49500eb0bffefb5d577cd636e6d279dc
Carvalhais, Z.; Davis, M. H. A. (1986). Journal of Economic Dynamics and Control 10(1-2), 89-91. http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-J/2/2796d80a454fccd91a5be755867ace57
Huber, Jürgen; Kleinlercher, Daniel; Kirchler, Michael (2012). Journal of Economic Dynamics and Control 36(8), 1248-1266. Abstract: As the introduction of financial transaction taxes is increasingly discussed by political leaders we explore possible consequences such taxes could have on markets. Here we examine how “stylized facts”, namely fat tails and volatility clustering, are affected by different tax regimes in laboratory experiments. We find that leptokurtosis of price returns is highest and clustered volatility is weakest in unilaterally taxed markets (where tax havens exist). Instead, tails are slimmest and volatility clustering is strongest in tax havens. When an encompassing financial transaction tax is levied, stylized facts hardly change compared to a scenario with no tax on all markets. Keywords: Financial transaction tax; Stylized facts; Fat tails; Volatility clustering; Experiment. JEL: C91, E62, F31. http://www.sciencedirect.com/science/article/pii/S0165188912000838
Kydland, Finn E.; Prescott, Edward C. (1980). Journal of Economic Dynamics and Control 2(1), 79-91. http://www.sciencedirect.com/science/article/B6V85-4D9X3KF-37/2/f16468c5c32d9d28ce71dad45bcfff6c
Palokangas, Tapio (1996). Journal of Economic Dynamics and Control 20(5), 925-944. http://www.sciencedirect.com/science/article/B6V85-3VVVR8J-B/2/5c91ef47069b55ff12668492bbe49a6c
Michel, Philippe (1981). Journal of Economic Dynamics and Control 3(1), 97-118. http://www.sciencedirect.com/science/article/B6V85-4D9X39C-6/2/631f230e113976d39a5c4d3f5c8cd9ef
Ehtamo, Harri; Hamalainen, Raimo P. (1993). Journal of Economic Dynamics and Control 17(4), 659-678. http://www.sciencedirect.com/science/article/B6V85-45MFRY0-1S/2/3e4da823b6a0a156350684df4ec2f937
Chen, Been-Lon (2007). Journal of Economic Dynamics and Control 31(12), 3941-3964. http://www.sciencedirect.com/science/article/B6V85-4PKX5M3-1/2/299acff27228f766d347d1f647cf213b
Stanhouse, Bryan E.; Fackler, James S. (1980). Journal of Economic Dynamics and Control 2(1), 377-393. http://www.sciencedirect.com/science/article/B6V85-4D9X3KF-3W/2/2f55c904017b07f0264e565a44aff24f
Ganelli, Giovanni (2007). Journal of Economic Dynamics and Control 31(3), 1015-1036. http://www.sciencedirect.com/science/article/B6V85-4K07FJ5-2/2/e1a9166b59e44509b946b418a4057ecf
Miller, Ross M. (1990). Journal of Economic Dynamics and Control 14(2), 237-253. http://www.sciencedirect.com/science/article/B6V85-45KNJWV-4/2/d99a53f4bef792391a5ec8fda31ea278
Ambler, Steve; Paquet, Alain (1996). Journal of Economic Dynamics and Control 20(1-3), 237-256. http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-P/2/63d4bc91831793423c25715e108883c4
Kimura, Takeshi; Kurozumi, Takushi (2007). Journal of Economic Dynamics and Control 31(2), 399-431. http://www.sciencedirect.com/science/article/B6V85-4JFHDVC-1/2/65789d60e4199a690979042aa911b37f
Germain, Marc; Toint, Philippe; Tulkens, Henry; de Zeeuw, Aart (2003). Journal of Economic Dynamics and Control 28(1), 79-99. http://www.sciencedirect.com/science/article/B6V85-47287N6-6/2/0f498bbb8c4a978d9ace2397e89a3bbf
Aoki, Masanao (1979). Journal of Economic Dynamics and Control 1(1), 3-37. http://www.sciencedirect.com/science/article/B6V85-4DVNG1X-2/2/04a25a80bf156c6620c2705a2471d7f6
Claverie, Pierre; Szpiro, Daniel; Topol, Richard (1986). Journal of Economic Dynamics and Control 10(1-2), 157-161. http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-X/2/fa6fd0dc2dcaed7ef12c15d138bf06df
Basar, Tamer; Salmon, Mark (1990). Journal of Economic Dynamics and Control 14(1), 97-116. http://www.sciencedirect.com/science/article/B6V85-45F8Y2D-Y/2/26eb5e5c710e23ca7c9bac98ac5a0dc4
Gersbach, Hans; Schniewind, Achim (2008). Journal of Economic Dynamics and Control 32(5), 1381-1398. Abstract: We analyze whether different learning abilities of firms with respect to general equilibrium effects lead to different levels of unemployment. We consider a general equilibrium model, where firms in one sector compete à la Cournot and a real wage rigidity leads to unemployment. If firms consider only partial equilibrium effects when choosing quantities, the observation of general equilibrium feedback effects will lead to repeated quantity adjustments until a steady state is reached. When labor is mobile across industries, unemployment in the steady state is higher than when all general equilibrium effects are incorporated at once. The opposite result is true if labor is immobile. http://www.sciencedirect.com/science/article/B6V85-4NYD8JW-2/1/83f4861a14da98597b75b0efeba3541d
Kim, Jinill (2003). Journal of Economic Dynamics and Control 27(4), 533-549. http://www.sciencedirect.com/science/article/B6V85-4724Y30-2/2/d725c653295d9edf0ff275afe16fcfda
Wong, Kar-yiu; Yip, Chong Kee (1999). Journal of Economic Dynamics and Control 23(5-6), 699-726. http://www.sciencedirect.com/science/article/B6V85-3WF82KY-3/2/4be0a05199d274244b17996c6e658903
Gilli, Manfred; Pauletto, Giorgio (1998). Journal of Economic Dynamics and Control 22(8-9), 1275-1289. http://www.sciencedirect.com/science/article/B6V85-3TMR2BM-6/2/06de60a6b76b8523617b6c908c133664
Rabault, Guillaume (2002). Journal of Economic Dynamics and Control 26(2), 217-245. http://www.sciencedirect.com/science/article/B6V85-43Y9W8B-4/2/9f65e25d2797097361cb82cea5cb4a73
Darolles, Serge; Laurent, Jean-Paul (2000). Journal of Economic Dynamics and Control 24(11-12), 1721-1746. http://www.sciencedirect.com/science/article/B6V85-412RWNR-9/2/6477d5a245661833acf9b9abc39acfd1
Bivin, David G. (2008). Journal of Economic Dynamics and Control 32(7), 2118-2136. Abstract: This paper models the scale of the technology shocks as a decision variable whose value is determined by the production manager. It is shown that smaller shocks enhance profit in several ways and thus the firm has an incentive to adopt more reliable production technologies. The adoption of these technologies may account for the "good luck" hypothesis in which the stabilization of Gross Domestic Product (GDP) since 1984 is attributed to smaller shocks. It differs from this hypothesis in two respects. First, the reduced volatility should be permanent. Second, the stabilization does not require smaller intrinsic shocks to the economy. http://www.sciencedirect.com/science/article/B6V85-4PPW6XG-1/1/d8c8a4728524d4d3b4f33e14009f1558
Danthine, Jean-Pierre; Donaldson, John B.; Mehra, Rajnish (1992). Journal of Economic Dynamics and Control 16(3-4), 509-532. http://www.sciencedirect.com/science/article/B6V85-45R2GY4-M/2/b6f5daf9ae49a93c78cad9bc1fc88dfb
Matsumoto, Akio (2009). Journal of Economic Dynamics and Control 33(4), 832-842. Abstract: This paper reexamines Goodwin's business cycle model with a nonlinear acceleration principle that gives rise to cyclic oscillations when its stationary state is locally unstable. Fixed time delay in the investment is replaced by continuously distributed time delay. It is first demonstrated that the latter has a stronger stabilizing effect than the former and, second, that multiple limit cycles may coexist when the stationary state is locally stable. Keywords: Fixed time delay; Continuously distributed time delay; S-shaped investment function; Coexistence of multiple limit cycles. http://www.sciencedirect.com/science/article/B6V85-4V0TCY7-1/2/c3f0c81a149e87def21fc29ea779c663
Dixit, Avinash (1991). Journal of Economic Dynamics and Control 15(4), 657-673. http://www.sciencedirect.com/science/article/B6V85-45R2GXH-3/2/b77cd263ecf27c2298ab15941f996071
Lau, Sau-Him Paul (2009). Journal of Economic Dynamics and Control 33(3), 554-567. Abstract: In a recent paper, d'Albis [2007. Demographic structure and capital accumulation. Journal of Economic Theory 132, 411-434] shows that the effect of population growth on capital accumulation is ambiguous in overlapping-generations models with age-specific mortality rates, contrasting to the predicted negative effect in Diamond [1965. National debt in a neoclassical growth model. American Economic Review 55, 1126-1150] and Blanchard [1985. Debt, deficits, and finite horizons. Journal of Political Economy 93, 223-247]. The quantitative exercises of this paper indicate that while in principle a positive relation between population growth and capital accumulation is possible, this relation is practically always negative for industrial countries. Intuition based on capital dilution and aggregate saving effects is provided. This paper also complements d'Albis [2007] in characterizing the steady-state equilibrium in more familiar economic concepts. Keywords: Age-specific mortality rates; Overlapping generations; Population growth; Capital accumulation. http://www.sciencedirect.com/science/article/B6V85-4TB777J-2/2/979a2d957745836e4f7a8962fbb3edbc
Boukas, El-Kebir (1997). Journal of Economic Dynamics and Control 21(10), 1777-1780. http://www.sciencedirect.com/science/article/B6V85-3SX0PR1-B/2/c7c3b03d06dc0832198e243590a6ec3d
Decamps, Marc; De Schepper, Ann; Goovaerts, Marc (2009). Journal of Economic Dynamics and Control 33(3), 710-724. Abstract: This paper concerns optimal asset-liability management when the assets and the liabilities are modeled by means of correlated geometric Brownian motions as suggested in Gerber and Shiu [2003. Geometric Brownian motion models for assets and liabilities: from pension funding to optimal dividends. North American Actuarial Journal 7(3), 37-51]. In a first part, we apply singular stochastic control techniques to derive a free boundary equation for the optimal value creation as a growth of liabilities or as dividend payment to shareholders. We provide analytical solutions to the Hamilton-Jacobi-Bellman (HJB) optimality equation in a rather general context. In a second part, we study the convergence of the cash flows to the optimal value creation using spectral methods. For particular cases, we also provide a series expansion for the probabilities of bankruptcy in finite time. Keywords: Asset-liability management; HJB principle; Local time; Spectral theory; Free boundary problem. http://www.sciencedirect.com/science/article/B6V85-4TN82D1-3/2/c5714a28b3cba0db26825d2739465f9c
Dai, Min; Kwok, Yue Kuen (2005). Journal of Economic Dynamics and Control 29(9), 1495-1515. http://www.sciencedirect.com/science/article/B6V85-4DVBVTJ-1/2/776bc50f793b9495990fd7d559d96c5d
Everaert, Gerdie; Pozzi, Lorenzo (2007). Journal of Economic Dynamics and Control 31(4), 1160-1184. http://www.sciencedirect.com/science/article/B6V85-4K8S5J0-1/2/2ebfa8882240ae0fc1f2a98efee93b3d
Zvan, R.; Vetzal, K. R.; Forsyth, P. A. (2000). Journal of Economic Dynamics and Control 24(11-12), 1563-1590. http://www.sciencedirect.com/science/article/B6V85-412RWNR-4/2/4eccddca471fb10cd6ac7b187a9323fc
Petkovski, Djordjija B. (1987). Journal of Economic Dynamics and Control 11(2), 249-254. http://www.sciencedirect.com/science/article/B6V85-45JK54D-J/2/3712bbd2e9e9bd973b8977e95b83af41
Matilla-Garcia, Mariano (2007). Journal of Economic Dynamics and Control 31(12), 3889-3903. http://www.sciencedirect.com/science/article/B6V85-4NBR8H2-1/2/ff286a40f340919b27d9fde7d051ca10
Dow, James Jr.; Olson, Lars J. (1992). Journal of Economic Dynamics and Control 16(2), 207-223. http://www.sciencedirect.com/science/article/B6V85-45NHVY2-2/2/e43d317a15eff306d48d332393483440
Bao, Te; Hommes, Cars; Sonnemans, Joep; Tuinstra, Jan (2012). Journal of Economic Dynamics and Control 36(8), 1101-1120. Abstract: Recent studies suggest that the type of strategic environment or expectation feedback can have a large impact on whether the market can learn the rational fundamental price. We present an experiment where the fundamental price experiences large unexpected shocks. Markets with negative expectation feedback (strategic substitutes) quickly converge to the new fundamental, while markets with positive expectation feedback (strategic complements) do not converge, but show underreaction in the short run and overreaction in the long run. A simple evolutionary selection model of individual learning explains these differences in aggregate outcomes. Keywords: Expectation feedback; Under- and overreaction; Strategic substitutes and strategic complements; Heuristic switching model; Experimental economics. JEL: C92, G14, D84, D83, E37. http://www.sciencedirect.com/science/article/pii/S0165188912000772
Cardia, Emanuela; Michel, Philippe (2004). Journal of Economic Dynamics and Control 28(8), 1681-1701. http://www.sciencedirect.com/science/article/B6V85-48WJSH4-1/2/0bd6bbcd5a1fe4935e976254a8914ea3
Adam, Klaus; Evans, George W.; Honkapohja, Seppo (2006). Journal of Economic Dynamics and Control 30(12), 2725-2748. http://www.sciencedirect.com/science/article/B6V85-4HNYMBD-1/2/c5c307a8ad5bd5da8e42a46618161673
Brock, William A.; Hommes, Cars H. (1998). Journal of Economic Dynamics and Control 22(8-9), 1235-1274. http://www.sciencedirect.com/science/article/B6V85-3TMR2BM-5/2/20b09e1d825dc39cd5776911bf0fd700
Caliendo, Frank; Aadland, David (2007). Journal of Economic Dynamics and Control 31(4), 1392-1415. http://www.sciencedirect.com/science/article/B6V85-4KDBKXD-2/2/3b8ffc77ec85e80422db59424a499839
Ghiglino, Christian; Tvede, Mich (1995). Journal of Economic Dynamics and Control 19(3), 655-661. http://www.sciencedirect.com/science/article/B6V85-3YB56M4-1M/2/9d1ac1322ae55eed99faa7c429ef9dfd
Fukuda, Shin-ichi (1993). Journal of Economic Dynamics and Control 17(4), 589-620. http://www.sciencedirect.com/science/article/B6V85-45MFRY0-1N/2/dd885b55537af4246955e8b654ebec47
Khilnani, Arvind; Tse, Edison T.S. (1987). Journal of Economic Dynamics and Control 11(3), 461-463. http://www.sciencedirect.com/science/article/B6V85-4GP1TWK-D/2/167a7603cd745ceb127c8c81f2053c63
Thille, Henry (2003). Journal of Economic Dynamics and Control 27(4), 651-665. http://www.sciencedirect.com/science/article/B6V85-44CNPCP-1/2/1dc94e0b721482c3ce4d406ed44b75f4
Breton, Michele; St-Amour, Pascal; Vencatachellum, Desire (2003). Journal of Economic Dynamics and Control 27(5), 875-905. http://www.sciencedirect.com/science/article/B6V85-452WD22-1/2/e6ed1c62449f27713f73aac965d28b53
Mason, Charles F. (1989). Journal of Economic Dynamics and Control 13(3), 421-448. http://www.sciencedirect.com/science/article/B6V85-46X3RNB-5/2/b7c612596ea577f43b1d32a6dafdfe8b
Furukawa, Yuichi (2007). Journal of Economic Dynamics and Control 31(11), 3644-3670. http://www.sciencedirect.com/science/article/B6V85-4N516T2-1/2/38e96713683353d0bb70826f9375c19c
Cho, Jang-Ok; Cooley, Thomas F. (1994). Journal of Economic Dynamics and Control 18(2), 411-432. http://www.sciencedirect.com/science/article/B6V85-45CX0JG-7/2/fcdd30d1b158d5d058b70ad40d2239bc
Hodrick, Robert; Vassalou, Maria (2002). Journal of Economic Dynamics and Control 26(7-8), 1275-1299. http://www.sciencedirect.com/science/article/B6V85-459HNNF-B/2/7f612989e1e0327d6762bfb313f8c1f1
Boswijk, H. Peter; Hommes, Cars H.; Manzan, Sebastiano (2007). Journal of Economic Dynamics and Control 31(6), 1938-1970. http://www.sciencedirect.com/science/article/B6V85-4N44043-1/2/481138d80c80a321fd966a52b6413b06
Potzelberger, Klaus; Sogner, Leopold (2003). Journal of Economic Dynamics and Control 27(10), 1743-1770. http://www.sciencedirect.com/science/article/B6V85-461XK5K-1/2/edd6c2ad27d24102551ed6fbbfe3a41a
Anand, Kartik; Gai, Prasanna; Marsili, Matteo (2012). Journal of Economic Dynamics and Control 36(8), 1088-1100. Abstract: The breakdown of short-term funding markets was a key feature of the global financial crisis of 2007/2008. Drawing on ideas from global games and network growth, we show how network topology interacts with the funding structure of financial institutions to determine system-wide crises. Bad news about a financial institution can lead others to lose confidence in it and their withdrawals, in turn, trigger problems across the interbank network. Once broken, credit relations take a long time to re-establish as a result of common knowledge of the equilibrium. Our findings shed light on public policy responses during and after the crisis. Keywords: Interbank networks; Credit crisis; Liquidity freeze. JEL: C72, G01, G21. http://www.sciencedirect.com/science/article/pii/S0165188912000760
Baccara, Mariagiovanna; Battauz, Anna; Ortu, Fulvio (2006). Journal of Economic Dynamics and Control 30(1), 55-79. http://www.sciencedirect.com/science/article/B6V85-4FFX9CT-1/2/32e533bf5b7e511875b5e3d7eec97018
Hughes Hallett, A. J.; Piscitelli, Laura (2002). Journal of Economic Dynamics and Control 27(2), 303-327. http://www.sciencedirect.com/science/article/B6V85-46SVXBT-7/2/35364bcd12288bac4b0e850fdbd99d73
Rustem, B.; Zarrop, M.B. (1979). Journal of Economic Dynamics and Control 1(3), 283-300. http://www.sciencedirect.com/science/article/B6V85-4DJ3F4H-4/2/a940a5aaa8f4188ca1af18fba6361d6e
31(11), 2007, 3568–3590: LiCalzi, Marco; Pellizzari, Paolo
19(3), 1995, 621–653: Ghiglino, Christian; Tvede, Mich
22(3), 1998, 369–399: Hori, Hajime
31(12), 2007, 3986–4015: Elekdag, Selim; Tchakarov, Ivan

32(4), 2008, 1204–1211: Okuguchi, Koji; Yamazaki, Takeshi. http://www.sciencedirect.com/science/article/B6V85-4NTRT06-2/1/00cc73cbe922c197409d524ed3e7e0c8
A sufficient condition is derived for the global stability of a unique interior Nash equilibrium in an aggregative game. The condition is applied to investigate the global stability of the Nash-Cournot equilibrium in Cournot oligopoly without product differentiation and that of the Nash equilibrium in rent-seeking games.

36(10), 2012, 1626–1658: Grass, D. http://www.sciencedirect.com/science/article/pii/S0165188912000966
Numerous optimal control models analyzed in economics are formulated as discounted infinite-horizon problems in which the defining functions are nonlinear in both the states and the controls. As a consequence, solutions can often only be found numerically. Moreover, the long-run optimal solutions are mostly limit sets such as equilibria or limit cycles. Using these specific solutions, a BVP approach together with a continuation technique is used to calculate the parameter-dependent dynamic structure of the optimal vector field. We use a one-dimensional optimal control model of a fishery to exemplify the numerical techniques, but these methods are applicable to a much wider class of optimal control problems with a moderate number of state and control variables.
Keywords: Optimal vector field; BVP; Continuation; Multiple optimal solutions; Threshold point. JEL: C02, C61, C63.

26(6), 2002, 1009–1027: Fethke, Gary; Jagannathan, Raj
26(2), 2002, 271–302: Kwark, Noh-Sun
30(4), 2006, 623–654: Coakley, Jerry; Fuertes, Ana-Maria
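The Grass entry above relies on a continuation technique: solve the problem at one parameter value, then use that solution as the starting guess at the next. The Python fragment below illustrates the warm-starting idea on a toy steady-state equation; it is not the paper's fishery model or its BVP formulation.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy steady-state equation f(x; c) = 0; the parameter c is stepped in small
# increments and each solve is warm-started at the previous solution.
def steady_state(x, c):
    return x ** 3 - 2.0 * x + c

x = fsolve(steady_state, 1.0, args=(0.0,))[0]   # solve at the first parameter value
branch = []
for c in np.linspace(0.0, 1.0, 21):
    x = fsolve(steady_state, x, args=(c,))[0]   # continuation: warm start from last x
    branch.append((c, x))
print(f"end of branch: c = {branch[-1][0]:.2f}, x = {branch[-1][1]:.4f}")
```

Warm starting keeps the solver on the same solution branch as the parameter moves, which is what makes tracing a parameter-dependent structure feasible.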
36(8), 2012, 1121–1141: Battiston, Stefano; Delli Gatti, Domenico; Gallegati, Mauro; Greenwald, Bruce; Stiglitz, Joseph E. http://www.sciencedirect.com/science/article/pii/S0165188912000899
The recent financial crisis poses the challenge to understand how systemic risk arises endogenously and what architecture can make the financial system more resilient to global crises. This paper shows that a financial network can be most resilient for intermediate levels of risk diversification, and not when diversification is maximal, as generally thought so far. This finding holds in the presence of the financial accelerator, i.e. when negative variations in the financial robustness of an agent tend to persist in time because they have adverse effects on the agent's subsequent performance through the reaction of the agent's counterparties.
Keywords: Systemic risk; Network models; Contagion; Financial acceleration; Financial crisis. JEL: D85, G01, G21.

25(1-2), 2001, 85–113: Eicher, Theo S.; Turnovsky, Stephen J.
8(3), 1984, 265–275: Barron, John M.; Loewenstein, Mark A.; Black, Dan A.
23(5-6), 1999, 773–795: Devereux, Michael B.
30(1), 2006, 111–141: Fernandes, Marcelo

36(10), 2012, 1462–1476: Della Seta, Marco; Gryglewicz, Sebastian; Kort, Peter M. http://www.sciencedirect.com/science/article/pii/S0165188912000863
We study optimal investment in technologies characterized by the learning curve. There are two investment patterns, depending on the shape of the learning curve. If the learning process is slow, firms invest relatively late and on a larger scale. If the curve is steep, firms invest earlier and on a smaller scale. We further demonstrate that learning investment differs greatly from investment in technologies without learning effects. Learning investments generate substantial initial losses and are very sensitive to downside risk. We show that the technologies most susceptible to losses and risk are those with an intermediate speed of learning.
Keywords: Learning-curve technology; Investment timing; Investment size; Real options. JEL: C61, D92.

23(9-10), 1999, 1299–1327: Kozicki, Sharon; Tinsley, P. A.
28(3), 2003, 531–553: Khalaf, Lynda; Saphores, Jean-Daniel; Bilodeau, Jean-Francois
31(4), 2007, 1300–1325: Chang, Sheng-Kai
32(11), 2008, 3631–3660: Chen, Yu; Cosimano, Thomas F.; Himonas, Alex A. http://www.sciencedirect.com/science/article/B6V85-4S4JYKH-1/2/dfd76d0a600534f68596c9588f87a88d
Analytic methods are developed for solving asset pricing models. Campbell and Cochrane's [1999. By force of habit, a consumption-based explanation of aggregate stock market behavior. Journal of Political Economy 107, 205-251] habit persistence model provides a prototypical example to illustrate this method. When the parameters involved satisfy certain conditions, the integral equation of this model has a solution in the space of continuous functions that grow exponentially at infinity. However, the parameters advocated by Campbell and Cochrane do not satisfy one of these conditions. The existence problem is removed by restricting the price-dividend function to avoid extreme values of dividend growth. Existence and uniqueness of the solution in the space of continuous and bounded functions is then proved. Using complex analysis, the price-dividend function is also shown to be analytic in a region large enough to cover all relevant values of dividend growth. Next, a numerical method is presented for computing higher-order polynomial approximations of the solution. Finally, a uniform upper bound on the error of these approximations is derived. An intensive search of the parameter space results in no parameter values for which the solution matches the historic equity premium and Sharpe ratio within Campbell and Cochrane's model.
Keywords: Analyticity; Asset pricing; Habit.

31(10), 2007, 3370–3395: White, T. Kirk
27(5), 2003, 853–873: Kwan, Yum K.; Lai, Edwin L.-C.

36(9), 2012, 1364–1371: Bénassy, Jean-Pascal. http://www.sciencedirect.com/science/article/pii/S0165188912001091
It is often believed that governments should either abstain from activist policies or, if they lead such policies, that these policies should somehow be "stabilizing", in the sense of reducing the volatilities of some endogenous variables. We construct a model with explicit foundations where the optimal policies are activist and make both employment and output more volatile than in the no-intervention case.
Keywords: Destabilizing optimal policies; Business cycles; Monetary policy. JEL: E32, E52.

32(1), 2008, 303–319: Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki; Stanley, H. Eugene
21(8-9), 1997, 1323–1352: Broadie, Mark; Glasserman, Paul
21(2-3), 1997, 525–550: Hindy, Ayman; Huang, Chi-fu; Zhu, Steven H.
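The Chen-Cosimano-Himonas entry above computes polynomial approximations of the price-dividend function together with a uniform error bound. As a stand-in illustration of that approximation step, the sketch below fits a Chebyshev polynomial to a smooth placeholder function and reports the maximum error on a fine grid; the target function and the degree are assumptions, not the paper's model.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a degree-12 Chebyshev polynomial at Chebyshev nodes and measure the
# worst-case error on a fine grid; f is a smooth placeholder function.
f = lambda g: np.exp(0.8 * g) / (1.0 + g ** 2)
m = 33
nodes = np.cos(np.pi * (np.arange(m) + 0.5) / m)     # Chebyshev nodes on [-1, 1]
coeffs = C.chebfit(nodes, f(nodes), deg=12)
grid = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(C.chebval(grid, coeffs) - f(grid)))
print(f"uniform approximation error on the grid: {err:.2e}")
```

For analytic targets such approximations converge geometrically in the degree, which is why a uniform error bound of the kind the paper derives is attainable.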
22(1), 1997, 67–86: Amir, Rabah
30(2), 2006, 205–227: Grinols, Earl; Lin, Hwan C.
19(1-2), 1995, 303–325: Bertocchi, Graziella; Kehagias, Athanasios
29(3), 2005, 449–468: El Karoui, Nicole; Jeanblanc, Monique; Lacoste, Vincent
23(1), 1998, 125–158: Kim, Se-Jik
14(1), 1990, 53–63: Zeira, Joseph
22(4), 1998, 489–501: Armstrong, John; Black, Richard; Laxton, Douglas; Rose, David

36(9), 2012, 1303–1321: Emms, Paul. http://www.sciencedirect.com/science/article/pii/S0165188912000279
During the accumulation phase of a defined-contribution pension scheme, a scheme member invests part of their stochastic income in a portfolio of a stock and a bond in order to build up sufficient funds for retirement. It is assumed that the remainder of their salary pre-retirement is consumed, an annuity is purchased at retirement, and the stock allocation and consumption pre-retirement maximise the total expected lifetime consumption using a CARA utility function. Perfect correlation between the scheme member's income and the stock price leads to analytical expressions for the controls for a general income model. If the correlation is imperfect, then analytical controls are found for two particular stochastic income models.
Keywords: Defined-contribution pension; Lifecycle model; Stochastic income; Replacement ratio. JEL: C61, D31, D52, D91.
30(9-10), 2006, 1671–1686: Kuzin, Vladimir
23(5-6), 1999, 747–772: Ambler, Steve; Cardia, Emanuela; Farazli, Jeannine
19(4), 1995, 787–811: Zapatero, Fernando
20(6-7), 1996, 1073–1100: Cabrales, Antonio; Hoshi, Takeo
32(9), 2008, 2788–2808: Shintani, Mototsugu
28(4), 2004, 661–690: Svensson, Lars E. O.; Woodford, Michael
13(2), 1989, 283–300: Currier, Kevin M.
32(1), 2008, 137–155: Levy, Moshe
14(1), 1990, 65–71: Pujol, Thierry
23(9-10), 1999, 1355–1386: Leisen, Dietmar P. J.
13(4), 1989, 569–595: Clarke, Harry R.; Reed, William J.
14(1), 1990, 187–192: Goldberg, Matthew S.; Borjas, George J.
31(7), 2007, 2374–2397: Dawid, Herbert; Day, Richard H.
31(5), 2007, 1672–1696: Navas, Jorge; Kort, Peter M.
10(1-2), 1986, 149–155: Aoki, Masanao; Havenner, Arthur
22(1), 1997, 109–121: Hairault, Jean-Olivier; Langot, Francois; Portier, Franck
20(9-10), 1996, 1609–1639: Feve, Patrick; Langot, Francois
12(2-3), 1988, 447–461: Clark, Peter K.
28(4), 2004, 841–856: Alonso-Carrera, Jaime; Freire-Seren, Maria Jesus
22(5), 1998, 729–762: Engwerda, Jacob C.
18(2), 1994, 317–344: Haurie, Alain; Roche, Michel
25(12), 2001, 1973–1987: Jorgensen, Steffen; Zaccour, Georges
19(3), 1995, 599–619: Montrucchio, Luigi
21(1), 1997, 23–73: de la Fuente, Angel
26(6), 2002, 919–936: Behrens, Doris A.; Caulkins, Jonathan P.; Tragler, Gernot; Feichtinger, Gustav
19(5-7), 1995, 1033–1064: Jovanovic, Boyan; Nyarko, Yaw
31(11), 2007, 3459–3477: Tsur, Yacov; Zemel, Amos
3(1), 1981, 307–317: Bradford, David F.; Kelehan, Harry H.
27(11-12), 2003, 1961–1991: Cadiou, Loic; Dees, Stephane; Laffargue, Jean-Pierre
31(7), 2007, 2293–2316: Moslener, Ulf; Requate, Till
5(1), 1983, 151–171: Abrams, Richard K.; Froyen, Richard T.; Waud, Roger N.
23(9-10), 1999, 1387–1424: Chiarella, Carl; El-Hassan, Nadima; Kucera, Adam
23(8), 1999, 1207–1224: Glomm, Gerhard; Ravikumar, B.
26(5), 2002, 869–887: Mehra, Rajnish; Sah, Raaj
30(12), 2006, 2749–2774: Tripier, Fabien
24(2), 2000, 165–188: Ferris, Michael C.; Munson, Todd S.
14(3-4), 1990, 685–708: Lipman, Barton L.

36(9), 2012, 1402–1413: Wang, Hefei. http://www.sciencedirect.com/science/article/pii/S0165188912000656
This paper formulates a continuous-time information transmission model in which an altruistic sender privately observes a stochastic state variable, and incurs a communication cost when she broadcasts a message. We characterize the sender's optimal announcement strategy using an ordinary differential equation. We prove the optimality of the sender's strategies using a martingale verification argument and show that the sender's optimal strategy involves sending discrete messages. Furthermore, we apply the model to the timing decision of credit rating announcements and provide a framework to study various aspects of rating announcements, such as the probability of rating reversals and the expected time before a rating change.
Keywords: Dynamic information transmission; Costly talk; Credit rating announcement. JEL: D81, D83, C61.

10(1-2), 1986, 51–57: Hughes Hallett, A. J.
14(2), 1990, 435–450: Berndsen, Ron; Daniels, Hennie

36(9), 2012, 1372–1401: Xue, Yi; Gençay, Ramazan. http://www.sciencedirect.com/science/article/pii/S0165188912000620
The rate of information diffusion and, consequently, price discovery are conditional not only upon the design of the market microstructure but also the informational structure. This paper presents a market microstructure model showing that an increasing number of information hierarchies among informed competitive traders leads to a slower information diffusion rate and informational inefficiency. The model illustrates that informed traders may prefer trading with each other rather than with noise traders in the presence of information hierarchies. The empirical investigation using transaction data on China's equity market supports the prediction that information hierarchies decrease the speed of price discovery.
Keywords: Information hierarchies; Information diffusion rate; Price discovery; Momentum. JEL: G10, G11, D43, D82.

33(4), 2009, 798–816: Gamba, Andrea; Tesser, Matteo. http://www.sciencedirect.com/science/article/B6V85-4TRK0J7-1/2/8fbfc9ae358a54490d1a13b8dce8ee69
We propose a numerical approach for structural estimation of a class of discrete (Markov) decision processes emerging in real options applications. The approach is specifically designed to account for two typical features of aggregate data sets in real options: the endogeneity of firms' decisions and the unobserved heterogeneity of firms. The approach extends the nested fixed point algorithm by Rust [1987. Optimal replacement of GMC bus engines: an empirical model of Harold Zurcher. Econometrica 55(5), 999-1033; 1988. Maximum likelihood estimation of discrete control processes. SIAM Journal of Control and Optimization 26(5), 1006-1024]: both the nested optimization algorithm and the integration over the distribution of the unobserved heterogeneity are accommodated using a simulation method based on a polynomial approximation of the value function and on recursive least squares estimation of the coefficients. The Monte Carlo study shows that omitting unobserved heterogeneity produces a significant estimation bias because the model can be highly non-linear with respect to the parameters.
Keywords: Real options; Markov decision processes; Discrete decision processes; Monte Carlo methods.

19(4), 1995, 747–786: Turnovsky, Stephen J.; Fisher, Walter H.
31(5), 2007, 1451–1472: Auerbach, Alan J.; Hassett, Kevin
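The Gamba-Tesser entry above combines simulation with a polynomial approximation of the value function estimated by least squares. The sketch below shows the generic fitted-value-iteration pattern on an invented one-dimensional stopping problem; it is a schematic of the approach, not the paper's estimator, which nests this inside a likelihood and integrates over unobserved heterogeneity.

```python
import numpy as np

# Fitted value iteration for an invented stopping problem: stop and collect
# x - 1, or continue; the value function is a degree-4 polynomial refit by
# least squares after every Bellman update.
rng = np.random.default_rng(2)
beta, deg = 0.95, 4
x = rng.uniform(0.0, 2.0, 500)                 # simulated state sample
coef = np.zeros(deg + 1)
for _ in range(200):
    x_next = np.clip(0.9 * x + rng.normal(0.0, 0.1, x.size), 0.0, 2.0)
    cont = beta * np.polyval(coef, x_next)     # approximate continuation value
    v = np.maximum(x - 1.0, cont)              # Bellman update on the sample
    coef = np.polyfit(x, v, deg)               # least-squares polynomial refit
stop_gap = (x - 1.0) - beta * np.polyval(coef, x)
print(f"approximate exercise threshold: {x[np.argmin(np.abs(stop_gap))]:.2f}")
```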
31(3), 2007, 861–886: Zeng, Jinli; Zhang, Jie
31(12), 2007, 3904–3940: Alexopoulos, Michelle
22(8-9), 1998, 1467–1485: Nagurney, Anna; Zhang, Ding
31(7), 2007, 2317–2349: Sircar, Ronnie; Xiong, Wei
31(5), 2007, 1781–1800: Hugonnier, Julien; Morellec, Erwan
22(7), 1998, 1001–1026: Jin, Xing
24(11-12), 2000, 1703–1719: Aliprantis, C. D.; Brown, D. J.; Werner, J.
31(5), 2007, 1728–1752: Leach, Andrew J.
30(9-10), 2006, 1441–1444: Bullard, Jim; Diks, Cees; Wagener, Florian
11(4), 1987, 465–481: Anderson, Gary
28(8), 2004, 1661–1680: Costello, Christopher; Karp, Larry
36(8), 2012, 1193–1211: Franke, Reiner; Westerhoff, Frank. http://www.sciencedirect.com/science/article/pii/S0165188912000802
In the framework of small-scale agent-based financial market models, the paper starts out from the concept of structural stochastic volatility, which derives from different noise levels in the demand of fundamentalists and chartists and the time-varying market shares of the two groups. It advances several different specifications of the endogenous switching between the trading strategies and then estimates these models by the method of simulated moments (MSM), where the choice of the moments reflects the basic stylized facts of the daily returns of a stock market index. In addition to the standard version of MSM with a quadratic loss function, we also take into account how often a great number of Monte Carlo simulation runs happen to yield moments that are all contained within their empirical confidence intervals. The model contest along these lines reveals a strong role for a (tamed) herding component. The quantitative performance of the winner model is so good that it may provide a standard for future research.
Keywords: Method of simulated moments; Moment coverage ratio; Herding; Discrete choice approach; Transition probability approach. JEL: D84, G12, G14, G15.

23(5-6), 1999, 675–698: Van Long, Ngo; Shimomura, Koji
23(7), 1999, 909–928: Kiviet, Jan F.; Phillips, Garry D. A.; Schipp, Bernhard

33(2), 2009, 454–462: Aliprantis, C.D.; Camera, G.; Ruscitti, F. http://www.sciencedirect.com/science/article/B6V85-4T26309-1/2/b86bb5f3b944100792860f76ee674590
In this study we offer a new approach to proving the differentiability of the value function, which complements and extends the literature on dynamic programming. This result is then applied to the analysis of equilibrium in the recent class of monetary economies developed in [Lagos, R., Wright, R., 2005. A unified framework for monetary theory and policy analysis. Journal of Political Economy 113, 463-484]. For this type of environment we demonstrate that the value function is differentiable, which guarantees that the marginal value of money balances is well defined.
Keywords: Value function; Optimal plans; Money.

27(11-12), 2003, 2151–2170: Engle-Warnick, Jim
26(7-8), 2002, 1159–1193: Alexander, Gordon J.; Baptista, Alexandre M.
20(6-7), 1996, 1027–1049: Evans, Paul

32(6), 2008, 2031–2060: Tosun, Mehmet Serkan. http://www.sciencedirect.com/science/article/B6V85-4PT1SBR-1/1/0c51f21618e0f22a5b13d8b14188ff75
Previous analyses of population aging mainly focused on the social security implications of the aging trend. This paper addresses aging in an open economy framework with two regions that have politically responsive fiscal policy regarding education finance. Demographic shocks start an economic growth process, but results are sensitive to a critical parameter in the model that indicates the return to education spending. Low values of this parameter are associated with less favorable economic outcomes. Hence, a policy implication emerges that enhancing the education system might pay off in terms of easing the negative growth and welfare consequences of expected demographic shocks.
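The Franke-Westerhoff entry above estimates by the method of simulated moments: simulate the model, compute a set of moments, and minimize a quadratic distance to their empirical counterparts. The fragment below shows that loss in its simplest form, with a placeholder data-generating process and made-up target moments; the paper's moment set, weighting matrix, and moment coverage ratio are not reproduced.

```python
import numpy as np

# Quadratic MSM loss: distance between simulated and (assumed) empirical
# moments, minimized here by a crude grid search over one parameter.
rng = np.random.default_rng(3)
m_emp = np.array([0.0, 1.0, 0.2])        # assumed mean, variance, |r| autocorrelation

def moments(r):
    a = np.abs(r)
    return np.array([r.mean(), r.var(), np.corrcoef(a[:-1], a[1:])[0, 1]])

def msm_loss(sigma, W=np.eye(3)):
    r = sigma * rng.standard_normal(20_000)  # placeholder data-generating process
    d = moments(r) - m_emp
    return d @ W @ d

grid = np.linspace(0.5, 1.5, 11)
best = min(grid, key=msm_loss)
print(f"grid-search MSM estimate of sigma: {best:.2f}")
```

Note that the Gaussian placeholder cannot match the |r| autocorrelation target, which is exactly the kind of stylized fact the agent-based specifications in the paper compete to reproduce.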
15(3), 1991, 515–538: Mizrach, Bruce
15(3), 1991, 425–453: Detemple, Jerome B.
13(2), 1989, 271–282: Lichtenberg, Frank R.
11(3), 1987, 285–312: Sotomayor, Marilda de Oliveira
22(5), 1998, 703–728: Reichlin, Pietro; Rustichini, Aldo
23(5-6), 1999, 727–746: Eicher, T. S.
28(6), 2004, 1115–1148: Bodie, Zvi; Detemple, Jerome B.; Otruba, Susanne; Walter, Stephan
29(8), 2005, 1331–1360: Becsi, Zsolt; Li, Victor E.; Wang, Ping
10(3), 1986, 367–394: Coles, Jeffrey L.
21(10), 1997, 1645–1667: Lau, Sau-Him Paul
20(1-3), 1996, 433–444: Lozada, Gabriel A.
18(3-4), 1994, 897–908: West, Kenneth D.; Wilcox, David W.
22(1), 1997, 141–178: Siow, Aloysius; Zhu, Xiaodong
27(11-12), 2003, 2059–2094: Batini, Nicoletta; Harrison, Richard; Millard, Stephen P.
18(6), 1994, 1051–1068: Imrohoroglu, Selahattin

33(3), 2009, 745–757: Chen, Been-Lon; Peng, Shin-Kun; Wang, Ping. http://www.sciencedirect.com/science/article/B6V85-4TNWGRW-1/2/1161cf6caf611b419bc224423f5c5f07
This paper considers heterogeneities in preferences over the local public good, human capital formation, and residential locations as primary underlying forces of economic stratification in an endogenously growing economy. We construct a two-period overlapping-generations model with two regions and various forms of human capital externalities where altruistic agents determine the intertemporal allocation of time, investment in a child's education, and residential location. We fully characterize a balanced growth equilibrium with no migration across generations to elaborate on how changes in preference, human capital accumulation, production, and interregional commuting parameters may affect the equilibrium stratification outcome in the long run.
Keywords: Economic stratification; Human capital evolution and income growth; Local public good and taxes.

32(11), 2008, 3520–3537: Muthuraman, Kumar. http://www.sciencedirect.com/science/article/B6V85-4S2F5H4-1/2/0f29bedf6a46e616037ab41ece623861
This paper describes a method to solve the free-boundary problem that arises in the pricing of American options. Most numerical methods for American option pricing exploit the representation of the option price as the expected pay-off under the risk-neutral measure and calculate the price for a given time to expiration and stock price. They do not solve the related free-boundary problem explicitly. The advantage of solving the free-boundary problem is that it provides the entire price function as well as the optimal exercise boundary explicitly. Our approach, which we term the Moving Boundary Approach, is based on using a boundary guess and the value associated with the guess to construct an improved boundary. It is also shown that, on iteration, the sequence of boundaries converges monotonically to the optimal exercise boundary. Examples illustrating the convergence behavior, as well as discussions providing insight into the method, are also presented. Finally, we compare runtimes and speeds with other methods that solve the free-boundary problem and compute the optimal boundaries explicitly, such as the front-fixing method, the penalty method, the method based on integral representations, and the method by Brennan and Schwartz [1977. The valuation of American put options. Journal of Finance 32 (2), 449-462].
Keywords: American option pricing; Stochastic control; Hamilton-Jacobi-Bellman equation; Free boundary.

29(6), 2005, 1025–1041: Malchow-Moller, Nikolaj; Thorsen, Bo Jellesmark
28(11), 2004, 2277–2295: Aadland, David; Huang, Kevin X. D.
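The Muthuraman entry above solves the American-option free-boundary problem by iterating on boundary guesses. As a simpler, swapped-in way to exhibit the same object, the sketch below prices an American put on a binomial tree and reads off the early-exercise boundary; the parameters are illustrative, and this is not the paper's moving-boundary algorithm.

```python
import numpy as np

# Binomial-tree American put; at each step we record the highest stock price
# at which immediate exercise beats continuation (the exercise boundary).
S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 200
dt = T / N
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
q = (np.exp(r * dt) - d) / (u - d)              # risk-neutral up probability
S = S0 * u ** np.arange(N + 1) * d ** np.arange(N, -1, -1)
V = np.maximum(K - S, 0.0)                      # payoff at maturity
boundary = np.full(N, np.nan)
for n in range(N - 1, -1, -1):
    S = S0 * u ** np.arange(n + 1) * d ** np.arange(n, -1, -1)
    V = np.exp(-r * dt) * (q * V[1:] + (1 - q) * V[:-1])   # continuation values
    exercise = K - S
    opt = exercise > V
    if opt.any():
        boundary[n] = S[opt].max()
    V = np.maximum(V, exercise)
print(f"American put value: {V[0]:.3f}")
print(f"exercise boundary at mid-horizon: {boundary[N // 2]:.2f}")
```

The tree delivers the boundary only on a discrete grid; the appeal of a free-boundary method like the paper's is that it produces the boundary and the whole price function directly.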
31(11), 2007, 3671–3698: Fujita, Shigeru; Ramey, Garey
30(12), 2006, 2661–2670: Desai, Meghnad; Henry, Brian; Mosley, Alexander; Pemberton, Malcolm
8(2), 1984, 137–149: Norman, Alfred Lorn
26(3), 2002, 451–481: Engwerda, Jacob C.; van Aarle, Bas; Plasmans, Joseph E. J.
31(6), 2007, 1808–1843: Lux, Thomas; Kaizoji, Taisei
25(11), 2001, 1827–1840: Lin, Wen-Zhung; Yang, C. C.
14(1), 1990, 183–185: Kalaba, R.; Tesfatsion, L.
2(1), 1980, 353–376: Stutzer, Michael J.
13(3), 1989, 319–337: Tomiyama, Ken; Rossana, Robert J.
18(3-4), 1994, 511–538: El-Gamal, Mahmoud A.
28(7), 2004, 1437–1460: Dangl, Thomas; Wirl, Franz
36(10), 2012, 1566–1584: Airaudo, Marco; Zanna, Luis-Felipe. http://www.sciencedirect.com/science/article/pii/S0165188912001376
We present an extensive analysis of the consequences for global equilibrium determinacy in flexible-price open economies of implementing active interest rate rules, i.e., monetary rules where the nominal interest rate responds more than proportionally to inflation. We show that conditions under which these rules generate aggregate instability by inducing liquidity traps, endogenous cycles, and chaotic dynamics depend on specific characteristics of open economies. In particular, rules that respond to expected future inflation are more prone to induce endogenous cyclical and chaotic dynamics the more open the economy is to trade.
Keywords: Small open economy; Interest rate rules; Taylor rules; Multiple equilibria; Chaos and endogenous fluctuations. JEL: E32, E52, F41.

22(3), 1998, 465–482: Wu, Yangru; Zhang, Junxi
24(4), 2000, 483–499: Ozyildirim, Suheyla; Alemdar, Nedim M.

36(10), 2012, 1448–1461: Chakravorty, Ujjayant; Leach, Andrew; Moreaux, Michel. http://www.sciencedirect.com/science/article/pii/S0165188912000954
We study how environmental regulation in the form of a cap on aggregate emissions from a fossil fuel (e.g., coal) interacts with the arrival of a clean substitute (e.g., solar energy). The cost of the substitute is assumed to decrease with cumulative use because of learning-by-doing. We show that optimal energy prices may initially increase because of pollution regulation, but fall due to learning, and rise again because of scarcity of the resource, finally falling after the transition to the clean substitute. Thus nonrenewable resource prices may exhibit cyclical behavior even in a purely deterministic setting.
Keywords: Climate change; Energy markets; Environmental externalities; Nonrenewable resources; Technological change.

28(4), 2004, 691–706: Li, Victor E.; Chang, Chia-Ying
10(1-2), 1986, 281–289: Karakitsos, Elias
31(1), 2007, 325–359: Heijdra, Ben J.; Ligthart, Jenny E.
19(8), 1995, 1449–1469: Conlon, John R.
23(8), 1999, 1077–1098: Gowrisankaran, Gautam
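The Airaudo-Zanna entry above concerns global, not merely local, dynamics under active interest rate rules. The toy iteration below uses the standard flexible-price Fisher-equation map with a zero lower bound, under which an active rule admits a second, deflationary steady state besides the target; it is a textbook-style illustration of the mechanism, not the paper's open-economy model, and all parameter values are assumptions.

```python
import numpy as np

beta = 0.99                      # household discount factor
pi_star = 1.02                   # gross inflation target

def rule(pi):                    # active Taylor rule (exponent > 1) with a zero lower bound
    return max(1.0, (pi_star / beta) * (pi / pi_star) ** 1.5)

def step(pi):                    # flexible-price Fisher equation: R_t = pi_{t+1} / beta
    return beta * rule(pi)

for pi0 in (pi_star, 1.015):     # start exactly at the target, then slightly below
    pi = pi0
    for _ in range(300):
        pi = step(pi)
    print(f"start {pi0}: long-run gross inflation {pi:.4f}")
```

Starting at the target the economy stays there, but the target is locally unstable under the active rule; starting slightly below, the path slides to the zero-bound steady state with gross inflation beta, the liquidity trap.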
33(2), 2009, 296–316: Tetlow, Robert J.; von zur Muehlen, Peter. http://www.sciencedirect.com/science/article/B6V85-4SY6W26-1/2/45f39d41f560d0db44aaaedcf7be3d8d
In recent years, the learnability of rational expectations equilibria (REE) and determinacy of economic structures have rightfully joined the usual performance criteria among the sought-after goals of policy design. Some contributions to the literature, including Bullard and Mitra [2002. Learning about monetary policy rules. Journal of Monetary Economics 49 (6), 1105-1139] and Evans and Honkapohja [2006. Monetary Policy, Expectations, and Commitment. Scandinavian Journal of Economics 108, 15-38], have made significant headway in establishing certain features of monetary policy rules that facilitate learning. However, a treatment of policy design for learnability in worlds where agents have potentially misspecified their learning models has yet to surface. This paper provides such a treatment. We begin with the notion that because the profession has yet to settle on a consensus model of the economy, it is unreasonable to expect private agents to have collective rational expectations. We assume that agents have only an approximate understanding of the workings of the economy and that their learning of the reduced forms of the economy is subject to potentially destabilizing perturbations. The issue is then whether a central bank can design policy to account for perturbations and still assure the learnability of the model. We provide two examples, one of which, the canonical New Keynesian business cycle model, serves as a test case. For different parameterizations of a given policy rule, we use structured singular value analysis (from robust control theory) to find the largest ranges of misspecifications that can be tolerated in a learning model without compromising convergence to an REE. In addition, we study the cost, in terms of steady-state performance, of a central bank that acts to robustify learnability on the transition path to REE.
Keywords: Monetary policy; Learning; E-stability; Learnability; Robust control.

28(1), 2003, 171–181: Evans, George W.; Honkapohja, Seppo
27(4), 2003, 573–597: Coto-Martinez, Javier; Dixon, Huw
28(1), 2003, 141–151: Hazari, Bharat R.; Sgro, Pasquale M.
21(7), 1997, 1229–1258: Naik, Narayan Y.
29(1-2), 2005, 321–334: Klemm, Konstantin; Eguiluz, Victor M.; Toral, Raul; San Miguel, Maxi
8(1), 1984, 19–32: Hori, Hajime
12(1), 1988, 173–179: Brillet, Jean-Louis; Laurent, Jean-Paul
30(9-10), 2006, 1647–1669: Diks, Cees; Panchenko, Valentyn

36(10), 2012, 1431–1447: Landi, M.; Sodini, M. http://www.sciencedirect.com/science/article/pii/S0165188912001248
We propose an evolutionary analysis of a voting game where citizens have a preference for conformism that adds to the instrumental preference for the electoral outcome. Multiple equilibria arise, and some generate high turnout. Simulations of best response dynamics show that high turnout is asymptotically stable if conformism matters, but its likelihood depends on the reference group for conformism: high turnout is more likely when voters care about their own group's choice, as this better overrides the free rider problem of voting games. Comparative statics on the voting cost distribution and the groups' composition are also carried out.
Keywords: Turnout; Coordination games; Poisson games; Conformism; Selection dynamics. JEL: D72, C72, C73.

25(3-4), 2001, 395–417: Arifovic, Jasmina
18(3-4), 1994, 807–813: Benhabib, Jess; Rustichini, Aldo
16(3-4), 1992, 601–620: Heaton, John; Lucas, Deborah
26(6), 2002, 937–961: van den Broek, W. A.
26(1), 2002, 99–116: Hartwick, John M.; Karp, Larry; Long, Ngo Van
1(1), 1979, 59–83: Craine, Roger
32(9), 2008, 2745–2787: Brock, William; Xepapadeas, Anastasios
19(3), 1995, 553–568: Blenman, L. P.; Cantrell, R. S.; Fennell, R. E.; Parker, D. F.; Reneke, J. A.; Wang, L. F. S.; Womer, N. K.
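The Landi-Sodini entry above studies turnout through simulations of best response dynamics with a conformity term. The sketch below iterates a damped best response for the turnout share with a stylized pivot probability; the functional forms and parameters are assumptions for illustration, and the paper's Poisson-game structure is not modeled.

```python
import numpy as np

# Damped best-response dynamic for the turnout share x: a citizen votes if
# her cost is below the voting payoff, which includes a conformity term.
B, gamma, n = 1.0, 0.6, 100          # outcome benefit, conformity weight, electorate size
costs = np.linspace(0.0, 1.0, 400)   # heterogeneous voting costs
x = 0.9                              # initial turnout share
for _ in range(200):
    pivotal = np.exp(-n * (x - 0.5) ** 2)        # stylized probability of being pivotal
    payoff = 0.5 * B * pivotal + gamma * x       # instrumental value plus conformity
    x = 0.8 * x + 0.2 * np.mean(costs < payoff)  # damped best response
print(f"long-run turnout share: {x:.2f}")
```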
23(3), 1998, 333–369: Das, Sanjiv Ranjan
31(5), 2007, 1535–1556: Caulkins, Jonathan P.; Hartl, Richard F.; Kort, Peter M.; Feichtinger, Gustav
26(6), 2002, 963–984: Boileau, Martin
19(3), 1995, 569–597: Reffett, Kevin L.
8(2), 1984, 197–209: Elbers, Chris; Withagen, Cees
21(4-5), 1997, 831–852: Abel, Andrew B.; Eberly, Janice C.
12(2-3), 1988, 365–384: Harvey, A. C.; Stock, James H.
31(11), 2007, 3503–3544: Guidolin, Massimo; Timmermann, Allan
26(2), 2002, 171–185: Sogner, Leopold; Mitlohner, Hans
31(5), 2007, 1633–1671: Medio, Alfredo; Raines, Brian
27(6), 2003, 909–935: Gencay, Ramazan; Dacorogna, Michel; Olsen, Richard; Pictet, Olivier
23(8), 1999, 1155–1175: Asea, Patrick K.; Zak, Paul J.
26(7-8), 2002, 1301–1321: Bhattacharya, Sudipto; Plank, Manfred; Strobl, Gunter; Zechner, Josef
26(11), 2002, 1955–1974: Biederman, Daniel K.
20(6-7), 1996, 979–1006: Moore, Bartholomew; Schaller, Huntley
20(9-10), 1996, 1809–1813: Cappelen, Adne

36(10), 2012, 1551–1565: Hirose, Ken-Ichi; Ikeda, Shinsuke. http://www.sciencedirect.com/science/article/pii/S0165188912000929
Using a two-good, two-country model, we examine macroeconomic adjustment by allowing for decreasing and increasing marginal impatience (DMI and IMI). In the reference case where both countries have IMI, a negative output shock in one country lowers the interest rate and both countries' welfare levels in steady state, whereas, when either one country has DMI, the negative income shock raises the interest rate, thereby benefiting the IMI country and harming the DMI one in steady state. In a country with either IMI or DMI, the Harberger–Laursen–Metzler effect takes place if negative 'welfare-supporting' effects dominate positive 'income-compensating' effects.
Keywords: Decreasing (increasing) marginal impatience; Two-country economy; Terms of trade; Current account. JEL: F41, F32, E00.

12(2-3), 1988, 505–522: Campbell, John Y.; Shiller, Robert J.
27(10), 2003, 1917–1938: Onozaki, Tamotsu; Sieg, Gernot; Yokoo, Masanori
1(1), 1979, 85–99: Turnovsky, Stephen J.
12(2-3), 1988, 333–346: Cerchi, Marlene; Havenner, Arthur
24(11-12), 2000, 1641–1701: Foldes, Lucien
Conditional asymmetry; Residuals; Wald; Runs; 8 2012 36 1229 1247 C32 G14 E44 http://www.sciencedirect.com/science/article/pii/S0165188912000814 Lambert, Philippe Laurent, Sébastien Veredas, David oai:RePEc:eee:dyncon:v:26:y:2002:i:7-8:p:1323-13522012-12-25RePEc:eee:dyncon article 7-8 2002 26 7 1323 1352 http://www.sciencedirect.com/science/article/B6V85-459HNNF-D/2/ 0a72db1ee283bc7c1c05e185774e3305 Constantinides, George M. Perrakis, Stylianos oai:RePEc:eee:dyncon:v:30:y:2006:i:11:p:2339-23612012-12-25RePEc:eee:dyncon article 11 2006 30 11 2339 2361 http:// www.sciencedirect.com/science/article/B6V85-4H6XKRY-1/2/e69598f377cb2904b58cb1be02b2021b Jorgensen, Steffen Kort, Peter M. Dockner, Engelbert J. oai:RePEc:eee:dyncon:v:32:y:2008:i:12:p:3820-38462012-12-25RePEc:eee:dyncon article We consider a mean-reverting risk-neutral short rate process model with a vector of subordinated drift processes that accounts for the random effect of the arrival of new information. It is assumed that the market is efficient with no arbitrage opportunities. Closed form expressions for the price in nominal and in real terms of a discount bond are obtained. We define a risk-neutral exchange rate model with correlated subordinated drift and volatility processes that reflect the effect of the arrival of new information pertaining to the countries involved. The cases of complete and incomplete exchange markets with no arbitrage opportunities are considered. Mean reverting Ito process Stochastic volatility Economic shock process Risk premia Stochastic interest rate Risk-neutral process Subordinated processes Brownian motion models Foreign exchange markets Incomplete/complete markets 12 2008 32 12 3820 3846 http://www.sciencedirect.com/science/article/B6V85-4SDX2NJ-1/2/3de49fff9b9d902e1cc396360c2f69a1 Jagannathan, Raj oai:RePEc:eee:dyncon:v:10:y:1986:i:3:p:395-4142012-12-25RePEc:eee:dyncon article 3 1986 10 9 395 414 http://www.sciencedirect.com/science/article/B6V85-4C40JJV-2V/2/36bc1292db096b6d3a8c259d036e2cef Toman, Michael A. oai:RePEc:eee:dyncon:v:10:y:1986:i:1-2:p:297-2982012-12-25RePEc:eee:dyncon article 1-2 1986 10 6 297 298 http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-1M/2/ 014c125f168c9e45c01a3411e6d8b417 Meese, Richard Rogoff, Kenneth oai:RePEc:eee:dyncon:v:11:y:1987:i:4:p:483-4982012-12-25RePEc:eee:dyncon article 4 1987 11 12 483 498 http://www.sciencedirect.com/ science/article/B6V85-4GP1TWJ-2/2/96402859ed4a6f03f531d305c27da8ab Kulatilaka, Nalin Marcus, Alan J. oai:RePEc:eee:dyncon:v:23:y:1999:i:8:p:1099-11312012-12-25RePEc:eee:dyncon article 8 1999 23 8 1099 1131 http://www.sciencedirect.com/science/article/B6V85-3X6B5WV-2/2/3882feeb952b0d84e54dc46318c6bf13 do Val, Joao B. R. 
Basar, Tamer oai:RePEc:eee:dyncon:v:22:y:1997:i:1:p:49-662012-12-25RePEc:eee:dyncon article 1 1997 22 11 49 66 http://www.sciencedirect.com/science/article/B6V85-3SX6H28-3/2/ab31ffcc8a0353b315f3af1dd93c5c33 Cannarsa, Piermarco Giannini, Massimo Tessitore, Maria Elisabetta oai:RePEc:eee:dyncon:v:20:y:1996:i:6-7:p:1145-11762012-12-25RePEc:eee:dyncon article 6-7 1996 20 1145 1176 http:// www.sciencedirect.com/science/article/B6V85-3VW1T3H-9/2/2f8a8bc405773bb01db28bbda9c86a59 Gordon, Stephen oai:RePEc:eee:dyncon:v:31:y:2007:i:6:p:1801-18072012-12-25RePEc:eee:dyncon article 6 2007 31 6 1801 1807 http://www.sciencedirect.com/science/article/B6V85-4NJP97X-1/2/96092ac2dfab95f0f07a16f67203e605 Markose, Sheri Arifovic, Jasmina Sunder, Shyam oai:RePEc:eee:dyncon:v:27:y:2002:i:2:p:243-2692012-12-25RePEc:eee:dyncon article 2 2002 27 12 243 269 http://www.sciencedirect.com/science/article/B6V85-46SVXBT-4/2/00dcc625677a14135dc5afd50ccfe702 Wielhouwer, Jacco L. Waegenaere, Anja De Kort, Peter M. oai:RePEc:eee:dyncon:v:32:y:2008:i:1:p:259-2782012-12-25RePEc:eee:dyncon article 1 2008 32 1 259 278 http://www.sciencedirect.com/science/ article/B6V85-4NC5V0P-1/2/5734aaa4fd05a464c60a395dda1c51d8 Iori, Giulia De Masi, Giulia Precup, Ovidiu Vasile Gabbi, Giampaolo Caldarelli, Guido oai:RePEc:eee:dyncon:v:32:y:2008:i:11:p:3682-36942012-12-25RePEc:eee:dyncon article In this paper, we characterize subjective probability beliefs leading to a higher equilibrium market price of risk. We establish that Abel's result on the impact of doubt on the risk premium is not correct in general; see Abel [2002. An exploration of the effects of pessimism and doubt on asset returns. Journal of Economic Dynamics and Control 26, 1075-1092]. We introduce, on the set of subjective probability beliefs, market-price-of-risk dominance concepts and we relate them to well-known dominance concepts used for comparative statics in portfolio choice analysis. In particular, the necessary first-order conditions on subjective probability beliefs in order to increase the market price of risk for all nondecreasing utility functions appear as equivalent to the monotone likelihood ratio property. Pessimism Optimism Doubt Stochastic dominance Risk premium Market price of risk Riskiness Portfolio dominance Monotone likelihood ratio 11 2008 32 11 3682 3694 http://www.sciencedirect.com/science/article/B6V85-4S4JYKH-2/2/cd9625039be7ddd1f056530177651140 Jouini, E. Napp, C. oai:RePEc:eee:dyncon:v:36:y:2012:i:8:p:1077-10872012-12-25RePEc:eee:dyncon article Fifty-two research interviews were conducted with money managers controlling over $500 billion. This paper presents detailed material from one interview to argue interviews usefully describe their shared reality and provide information about the conditions of action facing financial decision-makers with implications for aggregate behaviour. Their shared reality was dominated by “radical” uncertainty and information ambiguity which severely limited the scope for “fully rational” decision-making. How they managed to commit to decisions was by creating narratives. The study suggests it may be useful to reconsider prejudices against the usefulness of talking to individual economic agents about what they actually do. 
Financial markets; Interviews; Economic methodology; Narrative; Uncertainty; 8 2012 36 1077 1087 B4 G1 http://www.sciencedirect.com/science/article/pii/S0165188912000851 Tuckett, David oai:RePEc:eee:dyncon:v:31:y:2007:i:2:p:703-7202012-12-25RePEc:eee:dyncon article 2 2007 31 2 703 720 http://www.sciencedirect.com/science/article/B6V85-4K9C54Y-2/2/ b0351a7de8be2890873a43c74f21bb11 Kawaguchi, Kazuhito Morimoto, Hiroaki oai:RePEc:eee:dyncon:v:13:y:1989:i:3:p:449-4702012-12-25RePEc:eee:dyncon article 3 1989 13 7 449 470 http:// www.sciencedirect.com/science/article/B6V85-46X3RNB-6/2/2ba6e0c948dd9e754ced20c87ec393f3 Danthine, Jean-Pierre Donaldson, John B. Mehra, Rajnish oai:RePEc:eee:dyncon:v:17:y:1993:i:1-2:p:181-2052012-12-25RePEc:eee:dyncon article 1-2 1993 17 181 205 http://www.sciencedirect.com/science/article/B6V85-46MMW27-9/2/3b84641066283c6328a3bb87beb2d959 Berninghaus, Siegfried Seifert-Vogt, Hans Gunther oai:RePEc:eee:dyncon:v:21:y:1997:i:2-3:p:575-6022012-12-25RePEc:eee:dyncon article 2-3 1997 21 575 602 http://www.sciencedirect.com/science/article/ B6V85-3SWV8HG-G/2/f26fda6cc5eba2a9f429a6401f02cb39 Dutta, Prajit K. oai:RePEc:eee:dyncon:v:31:y:2007:i:2:p:473-4912012-12-25RePEc:eee:dyncon article 2 2007 31 2 473 491 http://www.sciencedirect.com/ science/article/B6V85-4JGJJ0B-1/2/d77d2e9d22238861ed7e06a7e1f79a53 Gomis-Porqueras, Pere Haro, Alex oai:RePEc:eee:dyncon:v:28:y:2004:i:5:p:861-8872012-12-25RePEc:eee:dyncon article 5 2004 28 2 861 887 http://www.sciencedirect.com/science/article/B6V85-48BTYTB-2/2/4829210be1f34766717648835397c0b5 Muzzioli, Silvia Torricelli, Costanza oai:RePEc:eee:dyncon:v:30:y:2006:i:9-10:p:1445-14892012-12-25RePEc:eee:dyncon article 9-10 2006 30 1445 1489 http://www.sciencedirect.com/science/article/B6V85-4JYKKKY-1/2/ 74e03d09be72272d0e420a5b7b256c91 Benigno, Pierpaolo Woodford, Michael oai:RePEc:eee:dyncon:v:27:y:2003:i:10:p:1699-17372012-12-25RePEc:eee:dyncon article 10 2003 27 8 1699 1737 http:// www.sciencedirect.com/science/article/B6V85-45XT75D-2/2/f2cfcacdd4783b9dee326f65d99dde2d Jondeau, Eric Rockinger, Michael oai:RePEc:eee:dyncon:v:23:y:1999:i:7:p:1029-10642012-12-25RePEc:eee:dyncon article 7 1999 23 6 1029 1064 http://www.sciencedirect.com/science/article/B6V85-3WRBPDW-5/2/0474bb6ebd535265b6119289af129ae7 Basak, Suleyman oai:RePEc:eee:dyncon:v:27:y:2003:i:11-12:p:2171-21932012-12-25RePEc:eee:dyncon article 11-12 2003 27 9 2171 2193 http://www.sciencedirect.com/science/article/B6V85-4753NRK-1/2/ aa49fc959639b95c580f693c2874d1cc Bischi, G. -I. Dawid, H. Kopel, M. oai:RePEc:eee:dyncon:v:19:y:1995:i:1-2:p:155-1792012-12-25RePEc:eee:dyncon article 1-2 1995 19 155 179 http://www.sciencedirect.com /science/article/B6V85-3YB56MM-1W/2/81b31c3123aa707cf334198ed9b4cb74 Hommes, Cars H. Nusse, Helena E. Simonovits, Andras oai:RePEc:eee:dyncon:v:19:y:1995:i:8:p:1429-14482012-12-25RePEc:eee:dyncon article 8 1995 19 11 1429 1448 http://www.sciencedirect.com/science/article/B6V85-3YB56J6-6/2/841fbec53ed1f357f12c84bfec10726e Ireland, Peter N. oai:RePEc:eee:dyncon:v:3:y:1981:i:1:p:119-1402012-12-25RePEc:eee:dyncon article 1 1981 3 11 119 140 http://www.sciencedirect.com/science/article/B6V85-4D9X39C-7/2/d1e0c7c3aa86070b09d05b8e008993de Bourguignon, Francoise Sethi, Suresh P. 
oai:RePEc:eee:dyncon:v:31:y:2007:i:12:p:3860-38802012-12-25RePEc:eee:dyncon article 12 2007 31 12 3860 3880 http://www.sciencedirect.com/science/article/ B6V85-4N6FV68-1/2/2eff6ef52f0632c13ba9b9fe731ee999 Dai, Min Kwok, Yue Kuen You, Hong oai:RePEc:eee:dyncon:v:29:y:2005:i:9:p:1517-15452012-12-25RePEc:eee:dyncon article 9 2005 29 9 1517 1545 http:// www.sciencedirect.com/science/article/B6V85-4DVW3RX-1/2/b97735a511108eb9fcf1ea020675b367 Chakravorty, Ujjayant Krulce, Darrell Roumasset, James oai:RePEc:eee:dyncon:v:21:y:1997:i:10:p:1627-16442012-12-25RePEc:eee:dyncon article 10 1997 21 8 1627 1644 http://www.sciencedirect.com/science/article/B6V85-3SX0PR1-4/2/ a4e07bf1685f28e2b4080189d0d4c33b Corsetti, Giancarlo oai:RePEc:eee:dyncon:v:12:y:1988:i:1:p:19-252012-12-25RePEc:eee:dyncon article 1 1988 12 3 19 25 http://www.sciencedirect.com/science/article/ B6V85-45JK55T-W/2/d27331bc2ba0436389b5468e4d64ab4a Marquez, Jaime oai:RePEc:eee:dyncon:v:3:y:1981:i:1:p:217-2342012-12-25RePEc:eee:dyncon article 1 1981 3 11 217 234 http://www.sciencedirect.com/ science/article/B6V85-4D9X39C-H/2/7e5aa75fa68e4396be35324b1721880f Craine, Roger Havenner, Arthur oai:RePEc:eee:dyncon:v:36:y:2012:i:9:p:1414-14302012-12-25RePEc:eee:dyncon article The maximin criterion defines the highest utility level that can be sustained in an intergenerational equity perspective. The viability approach makes it possible to characterize all the economic trajectories sustaining a given, not necessarily maximal, utility level. In this paper, we exhibit the strong links between maximin and viability: we show that the value function of the maximin problem can be obtained in the viability framework, and that the maximin path is a particular viable path. This result allows us to extend the recommendations of the maximin approach beyond optimality, to characterize the sustainability of economic trajectories which differ from the maximin path. Attention is especially paid to non-negative net investment at maximin accounting prices, which is shown to be necessary to maintain the productive capacity of the economy, whether the development path is optimal or not. Our results provide a new theoretical ground to account for sustainability in imperfect economies, based on maximin prices. Sustainability; Maximin; Viability; Dynamics; Optimality; Genuine savings; 9 2012 36 1414 1430 C61 E21 Q01 Q56 http://www.sciencedirect.com/science/ article/pii/S0165188912000668 Doyen, L. Martinet, V. oai:RePEc:eee:dyncon:v:27:y:2002:i:1:p:157-1802012-12-25RePEc:eee:dyncon article 1 2002 27 11 157 180 http://www.sciencedirect.com/science/article /B6V85-46H21HP-9/2/1d68ab69eed315a610b6b2b4b33956f1 Louberge, Henri Villeneuve, Stephane Chesney, Marc oai:RePEc:eee:dyncon:v:10:y:1986:i:1-2:p:75-812012-12-25RePEc:eee:dyncon article 1-2 1986 10 6 75 81 http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-G/2/923fabfd404c529cadcc6b657929cf9f Becker, R. Dwolatzky, B. Karakitsos, E. Rustem, B. 
oai:RePEc:eee:dyncon:v:28:y:2003:i:2:p:331-3482012-12-25RePEc:eee:dyncon article 2 2003 28 11 331 348 http://www.sciencedirect.com/science/article/B6V85-47G3XV0-2/2/ecbeb3b33d530948becefd562e888604 Amilon, Henrik Bermin, Hans-Peter oai:RePEc:eee:dyncon:v:13:y:1989:i:3:p:485-4972012-12-25RePEc:eee:dyncon article 3 1989 13 7 485 497 http://www.sciencedirect.com/science/article/B6V85-46X3RNB-8/2/ fdef11db147528b5adcef2e5a9159f82 Zhang, Wei-Bin oai:RePEc:eee:dyncon:v:30:y:2006:i:12:p:2693-27242012-12-25RePEc:eee:dyncon article 12 2006 30 12 2693 2724 http://www.sciencedirect.com/science/ article/B6V85-4HPD3HK-3/2/8ee35aa0b6b7356bec21266474b7977a Angelini, Elena Henry, Jerome Marcellino, Massimiliano oai:RePEc:eee:dyncon:v:10:y:1986:i:1-2:p:93-972012-12-25RePEc:eee:dyncon article 1-2 1986 10 6 93 97 http://www.sciencedirect.com/science/article/B6V85-4D8W2RC-K/2/a2f6f2d4163e274041c7b26fc3e4ef0d Levine, Paul oai:RePEc:eee:dyncon:v:29:y:2005:i:8:p:1313-13292012-12-25RePEc:eee:dyncon article 8 2005 29 8 1313 1329 http://www.sciencedirect.com/science/article/B6V85-4DTKKK8-2/2/3916b460926c1bcc98383d172f22e049 Kamihigashi, Takashi oai:RePEc:eee:dyncon:v:36:y:2012:i:10:p:1520-15332012-12-25RePEc:eee:dyncon article We study the effects that the Maastricht Treaty, the creation of the ECB, and the Euro changeover had on the dynamics of European business cycles using a panel VAR and data from 10 European countries—seven from the Euro area and three outside of it. There are changes in the features of European business cycles and in the transmission of shocks. They precede the three events of interest and are more linked to a general process of European convergence and synchronization. Business cycles; European Monetary Union; Panel VAR; Structural changes; 10 2012 36 1520 1533 C15 C33 E32 E42 http://www.sciencedirect.com/science/article/pii/S0165188912000942 Canova, Fabio Ciccarelli, Matteo Ortega, Eva oai:RePEc:eee:dyncon:v:26:y:2002:i:6:p:985-10072012-12-25RePEc:eee:dyncon article 6 2002 26 6 985 1007 http://www.sciencedirect.com/science/article/B6V85-44KV265-6/2/1bd7748aab6a525f9cf2fe40121b087a Femminis, Gianluca oai:RePEc:eee:dyncon:v:20:y:1996:i:5:p:879-9042012-12-25RePEc:eee:dyncon article 5 1996 20 5 879 904 http://www.sciencedirect.com/science/article/B6V85-3VVVR8J-8/2/ e0bfcb6d18bc827aaaea7dc5dea68575 Fanizza, Domenico oai:RePEc:eee:dyncon:v:30:y:2006:i:11:p:2217-22602012-12-25RePEc:eee:dyncon article 11 2006 30 11 2217 2260 http://www.sciencedirect.com/science/ article/B6V85-4HG69H7-2/2/4a17ea597d7f096c97d81085e4bff57d Collard, Fabrice Feve, Patrick Ghattassi, Imen oai:RePEc:eee:dyncon:v:12:y:1988:i:1:p:181-1872012-12-25RePEc:eee:dyncon article 1 1988 12 3 181 187 http://www.sciencedirect.com/science/article/B6V85-45JK55T-1N/2/78c8573a3015f5e1eb80cc250e4e1d2c Hsu, Shih-Hsun Stefanou, Spiro E. oai:RePEc:eee:dyncon:v:30:y:2006:i:3:p:361-3912012-12-25RePEc:eee:dyncon article 3 2006 30 3 361 391 http://www.sciencedirect.com/science/article/B6V85-4HD8B3Y-1/2/25a1045190aa191683cf53544c295431 El-Gamal, Mahmoud A. Ryu, Deockhyun oai:RePEc:eee:dyncon:v:15:y:1991:i:4:p:607-6262012-12-25RePEc:eee:dyncon article 4 1991 15 10 607 626 http://www.sciencedirect.com/science/article/B6V85-45R2GXH-1/ 2/cebcf926eacff8ea4eb2e5c3f8a40ea2 Eichenbaum, Martin oai:RePEc:eee:dyncon:v:31:y:2007:i:10:p:3348-33692012-12-25RePEc:eee:dyncon article 10 2007 31 10 3348 3369 http://www.sciencedirect.com/science/ article/B6V85-4MX4VM8-1/2/0a0d3e5a53baba6ef28106cd4a7b6bb5 Marrero, Gustavo A. 
Novales, Alfonso oai:RePEc:eee:dyncon:v:31:y:2007:i:7:p:2168-21952012-12-25RePEc:eee:dyncon article 7 2007 31 7 2168 2195 http://www.sciencedirect.com/science/article/B6V85-4KTVNV4-1/2/50bb7247c837d80165843643998ecf3d Chellathurai, Thamayanthi Draviam, Thangaraj oai:RePEc:eee:dyncon:v:32:y:2008:i:10:p:3192-32172012-12-25RePEc:eee:dyncon article 10 2008 32 10 3192 3217 http://www.sciencedirect.com/science/article/B6V85-4RS43GB-1/2/ 4c277c5751317bb50ee111134cbf8571 Meng, Rujing oai:RePEc:eee:dyncon:v:25:y:2001:i:1-2:p:245-2792012-12-25RePEc:eee:dyncon article 1-2 2001 25 1 245 279 http://www.sciencedirect.com/science/article/ B6V85-418PPNR-8/2/2446fa2cf4e702a728c982c8059d2298 Tetlow, Robert J. von zur Muehlen, Peter oai:RePEc:eee:dyncon:v:24:y:2000:i:11-12:p:1747-17822012-12-25RePEc:eee:dyncon article 11-12 2000 24 10 1747 1782 http://www.sciencedirect.com/science/article/B6V85-412RWNR-B/2/10902c7369660310d0696ca96f51aaf2 Tan, Ken Seng Boyle, Phelim P. oai:RePEc:eee:dyncon:v:16:y:1992:i:2:p:289-3152012-12-25RePEc:eee:dyncon article 2 1992 16 4 289 315 http://www.sciencedirect.com/science/article/B6V85-45NHVY2-6/2/685187d61d0504480df1c1b957cfef59 Kollintzas, Tryphon Zhou, Ruilin oai:RePEc:eee:dyncon:v:25:y:2001:i:3-4:p:459-5022012-12-25RePEc:eee:dyncon article 3-4 2001 25 3 459 502 http://www.sciencedirect.com/science/article/B6V85-419JHMW-7/ 2/2c0bd8653ad20d0f1ff7775f0f7d5cbd Kirman, Alan P. Vriend, Nicolaas J. oai:RePEc:eee:dyncon:v:23:y:1999:i:4:p:519-5372012-12-25RePEc:eee:dyncon article 4 1999 23 2 519 537 http:// www.sciencedirect.com/science/article/B6V85-3VF9C8K-2/2/33b1363196e18651cb543f62a416cbd8 Croix, David de la Michel, Philippe oai:RePEc:eee:dyncon:v:32:y:2008:i:11:p:3661-36812012-12-25RePEc:eee:dyncon article This paper investigates monetary policy design when central bank and private-sector expectations differ. Private agents learn adaptively; the central bank has a possibly misspecified model of the economy. Successful implementation of optimal policy using inflation targeting rules requires the central bank to have complete knowledge of private agents' learning behavior. If the central bank mistakenly assumes private agents to have rational expectations when in fact they are learning, then policy rules frequently lead to divergent learning dynamics. However, if the central bank does not correctly understand agents' behavior, stabilization policy is best implemented by controlling the path of the price level rather than the inflation rate. 
Adaptive learning Monetary policy Targeting rules 11 2008 32 11 3661 3681 http://www.sciencedirect.com/science/article/B6V85-4SCD9XX-1/2/ 7343ef73f36418a883afc8da130a49d6 Preston, Bruce oai:RePEc:eee:dyncon:v:23:y:1999:i:9-10:p:1605-16322012-12-25RePEc:eee:dyncon article 9-10 1999 23 9 1605 1632 http://www.sciencedirect.com/science/ article/B6V85-3Y9RKX5-H/2/af43c7c2125c2861379ff5fb8890606b Feltovich, Nick oai:RePEc:eee:dyncon:v:32:y:2008:i:10:p:3166-31912012-12-25RePEc:eee:dyncon article 10 2008 32 10 3166 3191 http:// www.sciencedirect.com/science/article/B6V85-4RRNXNG-1/2/2bbd63423801f1ec8dbadbedb722546b Paustian, Matthias Stoltenberg, Christian oai:RePEc:eee:dyncon:v:30:y:2006:i:11:p:2305-23382012-12-25RePEc:eee:dyncon article 11 2006 30 11 2305 2338 http://www.sciencedirect.com/science/article/B6V85-4H74KXN-1/2/ 6710449037adf52f070e9e498a81ae25 Vega-Redondo, Fernando oai:RePEc:eee:dyncon:v:21:y:1997:i:1:p:1-222012-12-25RePEc:eee:dyncon article 1 1997 21 1 1 22 http://www.sciencedirect.com/science/article/ B6V85-3T7HKH4-1/2/c9bc99199cd8326be1f3bf30646957c4 Jones, Larry E. Manuelli, Rodolfo E. oai:RePEc:eee:dyncon:v:32:y:2008:i:8:p:2428-24522012-12-25RePEc:eee:dyncon article 8 2008 32 8 2428 2452 http:/ /www.sciencedirect.com/science/article/B6V85-4PR3G6X-2/2/f55783b2eebbe2896fd381f378dafb2e Alvarez-Lois, Pedro Harrison, Richard Piscitelli, Laura Scott, Alasdair oai:RePEc:eee:dyncon:v:31:y:2007:i:4:p:1326-13582012-12-25RePEc:eee:dyncon article 4 2007 31 4 1326 1358 http://www.sciencedirect.com/science/article/B6V85-4KF6BM1-1/2/ 24417d5a31cc8f191e94586d2ca7411d Pivetta, Frederic Reis, Ricardo oai:RePEc:eee:dyncon:v:30:y:2006:i:9-10:p:1569-15872012-12-25RePEc:eee:dyncon article 9-10 2006 30 1569 1587 http:// www.sciencedirect.com/science/article/B6V85-4JVT1J3-1/2/182bd5ac3115721a5a6ed00bb3fbf11e Dechert, W.D. O'Donnell, S.I. oai:RePEc:eee:dyncon:v:31:y:2007:i:11:p:3478-35022012-12-25RePEc:eee:dyncon article 11 2007 31 11 3478 3502 http://www.sciencedirect.com/science/article/B6V85-4N0GDNG-2/2/f9c230fd63143fcf6dd95f1e43295c13 Fujiwara, Hajime Kijima, Masaaki oai:RePEc:eee:dyncon:v:31:y:2007:i:7:p:2486-25182012-12-25RePEc:eee:dyncon article 7 2007 31 7 2486 2518 http://www.sciencedirect.com/science/article/B6V85-4M51F65-1/2/ 6148e0babf9edeb2f39ff185d160464f Chen, Been-Lon Lee, Shun-Fa oai:RePEc:eee:dyncon:v:30:y:2006:i:9-10:p:1687-17062012-12-25RePEc:eee:dyncon article 9-10 2006 30 1687 1706 http://www.sciencedirect.com/ science/article/B6V85-4JX3728-2/2/8a16d6f86b1ca6b92b74e75e9c5fade1 Camacho, Maximo Perez-Quiros, Gabriel Saiz, Lorena oai:RePEc:eee:dyncon:v:25:y:2001:i:5:p:721-7462012-12-25RePEc:eee:dyncon article 5 2001 25 5 721 746 http://www.sciencedirect.com/science/article/B6V85-41NTC6G-4/2/d34e38d4c3817c67e2c85ff3ada926b5 Den Haan, Wouter J. oai:RePEc:eee:dyncon:v:28:y:2003:i:3:p:467-4922012-12-25RePEc:eee:dyncon article 3 2003 28 12 467 492 http://www.sciencedirect.com/science/article/B6V85-47T2DFC-1/2/53b87962e2e8afe42c3881d4457ea609 Corrado, Luisa Holly, Sean oai:RePEc:eee:dyncon:v:36:y:2012:i:9:p:1340-13482012-12-25RePEc:eee:dyncon article We consider an overlapping generations model with public education and social security financed by labor income taxation, in which the overall size of these policies is determined in a repeated majority voting game. We investigate the interaction between these policies and economic development in stationary Markov perfect equilibria. 
In the politico-economic equilibrium, the labor income tax rate is represented as a linear increasing function of the ratio of the decisive voter's human capital and the average human capital level. A high level of initial income inequality reduces the size of public policies and retards economic growth. Public education; Social security; Markov perfect equilibrium; Income inequality; Economic development; 9 2012 36 1340 1348 H55 O16 http://www.sciencedirect.com/science/article/pii/S016518891200067X Naito, Katsuyuki oai:RePEc:eee:dyncon:v:21:y:1997:i:2-3:p:603-6302012-12-25RePEc:eee:dyncon article 2-3 1997 21 603 630 http://www.sciencedirect.com/science/article/B6V85-3SWV8HG-H/2/ff5c07c4db6f08afe9ca15152ea0eea9 Jorgensen, Steffen Kort, Peter M. oai:RePEc:eee:dyncon:v:29:y:2005:i:10:p:1673-17002012-12-25RePEc:eee:dyncon article 10 2005 29 10 1673 1700 http://www.sciencedirect.com/science/article/ B6V85-4F3FF2S-1/2/d1be7f58605e46ee6dad4c5059d1d093 Momota, Akira Tabata, Ken Futagami, Koichi oai:RePEc:eee:dyncon:v:20:y:1996:i:1-3:p:471-4772012-12-25RePEc:eee:dyncon article 1-3 1996 20 471 477 http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-12/2/d3d044b003a6f4a757bac880c36cc132 Presman, E. Sethi, S. oai:RePEc:eee:dyncon:v:17:y:1993:i:1-2:p:263-2872012-12-25RePEc:eee:dyncon article 1-2 1993 17 263 287 http://www.sciencedirect.com/science/article/B6V85-46MMW27-D/2/f47b292c4b06ba27cc1596be32681d60 Streibel, Mariane Harvey, Andrew oai:RePEc:eee:dyncon:v:31:y:2007:i:4:p:1217-12442012-12-25RePEc:eee:dyncon article 4 2007 31 4 1217 1244 http://www.sciencedirect.com/science/article/B6V85-4KB14JM-1/2/ 9b0890f3dbd05235e09c78f70459e461 Ebrahim, M. Shahid Mathur, Ike oai:RePEc:eee:dyncon:v:20:y:1996:i:9-10:p:1541-15562012-12-25RePEc:eee:dyncon article 9-10 1996 20 1541 1556 http:// www.sciencedirect.com/science/article/B6V85-3VV430P-3/2/ad2b6210ce65b004049f5413996a2771 Gilli, Manfred Garbely, Myriam oai:RePEc:eee:dyncon:v:24:y:2000:i:4:p:623-6502012-12-25RePEc:eee:dyncon article 4 2000 24 4 623 650 http://www.sciencedirect.com/science/article/B6V85-3Y6HDF4-6/2/8c62643d50a7e2cf20e5cbaba6e22a89 Silvestre, Joaquim oai:RePEc:eee:dyncon:v:33:y:2009:i:4:p:903-9212012-12-25RePEc:eee:dyncon article This paper examines investment timing by the manager in a decentralized firm in the presence of asymmetric information. In particular, we incorporate an audit technology in the agency model developed by Grenadier and Wang [2005. Investment timing, agency, and information. Journal of Financial Economics 75, 493-533]. The implied investment trigger in the agency problem with auditing is larger than in the full-information problem, and smaller than in the agency problem without auditing. Nevertheless, the audit technology does not necessarily reduce inefficiency in the total social welfare. Real options Asymmetric information Agency conflicts Audit 4 2009 33 4 903 921 http://www.sciencedirect.com/ science/article/B6V85-4TY9MJW-1/2/b0000d440b6ef9f49628240fd12f3106 Shibata, Takashi oai:RePEc:eee:dyncon:v:26:y:2002:i:7-8:p:1127-11572012-12-25RePEc:eee:dyncon article 7-8 2002 26 7 1127 1157 http:/ /www.sciencedirect.com/science/article/B6V85-459HNNF-5/2/bab1ee2b289e8c6ef5cf2e238d79a973 Telmer, Chris I. Zin, Stanley E. oai:RePEc:eee:dyncon:v:18:y:1994:i:1:p:185-2032012-12-25RePEc:eee:dyncon article 1 1994 18 1 185 203 http://www.sciencedirect.com/science/article/B6V85-46MMW30-W/2/79a10fb4599d05bb17964906218161f2 Bartholomew-Biggs, M. C. Hernandez, M. deF. G. 
oai:RePEc:eee:dyncon:v:33:y:2009:i:2:p:491-5062012-12-25RePEc:eee:dyncon article In contrast to existing literature we implement experimental asset markets with fluctuating fundamental values following a stochastic process. Therefore we can measure traders' behavior in both bullish and bearish markets. We observe underreaction of price changes to changes in fundamental value which induces overvaluation in bearish and undervaluation in bullish markets. We also find an asymmetry between markets with bullish fundamental values and those with bearish ones as the former markets show a higher degree of informational efficiency. The reason for the observed underreaction lies in the relatively large volatility of the underlying fundamental value process. Asset markets Bubbles Experiment Underreaction 2 2009 33 2 491 506 http://www.sciencedirect.com/science/article/B6V85-4TB777J-1/2/944fbe05f272d24e161dfaa75a16c75c Kirchler, Michael oai:RePEc:eee:dyncon:v:27:y:2003:i:4:p:667-7002012-12-25RePEc:eee:dyncon article 4 2003 27 2 667 700 http://www.sciencedirect.com/science/article/B6V85-44CH1DN-1/2/6a0ac2e592dbb3009b10c5a07de8878c Damgaard, Anders oai:RePEc:eee:dyncon:v:25:y:2001:i:11:p:1841-18652012-12-25RePEc:eee:dyncon article 11 2001 25 11 1841 1865 http://www.sciencedirect.com/science/article/B6V85-43DKSHS-7/2/ 67ffc5e6c73566dc58edd5171ec4014c Lioui, Abraham Poncet, Patrice oai:RePEc:eee:dyncon:v:32:y:2008:i:5:p:1489-15162012-12-25RePEc:eee:dyncon article Carlstrom and Fuerst [2005. Investment and interest rate policy: a discrete time analysis. Journal of Economic Theory 123, 4-20.] show that in the presence of investment activity and price stickiness, indeterminacy of equilibrium is induced by forward-looking monetary policy that sets the interest rate in response only to future inflation. In a stochastic version of their model, we find that this indeterminacy problem is due to a cost channel of monetary policy, whereby inflation expectations become self-fulfilling, and the problem can be overcome once the forward-looking policy responds also to current output or contains sufficiently strong interest rate smoothing, since this prevents the self-fulfilling expectations. We also show that when E-stability is adopted as the selection criterion from multiple equilibria, even the forward-looking policy generates a locally unique non-explosive E-stable fundamental rational expectations equilibrium as long as the policy response to expected future inflation is sufficiently strong. 5 2008 32 5 1489 1516 http://www.sciencedirect.com/science/article/B6V85-4P29K6W-1/1/01258e3404bf3a24433e8483c43412f2 Kurozumi, Takushi Van Zandweghe, Willem oai:RePEc:eee:dyncon:v:9:y:1985:i:4:p:363-4042012-12-25RePEc:eee:dyncon article 4 1985 9 12 363 404 http://www.sciencedirect.com/science/article/B6V85-4C40JH5-1J/2/0b908b8b26a8f83917f60dc7d678b4d7 Christiano, Lawrence J. oai:RePEc:eee:dyncon:v:13:y:1989:i:2:p:151-1692012-12-25RePEc:eee:dyncon article 2 1989 13 4 151 169 http://www.sciencedirect.com/science/article/B6V85-46MMW4J-12/2/ 09a6dc93c4a5d537a85b66251fb08674 Kang, Heejoon oai:RePEc:eee:dyncon:v:28:y:2004:i:4:p:817-8392012-12-25RePEc:eee:dyncon article 4 2004 28 1 817 839 http://www.sciencedirect.com/science/article/ B6V85-48BC1PP-2/2/e73b16c30584e7ec1019f0fc5460dd96 Krebs, Tom Wilson, Bonnie oai:RePEc:eee:dyncon:v:13:y:1989:i:4:p:597-6122012-12-25RePEc:eee:dyncon article 4 1989 13 10 597 612 http:// www.sciencedirect.com/science/article/B6V85-45GNWG5-5/2/e08d086d29103b69a49ce7e7082b3c04 Diebold, Francis X. 
oai:RePEc:eee:dyncon:v:28:y:2003:i:1:p:117-1402012-12-25RePEc:eee:dyncon article 1 2003 28 10 117 140 http://www.sciencedirect.com/science/article/B6V85-46YXMWT-1/2/d25344c5a33077b57e4d0b86dc3a44ee Negroni, Giorgio oai:RePEc:eee:dyncon:v:13:y:1989:i:2:p:171-1852012-12-25RePEc:eee:dyncon article 2 1989 13 4 171 185 http://www.sciencedirect.com/science/article/B6V85-46MMW4J-13/2/e2111e60d886f0b895c69ceb1e1860b4 Speaker, Paul J. Mitchell, Douglas W. Gelles, Gregory M. oai:RePEc:eee:dyncon:v:30:y:2006:i:12:p:2793-28222012-12-25RePEc:eee:dyncon article 12 2006 30 12 2793 2822 http://www.sciencedirect.com/science/article/B6V85-4HTCTB7-2/2/ 0c7cf55598acabf380954abc91285de0 Attari, Mukarram Mello, Antonio S. oai:RePEc:eee:dyncon:v:24:y:2000:i:5-7:p:1121-11442012-12-25RePEc:eee:dyncon article 5-7 2000 24 6 1121 1144 http:// www.sciencedirect.com/science/article/B6V85-3YNY75V-P/2/64e26ec8671eafdf810df6ab2bce2efc Semmler, Willi Sieveking, Malte oai:RePEc:eee:dyncon:v:11:y:1987:i:1:p:29-642012-12-25RePEc:eee:dyncon article 1 1987 11 3 29 64 http://www.sciencedirect.com/science/article/B6V85-4C7WMJR-2/2/cb7686b31678909d2716a5a33af5c692 Chuma, Hiroyuki oai:RePEc:eee:dyncon:v:14:y:1990:i:3-4:p:763-7952012-12-25RePEc:eee:dyncon article 3-4 1990 14 10 763 795 http://www.sciencedirect.com/science/article/B6V85-45MFRX7-1F/2/ 614e2ed7f504022162bece930e1a59ac Todd, Richard M. oai:RePEc:eee:dyncon:v:28:y:2003:i:3:p:617-6412012-12-25RePEc:eee:dyncon article 3 2003 28 12 617 641 http://www.sciencedirect.com/science/article/ B6V85-488Y7DB-1/2/47f4dd3c89d727317fccdd6cdb12b775 Zhang, Jie Zhang, Junsen oai:RePEc:eee:dyncon:v:5:y:1983:i:1:p:187-1992012-12-25RePEc:eee:dyncon article 1 1983 5 2 187 199 http:// www.sciencedirect.com/science/article/B6V85-4C47HD0-C/2/d59b0354d61ffc63c070e78f47881a38 Friedman, James W. oai:RePEc:eee:dyncon:v:31:y:2007:i:5:p:1697-17272012-12-25RePEc:eee:dyncon article 5 2007 31 5 1697 1727 http://www.sciencedirect.com/science/article/B6V85-4KNM9WN-1/2/e2314773ded2afbbcf1b134f4bb183ab Li, Tao oai:RePEc:eee:dyncon:v:27:y:2003:i:3:p:357-3792012-12-25RePEc:eee:dyncon article 3 2003 27 1 357 379 http://www.sciencedirect.com/science/article/B6V85-46YVCK2-2/2/ee7340998a3b2d1148ff2a916dd9b3f2 Stemp, Peter J. Herbert, Ric D. oai:RePEc:eee:dyncon:v:5:y:1983:i:1:p:311-3212012-12-25RePEc:eee:dyncon article 1 1983 5 2 311 321 http://www.sciencedirect.com/science/article/B6V85-4C47HD0-K/2/cb73f6c6e392d46cdcba4a47329fb3c9 Burmeister, Edwin Flood, Robert P. Garber, Peter M. oai:RePEc:eee:dyncon:v:1:y:1979:i:4:p:305-3202012-12-25RePEc:eee:dyncon article 4 1979 1 11 305 320 http://www.sciencedirect.com/science/article/ B6V85-4DJ3F4H-5/2/042cd66bb46828e76254001504744853 Gertler, Mark oai:RePEc:eee:dyncon:v:13:y:1989:i:2:p:301-3112012-12-25RePEc:eee:dyncon article 2 1989 13 4 301 311 http://www.sciencedirect.com/ science/article/B6V85-46MMW4J-1C/2/9d583527a70395ec7f19a3da8e148fc0 Herceg, Dragoslav Cvetkovic, Ljiljana oai:RePEc:eee:dyncon:v:3:y:1981:i:1:p:1-272012-12-25RePEc:eee:dyncon article 1 1981 3 11 1 27 http://www.sciencedirect.com/science/article/B6V85-4D9X39C-1/2/403b6fe0f28b723cea807915f6399a49 Brada, Josef C. King, Arthur E. Schlagenhauf, Don E. oai:RePEc:eee:dyncon:v:20:y:1996:i:4:p:681-6892012-12-25RePEc:eee:dyncon article 4 1996 20 4 681 689 http://www.sciencedirect.com/science/article/B6V85-3VWPNNV-8/2/1fc7bc28f2c25e23a7d9da28dce9231c Salyer, Kevin D. 
oai:RePEc:eee:dyncon:v:36:y:2012:i:9:p:1349-13632012-12-25RePEc:eee:dyncon article Chartist and fundamentalist models have proven to be capable of replicating stylized facts on speculative markets. In general, this is achieved by specifying nonlinear interactions of otherwise linear asset price expectations of the respective trader groups. This paper investigates whether or not regressive and extrapolative expectations themselves exhibit significant nonlinear dynamics. The empirical results are based on a new data set from the European Central Bank Survey of Professional Forecasters on oil price expectations. In particular, we find that forecasters form destabilizing expectations in the neighborhood of the fundamental value, whereas expectations tend to be stabilizing in the presence of substantial oil price misalignment. Agent based models; Nonlinear expectations; Survey data; 9 2012 36 1349 1363 F31 D84 C33 http://www.sciencedirect.com/science/ article/pii/S0165188912000644 Reitz, Stefan Rülke, Jan-Christoph Stadtmann, Georg oai:RePEc:eee:dyncon:v:26:y:2002:i:9-10:p:1585-16112012-12-25RePEc:eee:dyncon article 9-10 2002 26 8 1585 1611 http:/ /www.sciencedirect.com/science/article/B6V85-44TVBP5-2/2/3c779f44f66dbc534a85644540f52c43 Kozicki, Sharon Tinsley, P. A. oai:RePEc:eee:dyncon:v:21:y:1997:i:10:p:1615-16252012-12-25RePEc:eee:dyncon article 10 1997 21 8 1615 1625 http://www.sciencedirect.com/science/article/B6V85-3SX0PR1-3/2/b4b11258a9eaad5070d22c7aa1ba947f Sulganik, Eyal Zilcha, Itzhak oai:RePEc:eee:dyncon:v:21:y:1997:i:10:p:1669-16972012-12-25RePEc:eee:dyncon article 10 1997 21 8 1669 1697 http://www.sciencedirect.com/science/article/B6V85-3SX0PR1-6/2/ c74cabfd90cdf8d2dd531cd43932188a Taub, B. oai:RePEc:eee:dyncon:v:36:y:2012:i:9:p:1322-13392012-12-25RePEc:eee:dyncon article We present a model of structural change which, distinctively, sub-divides services (S) into ‘Progressive Services’ (PS) and ‘Asymptotically Stagnant Services’ (AS), to better reflect the advent of the New Economy. A manufacturing (M) sector is also included, and non-homothetic preferences assumed. An expanding-product-variety endogenous-growth framework is adopted, and partially overlapping input sets across the three (sub-)sectors assumed. The model endogenously generates different stages of growth: services which in due course become classified as progressive first overtake AS, and then M, in innovation-driven productivity growth, consistent with post-World-War-II US experience. The socially optimal growth pattern differs qualitatively from the private, and optimal, time-varying R&D subsidies are identified. Progressive services; Information technology; Structural change; Endogenous growth; Non-homothetic preferences; 9 2012 36 1322 1339 O41 H25 http://www.sciencedirect.com/science/article/pii/S0165188912000632 Kapur, Basant K. oai:RePEc:eee:dyncon:v:20:y:1996:i:1-3:p:315-3312012-12-25RePEc:eee:dyncon article 1-3 1996 20 315 331 http://www.sciencedirect.com/science/article/B6V85-3VWPNPX-T/2/ 289e0d2553a4861923b14e4b3316ea63 Pfann, Gerard A. oai:RePEc:eee:dyncon:v:21:y:1997:i:7:p:1259-12622012-12-25RePEc:eee:dyncon article 7 1997 21 6 1259 1262 http://www.sciencedirect.com/science/article /B6V85-3SWYBJD-12/2/990c8a69d10a2573b8e572d391f4f5b8 Velupillai, K. 
Vela oai:RePEc:eee:dyncon:v:18:y:1994:i:2:p:353-3802012-12-25RePEc:eee:dyncon article 2 1994 18 3 353 380 http:// www.sciencedirect.com/science/article/B6V85-45CX0JG-4/2/709a0370ba92bbcfbf68a3882a18c6f1 Feichtinger, Gustav Novak, Andreas Wirl, Franz oai:RePEc:eee:dyncon:v:20:y:1996:i:4:p:559-5822012-12-25RePEc:eee:dyncon article 4 1996 20 4 559 582 http://www.sciencedirect.com/science/article/B6V85-3VWPNNV-2/2/be3e7ae9cad7788d736a0edb835fa46c Golan, Amos Judge, George Karp, Larry oai:RePEc:eee:dyncon:v:16:y:1992:i:2:p:267-2872012-12-25RePEc:eee:dyncon article 2 1992 16 4 267 287 http://www.sciencedirect.com/science/article/B6V85-45NHVY2-5 /2/84b220ad41d9e7acf78a02d35b1621a4 Fischer, Ronald D. Mirman, Leonard J. oai:RePEc:eee:dyncon:v:23:y:1999:i:4:p:539-5632012-12-25RePEc:eee:dyncon article 4 1999 23 2 539 563 http:// www.sciencedirect.com/science/article/B6V85-3VF9C8K-3/2/becdcae86ab55cdb950dc984e0bafa78 Cavalcanti Ferreira, Pedro oai:RePEc:eee:dyncon:v:32:y:2008:i:11:p:3718-37422012-12-25RePEc:eee:dyncon article This paper examines whether reputation concerns can induce the central bank to implement the time-inconsistent optimal monetary policy in the standard New Keynesian model. Interestingly, the forward-looking nature of this model enables us to account for the coordination of the private agents on the punishment length of their trigger strategy. Our results suggest that both the inflation bias and the stabilization bias can be overcome by a reputation-concerned central bank for the calibrations used in the literature. These results enable us to endogenize Woodford's timeless perspective and tend to weaken the case for recent monetary policy delegation proposals. Inflation bias Monetary policy Reputation Stabilization bias Timeless perspective 11 2008 32 11 3718 3742 http://www.sciencedirect.com/science/article/B6V85-4S7JFTB-2/2/836e165db9d315cc0d5f7af3a55dc471 Loisel, Olivier oai:RePEc:eee:dyncon:v:31:y:2007:i:11:p:3614-36432012-12-25RePEc:eee:dyncon article 11 2007 31 11 3614 3643 http://www.sciencedirect.com/science/article/B6V85-4N2DRDM-1/2/081ae508e594494a3b1060c31ffdae34 Esteban-Bravo, Mercedes Vidal-Sanz, Jose M. oai:RePEc:eee:dyncon:v:27:y:2003:i:11-12:p:1941-19592012-12-25RePEc:eee:dyncon article 11-12 2003 27 9 1941 1959 http://www.sciencedirect.com/science/article/B6V85-47189XN-1/2/ e36fb68c194df1b62a3997dce7b24a4e Gomis-Porqueras, Pere Haro, Alex oai:RePEc:eee:dyncon:v:21:y:1997:i:8-9:p:1405-14252012-12-25RePEc:eee:dyncon article 8-9 1997 21 6 1405 1425 http:// www.sciencedirect.com/science/article/B6V85-3SWYBJD-6/2/69db4df295f4f28dc78914bb7439ff45 Maranas, C. D. Androulakis, I. P. Floudas, C. A. Berger, A. J. Mulvey, J. M. oai:RePEc:eee:dyncon:v:31:y:2007:i:6:p:1844-18742012-12-25RePEc:eee:dyncon article 6 2007 31 6 1844 1874 http://www.sciencedirect.com/science/article/B6V85-4N4S0D4-1/2/ 4a2cd35f3ccb7b559d433b8168cdaa92 Kirchler, Michael Huber, Jurgen oai:RePEc:eee:dyncon:v:33:y:2009:i:1:p:15-362012-12-25RePEc:eee:dyncon article Affine jump-diffusion (AJD) processes constitute a large and widely used class of continuous-time asset pricing models that balance tractability and flexibility in matching market data. The prices of e.g., bonds, options, and other assets in AJD models are given by extended pricing transforms that have an exponential-affine form; these transforms have been characterized in great generality by Duffie et al. [2000. Transform analysis and asset pricing for affine jump-diffusions. Econometrica 68, 1343-1376]. 
Calculating model prices requires inversion of these transforms, and this has limited the application of AJD models to the comparatively small subclass for which the transforms are available in closed form. This article seeks to widen the scope of AJD models amenable to practical application through approximate transform inversion techniques. More specifically, we develop the use of saddlepoint approximations for AJD models. These approximations facilitate the calculation of prices in AJD models whose transforms are not available explicitly. We derive and test several alternative saddlepoint approximations and find that they produce accurate prices over a wide range of parameters. Transform inversion Characteristic function Option prices Numerical approximations 1 2009 33 1 15 36 http://www.sciencedirect.com/science/article/B6V85-4SH6B7X-3/2/9ec6133ec14ee10687ef64be1054c76d Glasserman, Paul Kim, Kyoung-Kuk oai:RePEc:eee:dyncon:v:29:y:2005:i:3:p:509-5272012-12-25RePEc:eee:dyncon article 3 2005 29 3 509 527 http://www.sciencedirect.com/science/article/B6V85-4CK1Y7T-1/2/ f38e0edabce38ba6c775e7b01cf3019a Kaboski, Joseph P. oai:RePEc:eee:dyncon:v:22:y:1998:i:7:p:1117-11372012-12-25RePEc:eee:dyncon article 7 1998 22 5 1117 1137 http://www.sciencedirect.com/science/ article/B6V85-3V5MB4X-7/2/60fd37cf72e9043027042f558312cc92 Michener, Ronald Ravikumar, B. oai:RePEc:eee:dyncon:v:24:y:2000:i:11-12:p:1499-15252012-12-25RePEc:eee:dyncon article 11-12 2000 24 10 1499 1525 http://www.sciencedirect.com/science/article/B6V85-412RWNR-2/2/ec768501a417e7388f606fc66491cde5 Babbs, Simon oai:RePEc:eee:dyncon:v:30:y:2006:i:1:p:1-252012-12-25RePEc:eee:dyncon article 1 2006 30 1 1 25 http://www.sciencedirect.com/science/article/B6V85-4F37M5H-1/2/534f6c6a93b937297b01ad2dfcd26d31 Zakamouline, Valeri I. oai:RePEc:eee:dyncon:v:19:y:1995:i:4:p:869-8722012-12-25RePEc:eee:dyncon article 4 1995 19 5 869 872 http://www.sciencedirect.com/science/article/B6V85-48DY1GD-6/2/758ae4f0ce8554303afe4c8d4aab005c Mongardini, Joannes oai:RePEc:eee:dyncon:v:28:y:2003:i:2:p:273-2852012-12-25RePEc:eee:dyncon article 2 2003 28 11 273 285 http://www.sciencedirect.com/science/article/B6V85-479KD7K-1/2/ 4a621b33c3b96e058b8c326feb39da80 Lubik, Thomas A. Schorfheide, Frank oai:RePEc:eee:dyncon:v:27:y:2002:i:2:p:271-2812012-12-25RePEc:eee:dyncon article 2 2002 27 12 271 281 http://www.sciencedirect.com /science/article/B6V85-46SVXBT-5/2/2e9d4c14722b24db6729f9e3f9094c57 Nishimura, Kazuo Shimomura, Koji oai:RePEc:eee:dyncon:v:12:y:1988:i:1:p:127-1332012-12-25RePEc:eee:dyncon article 1 1988 12 3 127 133 http://www.sciencedirect.com/science/article/B6V85-45JK55T-1C/2/65a7eb831d48c27c7ba2e9c9fb71cfd0 Baum, Christopher F. Doyle, Joanne M. oai:RePEc:eee:dyncon:v:15:y:1991:i:2:p:339-3532012-12-25RePEc:eee:dyncon article 2 1991 15 4 339 353 http://www.sciencedirect.com/science/article/B6V85-45FCJHP-6/2/ff202fdda62d75644ea8eca421d29150 Fukuda, Shin-ichi oai:RePEc:eee:dyncon:v:30:y:2006:i:2:p:243-2782012-12-25RePEc:eee:dyncon article 2 2006 30 2 243 278 http://www.sciencedirect.com/science/article/B6V85-4FK3PBP-1/2/ a11c70adfeaabc371ae7ec4cf75374ef Turnovsky, Stephen J. Smith, William T. oai:RePEc:eee:dyncon:v:27:y:2003:i:6:p:937-9692012-12-25RePEc:eee:dyncon article 6 2003 27 4 937 969 http:// www.sciencedirect.com/science/article/B6V85-45VG7NP-1/2/414268a6173d22244b955cb8ee863de5 Tokat, Yesim Rachev, Svetlozar T. Schwartz, Eduardo S. 
oai:RePEc:eee:dyncon:v:12:y:1988:i:2-3:p:385-4232012-12-25RePEc:eee:dyncon article 2-3 1988 12 385 423 http://www.sciencedirect.com/science/article/B6V85-45MFRW4-M/2/8448f4d350843e858d7b2eb9b0e9fa36 Hamilton, James D. oai:RePEc:eee:dyncon:v:12:y:1988:i:1:p:7-122012-12-25RePEc:eee:dyncon article 1 1988 12 3 7 12 http://www.sciencedirect.com/science/article/B6V85-45JK55T-T/2/ 1c45c52cf42beea844e24da5c51edec6 Murphy, Robert G. oai:RePEc:eee:dyncon:v:21:y:1997:i:4-5:p:853-8722012-12-25RePEc:eee:dyncon article 4-5 1997 21 5 853 872 http://www.sciencedirect.com/science/ article/B6V85-3SWY0XD-8/2/759d4706197a2e01987215d4e00976ec Merz, Monika oai:RePEc:eee:dyncon:v:17:y:1993:i:3:p:401-4212012-12-25RePEc:eee:dyncon article 3 1993 17 5 401 421 http:// www.sciencedirect.com/science/article/B6V85-45JK5BH-32/2/25492d6b442c518b2470c9650bf45281 Vaccaro, Richard J. Vukina, Tomislav oai:RePEc:eee:dyncon:v:23:y:1999:i:9-10:p:1459-14852012-12-25RePEc:eee:dyncon article 9-10 1999 23 9 1459 1485 http://www.sciencedirect.com/science/article/B6V85-3Y9RKX5-9/2/ 812f3ccfc13a44951271c0c2ddbbb519 Laxton, Douglas Rose, David Tambakis, Demosthenes oai:RePEc:eee:dyncon:v:32:y:2008:i:1:p:279-3022012-12-25RePEc:eee:dyncon article 1 2008 32 1 279 302 http:// www.sciencedirect.com/science/article/B6V85-4PKX5M3-2/2/328a7479f39dbc7354156555d2951be7 Rosenow, Bernd oai:RePEc:eee:dyncon:v:26:y:2002:i:2:p:247-2702012-12-25RePEc:eee:dyncon article 2 2002 26 2 247 270 http://www.sciencedirect.com/science/article/B6V85-43Y9W8B-5/2/dd7bb7da96968e22edbf20ce4fe5a030 Gong, Liutang Zou, Heng-fu oai:RePEc:eee:dyncon:v:11:y:1987:i:2:p:229-2342012-12-25RePEc:eee:dyncon article 2 1987 11 6 229 234 http://www.sciencedirect.com/science/article/B6V85-45JK54D-F/2/3d17c52c696c566c16f5ea657a7ae9c6 Deissenberg, Christophe oai:RePEc:eee:dyncon:v:18:y:1994:i:1:p:161-1842012-12-25RePEc:eee:dyncon article 1 1994 18 1 161 184 http://www.sciencedirect.com/science/article/B6V85-46MMW30-V/2/ 3ebe181e61e06d1e68076d0cc4632df7 Nagurney, Anna oai:RePEc:eee:dyncon:v:27:y:2003:i:11-12:p:2035-20572012-12-25RePEc:eee:dyncon article 11-12 2003 27 9 2035 2057 http://www.sciencedirect.com/science/ article/B6V85-47287N6-1/2/82028d7ffd7ce16dc6734919b01cf30d Amman, Hans M. Kendrick, David A. oai:RePEc:eee:dyncon:v:21:y:1997:i:8-9:p:1353-13762012-12-25RePEc:eee:dyncon article 8-9 1997 21 6 1353 1376 http://www.sciencedirect.com/science/article/B6V85-3SWYBJD-4/2/d20c450004500d17a476dcf41067106c Clewlow, Les Hodges, Stewart oai:RePEc:eee:dyncon:v:27:y:2003:i:11-12:p:2207-22182012-12-25RePEc:eee:dyncon article 11-12 2003 27 9 2207 2218 http://www.sciencedirect.com/science/article/B6V85-470M5XH-1/2/ 2eb0cc6878abee43bb67d32ef3d0fd00 Kutschinski, Erich Uthmann, Thomas Polani, Daniel oai:RePEc:eee:dyncon:v:35:y:2011:i:4:p:479-4902011-03-25RePEc:eee:dyncon article We show that the imposition of a Markovian tax on emissions, that is, a tax rate which depends on the pollution stock, can induce stable cartelization in an oligopolistic polluting industry. This does not hold for a uniform tax. Thus, accounting for the feedback effect that exists within a dynamic framework, where pollution is allowed to accumulate into a stock over time, changes the result obtained within a static framework. Moreover, the cartel formation can diminish the welfare gain from environmental regulation such that welfare under environmental regulation and collusion of firms lies below that under a laissez-faire policy. 
Pollution tax Oligopoly Cartel formation Coalition formation Differential game 4 2011 35 4 479 490 http://www.sciencedirect.com/science/article/B6V85-51P9T45-1/2/ 82d0a909075f6f6c2b2ec51a4e991d99 Benchekroun, Hassan Ray Chaudhuri, Amrita oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2023-20372011-03-25RePEc:eee:dyncon article In this paper, we use the generalized Taylor economy (GTE) framework to examine the optimal choice of inflation index. In this otherwise standard dynamic stochastic general equilibrium (DSGE) model, there can be many sectors, each with a different contract length. In the GTE framework with an empirically relevant contract structure, a simple rule under which the interest rate responds to economy-wide inflation gives a welfare outcome nearly identical to the optimal policy. This finding suggests that it may not be necessary for a well-designed monetary policy to respond to sector-specific inflations. Inflation targeting Optimal monetary policy 10 2010 34 10 2023 2037 http://www.sciencedirect.com/science/article/B6V85-4YWYYNH-1/2/7bf32a56721aaa52b271bdbb882f0012 Kara, Engin oai:RePEc:eee:dyncon:v:35:y:2011:i:4:p:604-6152011-03-25RePEc:eee:dyncon article Several approaches to finding the second-order approximation to a dynamic model have been proposed recently. This paper differs from the existing literature in that it makes use of the Magnus and Neudecker (1999) definition of the Hessian matrix. The key result is a linear system of equations that characterizes the second-order coefficients. No use is made of multi-dimensional arrays or tensors, a practical implication of which is that it is much easier to transcribe the mathematical representation of the solution into usable computer code. Matlab code is available from http://paulklein.se/newsite/codes/codes.php; Fortran 90 code is available from http://alcor.concordia.ca/~pgomme/. Solving dynamic models Second-order approximation 4 2011 35 4 604 615 http://www.sciencedirect.com/science/article/B6V85-51BNWP6-1/2/ffe57b938b4a2bd30dfc926de025ce7c Gomme, Paul Klein, Paul oai:RePEc:eee:dyncon:v:34:y:2010:i:12:p:2578-26002011-03-25RePEc:eee:dyncon article This paper analyses what determines an individual investor's risk-sharing demand for options and, aggregating across investors, what the equilibrium demand for options. We find that agents trade options to achieve their desired skewness; specifically, we find that portfolio holdings boil down to a three-fund separation theorem that includes a so-called skewness portfolio that agents like to attain. Our analysis indicates also, however, that the common risk-sharing setup used for option demand and pricing is incompatible with a stylized fact about open interest across strikes. Option demand Open interest Co-skewness Skewness preference 12 2010 34 12 2578 2600 http://www.sciencedirect.com/science/ article/B6V85-50PCM69-2/2/7dd224801953b179ccece0d6a03a6b2f Judd, Kenneth L. Leisen, Dietmar P.J. oai:RePEc:eee:dyncon:v:34:y:2010:i:11:p:2232-22442011-03-25RePEc:eee:dyncon article An estimation method is developed for extracting the latent stochastic volatility from VIX, a volatility index for the S&P 500 index return produced by the Chicago Board Options Exchange (CBOE) using the so-called model-free volatility construction. Our model specification encompasses all mean-reverting stochastic volatility option pricing models with a constant-elasticity of variance and those allowing for price jumps under stochastic volatility. 
Our approach is made possible by linking the latent volatility to the VIX index via a new theoretical relationship under the risk-neutral measure. Because option prices are not directly used in estimation, we can avoid the computational burden associated with option valuation for stochastic volatility/jump option pricing models. Our empirical findings are: (1) incorporating a jump risk factor is critically important; (2) the jump and volatility risks are priced; (3) the popular square-root stochastic volatility process is a poor model specification irrespective of allowing for price jumps or not. Our simulation study shows that statistical inference is reliable and not materially affected by the approximation used in the VIX index construction. Model-free volatility Stochastic volatility Jump Options VIX Constant elasticity of variance 11 2010 34 11 2232 2244 http://www.sciencedirect.com/science/article/B6V85-506J0FH-1/2/ 6a9251764fe7abb6deda18b0b4dea764 Duan, Jin-Chuan Yeh, Chung-Ying oai:RePEc:eee:dyncon:v:34:y:2010:i:11:p:2245-22582011-03-25RePEc:eee:dyncon article This paper extends the one-factor Gaussian copula model, the standard market model for valuing CDOs, based on the multivariate Wang transform. Unlike the existing models, our model calibrates the parameter associated with a risk adjustment for default threshold, not correlation parameter, which always exists and is unique for any market price of CDO tranche. A Student t-copula model is also considered within the same framework to describe a fat-tail distribution observed in the actual market. Through numerical experiments, it is shown that our model provides a better fit to the market data compared with the existing models. One-factor Gaussian copula model Merton's structural model Multivariate Wang transform Student t copula 11 2010 34 11 2245 2258 http://www.sciencedirect.com/science/article/B6V85-506H0K8-1/2/ e64ac18dcbbcc8cf2243abaebca05b80 Kijima, Masaaki Motomiya, Shin-ichi Suzuki, Yoichi oai:RePEc:eee:dyncon:v:34:y:2010:i:12:p:2420-24392011-03-25RePEc:eee:dyncon article A benchmark AK optimal growth model with maintenance expenditures and endogenous utilization of capital is considered within an explicit vintage capital framework. Scrapping is endogenous, and the model allows for a clean distinction between age and usage dependent capital depreciation and obsolescence. It is also shown that in this set-up past investment profile completely determines the size of current maintenance expenditures. Among other findings, a closed-form solution to optimal dynamics is provided taking advantage of very recent development in optimal control of infinite dimensional systems. More importantly, and in contrast to the pre-existing literature, we study investment and maintenance co-movements without any postulated ad hoc depreciation function. In particular using impulse response experiments, we find that optimal investment and maintenance do move together in the short-run in response to neutral technological shocks, which seems to be more consistent with the data. Maintenance Investment Optimal control Dynamic programming Infinite dimensional problem 12 2010 34 12 2420 2439 http://www.sciencedirect.com/science/article/B6V85-508PPN6-2/2/ 3fe0bc48a42b2667bbd21853c1b671e8 Boucekkine, R. Fabbri, G. Gozzi, F. 
oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2179-21912011-03-25RePEc:eee:dyncon article We quantitatively evaluate a business-cycle environment featuring endogenous capital utilization and nominal price rigidity that illustrates a negative relationship between labor hours and technology (TFP) shocks and a positive relationship between hours and investment (MEI) shocks. Sticky prices induce firms to suppress changes in output due to TFP shocks through changes in the utilization rate of the existing capital stock and labor demand. MEI shocks have an indirect impact on output via their link with capital utilization, and are shown to be the dominant driver of post-1979 US business cycles. Business-cycle shocks Total factor productivity Marginal efficiency of investment Nominal rigidities 10 2010 34 10 2179 2191 http://www.sciencedirect.com/science/article/B6V85-505NRX4-4/2/3d29a73b151288cd43cdec4ed4e07312 Dave, Chetan Dressler, Scott J. oai:RePEc:eee:dyncon:v:35:y:2011:i:5:p:746-7632011-03-25RePEc:eee:dyncon article Previous studies have suggested that some pollutant levels first increases due to the economic growth and then start decreasing, the pattern being called the "environmental Kuznets curve" (EKC). We examine EKC-type transitions of pollutant levels not with respect to economic growth but more generally in time. Assuming that each policy maker optimally executes the two switching options of regulation and unregulation for pollution, the switching dynamics of environmental policy can be described by an alternating renewal process. It is shown that the double Laplace transform of transition density of a pollutant level can be obtained by a novel application of renewal theory. The expected level of overall pollutants is then calculated numerically and found to exhibit either a [Lambda][hyphen (true graphic)]shaped or an N-shaped pattern in time. Our results present a simple explanation for the EKC-type transitions of pollutant levels within a real options framework. Environmental Kuznets curve Real option Alternating renewal process Double Laplace transform 5 2011 35 5 746 763 http://www.sciencedirect.com/science/article/B6V85-51XH963-4/2/cc560764f9402da0fabb377447b8f407 Kijima, Masaaki Nishide, Katsumasa Ohyama, Atsuyuki oai:RePEc:eee:dyncon:v:34:y:2010:i:11:p:2320-23402011-03-25RePEc:eee:dyncon article Galluccio and Roncoroni (2006) empirically demonstrate that cross-sectional data provide relevant information when assessing dynamic risk in fixed income markets. We put forward a theoretical framework supporting that finding based on the notion of "shape factors". We devise an econometric procedure to identify shape factors, propose a dynamic model for the yield curve, develop a corresponding arbitrage pricing theory, derive interest rate pricing formulae, and study the analytical properties exhibited by a finite factor restriction of rate dynamics that is cross-sectionally consistent with a family of exponentially weighed polynomials. We also conduct an empirical analysis of cross-sectional risk affecting US swap, Euro bond, and oil markets. Results support the conclusion whereby shape factors outperform the classical yield (resp., price) factors (i.e., level, slope, and convexity) in explaining the underlying fixed income (resp., commodity) market risk. The methodology can in principle be used for understanding the intertemporal dynamics of any cross-sectional data. 
Risk measures Factor analysis Cross-sectional analysis Interest rates 11 2010 34 11 2320 2340 http://www.sciencedirect.com/science/article/B6V85-508PPN6-1/2/12cd80e6f04b8b22e8e86328130b03b5 Roncoroni, Andrea Galluccio, Stefano Guiotto, Paolo oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2229-22292011-03-25RePEc:eee:dyncon article 10 2010 34 10 2229 2229 http://www.sciencedirect.com/science/article/ B6V85-50C5X6C-1/2/eaed7a81d66f8d4925ca57b2b19b30da Cogan, John F. Cwik, Tobias Taylor, John B. Wieland, Volker oai:RePEc:eee:dyncon:v:35:y:2011:i:2:p:229-2392011-03-25RePEc:eee:dyncon article We describe a sparse-grid collocation method to compute recursive solutions of dynamic economies with a sizable number of state variables. We show how powerful this method can be in applications by computing the non-linear recursive solution of an international real business cycle model with a substantial number of countries, complete insurance markets and frictions that impede frictionless international capital flows. In this economy, the aggregate state vector includes the distribution of world capital across different countries as well as the exogenous country-specific technology shocks. We use the algorithm to efficiently solve models with up to 10 countries (i.e., up to 20 continuous-valued state variables). Sparse grids Collocation International real business cycles 2 2011 35 2 229 239 http://www.sciencedirect.com/science/article/B6V85-514P5RX-6/2/cbce27c8dc59efddf50ccc02fd600e67 Malin, Benjamin A. Krueger, Dirk Kubler, Felix oai:RePEc:eee:dyncon:v:35:y:2011:i:5:p:730-7452011-03-25RePEc:eee:dyncon article Durable goods pose a challenge for standard sticky-price models because the near constancy of their shadow value and their apparent price flexibility lead to perverse and counterfactual economic implications, such as the tendency of the durables and nondurables sectors to move in opposite directions following a monetary policy shock. This paper introduces input-output interactions and limited input mobility into an otherwise standard sticky-price model with durable and nondurable goods. The extended model generates substantial aggregate effects and positive sectoral comovement following a monetary policy shock, even when durable goods have flexible prices. The latter result is consistent with empirical evidence on the sectoral effects of monetary policy. Durability Input-output interactions Roundabout production Sectoral comovement Monetary policy 5 2011 35 5 730 745 http:// www.sciencedirect.com/science/article/B6V85-51YYNVF-2/2/dcef3fb370d8b50265f703cd7049f9ec Bouakez, Hafedh Cardia, Emanuela Ruge-Murcia, Francisco J. oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2074-20882011-03-25RePEc:eee:dyncon article We show how the method of endogenous gridpoints can be extended to solve models with occasionally binding constraints among endogenous variables very efficiently. We present the method for a consumer problem with occasionally binding collateral constraints and non-separable utility in durable and non-durable consumption. This problem allows for a joint analysis of durable and non-durable consumption in models with uninsurable income risk which is important to understand patterns of consumption, saving and collateralized debt. We illustrate the algorithm and its efficiency by calibrating the model to US data. 
Endogenous gridpoints method Occasionally binding constraints Collateralized debt Durables 10 2010 34 10 2074 2088 http://www.sciencedirect.com/science/article/B6V85-501FPFH-1/2/99844cbee168fa77aeed7d060eb58a8e Hintermaier, Thomas Koeniger, Winfried oai:RePEc:eee:dyncon:v:35:y:2011:i:3:p:253-2562011-03-25RePEc:eee:dyncon article 3 2011 35 3 253 256 http://www.sciencedirect.com/science/article/B6V85-50S8PGC-1/2/665169c5aa0eae4eb3ff7d12e08a77c4 Dzhumashev, Ratbek Gahramanov, Emin oai:RePEc:eee:dyncon:v:35:y:2011:i:3:p:312-3292011-03-25RePEc:eee:dyncon article In this paper we model the situation where a non-renewable investment is given, for instance a resource reservoir, and show how to optimally trade-off between dividends and leverage, in order to maximize a performance indicator for shareholders, up to the bankruptcy time. We then study the way market risk (the volatility of the market price of the resource) impacts the optimal policies and the default risk of the company. The moments when the policies are rebalanced are analyzed and we give a measure of the agency costs which appear between the shareholders and the debt-holders. Dividend policy Capital structure Non-renewable investment Default risk Bankruptcy costs Agency costs 3 2011 35 3 312 329 http://www.sciencedirect.com/science/article/B6V85-51JPWMD-1/2/d90caf73b5f8782bb9af5b201f4fc952 Coculescu, Delia oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2141-21582011-03-25RePEc:eee:dyncon article Macroeconomic studies of tax policy in dynamic general equilibrium usually assume that reforms hit the economy unexpectedly and last forever. Here, we explore how previous results change when we allow policy changes to be pre-announced and of finite duration and when these facts are anticipated by households and firms. Quantitatively we demonstrate a headstart advantage from pre-announcement that is never caught up by a surprising reform. The welfare gain stemming from a 5-year announcement phase of a corporate tax cut, for example, is estimated to be around 10 percent of the total gain from the reform. We show that impulse responses of important variables like firm value, dividends, and investment differ qualitatively depending on whether the reform comes expected or not. We are also able to demonstrate a genuine welfare gain from temporary tax cuts. Impulse responses generated by our numerical method can be retraced by phase diagram analysis, which facilitates explanation and interpretation of the produced results. Tax reform Anticipation effects Investment Economic growth Welfare Corporate finance Capital taxation 10 2010 34 10 2141 2158 http://www.sciencedirect.com/science/article/B6V85-505NRX4-5/2/d495cb8eb2d07fe10b774f9d9bdcfad2 Strulik, Holger Trimborn, Timo oai:RePEc:eee:dyncon:v:34:y:2010:i:12:p:2391-24062011-03-25RePEc:eee:dyncon article Brand competition is modelled using an agent based approach in order to examine the long run dynamics of market structure and brand characteristics. A repeated game is designed where myopic firms choose strategies based on beliefs about their rivals and consumers. Consumers are heterogeneous and can observe neighbour behaviour through social networks. Although firms do not observe them, the social networks have a significant impact on the emerging market structure. Presence of networks tends to polarize market share and leads to higher volatility in brands. Yet convergence in brand characteristics usually happens whenever the market reaches a steady state. 
Scale-free networks accentuate the polarization and volatility more than small world or random networks. Unilateral innovations are less frequent under social networks. Dynamic oligopoly Evolutionary game Social network 12 2010 34 12 2391 2406 http://www.sciencedirect.com/science/article/B6V85-509W721-2/2/5250b4fae9b87a895eee9e241f5919f8 Sengupta, Abhijit Greetham, Danica Vukadinovic oai:RePEc:eee:dyncon:v:34:y:2010:i:9:p:1531-15492011-03-25RePEc:eee:dyncon article Comparisons of various methods for solving stochastic control economic models can be done with Monte Carlo methods. These methods have been applied to simple one-state, one-control quadratic-linear tracking models; however, large outliers may occur in a substantial number of the Monte Carlo runs when certain parameter sets are used in these models. Building on the work of Mizrach (1991) and (Amman and Kendrick, 1994) and (Amman and Kendrick, 1995), this paper tracks the source of these outliers to two sources: (1) the use of a zero for the penalty weights on the control variables and (2) the generation of near-zero initial estimate of the control parameter in the systems equations by the Monte Carlo routine. This result leads to an understanding of why both the unsophisticated optimal feedback (certainty equivalence) and the sophisticated dual methods do poorly in some Monte Carlo comparisons relative to the moderately sophisticated expected optimal feedback method. Active learning Dual control Optimal experimentation Stochastic optimization Time-varying parameters Numerical experiments 9 2010 34 9 1531 1549 http://www.sciencedirect.com/science/article/B6V85-50DYH0H-2/2/c36bf2647755195326d2d2934c6c7af4 Tucci, Marco P. Kendrick, David A. Amman, Hans M. oai:RePEc:eee:dyncon:v:35:y:2011:i:4:p:565-5782011-03-25RePEc:eee:dyncon article The credibility problems of monetary policy are enlarged by transmission lags whenever the welfare criterion consists of arguments with differing transmission lags. If, as usually argued, prices react to monetary policy with a longer lag than output, the discretionary bias is substantially increased under a consumer welfare maximizing policy criterion (flexible inflation targeting) in the prototype New Keynesian model. Money growth targeting can significantly reduce the discretionary bias, but is not robust to other specifications of welfare with higher valuation of output stability. Discretion and stabilization bias Monetary policy Transmission lags Inflation targeting Money targeting 4 2011 35 4 565 578 http://www.sciencedirect.com/science/article/B6V85-51TGG0W-1/2/d86f89813b29b57d2b0f1c263e778746 Kilponen, Juha Leitemo, Kai oai:RePEc:eee:dyncon:v:35:y:2011:i:1:p:52-662011-03-25RePEc:eee:dyncon article We examine the expectational stability (E-stability) of rational expectations equilibrium (REE) in a standard New Keynesian model in which private agents refer to the central bank's forecast in the process of adaptive learning. To satisfy the E-stability condition in this environment, the central bank must respond more strongly to the expected inflation rate than the extent to which the Taylor principle suggests. However, the central bank's strong reaction to the expected inflation rate raises the possibility of indeterminacy of the REE. In considering these problems, a robust policy requires responding to the current inflation rate to a certain degree. 
Adaptive learning E-stability New Keynesian model Monetary policy Taylor principle 1 2011 35 1 52 66 http:// www.sciencedirect.com/science/article/B6V85-50PCM69-3/2/274859ddc8984ae88e96523b30d47d46 Muto, Ichiro oai:RePEc:eee:dyncon:v:35:y:2011:i:1:p:148-1622011-03-25RePEc:eee:dyncon article Heterogeneous agent models (HAMs) in finance and economics are often characterised by high dimensional nonlinear stochastic differential or difference systems. Because of the complexity of the interaction between the nonlinearities and noise, a commonly used, often called indirect, approach to the study of HAMs combines theoretical analysis of the underlying deterministic skeleton with numerical analysis of the stochastic model. However, it is well known that this indirect approach may not properly characterise the nature of the stochastic model. This paper aims to tackle this issue by developing a direct and analytical approach to the analysis of a stochastic model of speculative price dynamics involving two types of agents, fundamentalists and chartists, and the market price equilibria of which can be characterised by the stationary measures of a stochastic dynamical system. Using the stochastic method of averaging and stochastic bifurcation theory, we show that the stochastic model displays behaviour consistent with that of the underlying deterministic model when the time lag in the formation of price trends used by the chartists is far away from zero. However, when this lag approaches zero, such consistency breaks down. Heterogeneous agents Speculative behaviour Stochastic bifurcations Stationary measures Chartists 1 2011 35 1 148 162 http://www.sciencedirect.com/ science/article/B6V85-511TN83-1/2/2553dcdb2868cfc1c1a2d09beeb6e599 Chiarella, Carl He, Xue-Zhong Zheng, Min oai:RePEc:eee:dyncon:v:35:y:2011:i:3:p:295-3112011-03-25RePEc:eee:dyncon article Rational expectations solutions are usually derived by assuming that all state variables relevant to forward-looking behaviour are directly observable, or that they are "...an invertible function of observables" (Mehra and Prescott, 1980). Using a framework that nests linearised DSGE models, we give a number of results useful for the analysis of linear rational expectations models with restricted information sets. We distinguish between instantaneous and asymptotic invertibility, and show that the latter may require significantly less information than the former. We also show that non-invertibility of the information set can have significant implications for the time series properties of economies. Imperfect information Invertibility Rational expectations Fundamental versus nonfundamental time series representations Kalman filter Dynamic stochastic general equilibrium 3 2011 35 3 295 311 http://www.sciencedirect.com/science/article/B6V85-51G9BVP-1/2/ 21c4da514474efaaf2b5573ef8f53caa Baxter, Brad Graham, Liam Wright, Stephen oai:RePEc:eee:dyncon:v:34:y:2010:i:9:p:1596-16092011-03-25RePEc:eee:dyncon article We introduce a statistical test for comparing the predictive accuracy of competing copula specifications in multivariate density forecasts, based on the Kullback-Leibler information criterion (KLIC). The test is valid under general conditions on the competing copulas: in particular it allows for parameter estimation uncertainty and for the copulas to be nested or non-nested. Monte Carlo simulations demonstrate that the proposed test has satisfactory size and power properties in finite samples. 
Applying the test to daily exchange rate returns of several major currencies against the US dollar we find that the Student-t copula is favored over Gaussian, Gumbel and Clayton copulas. Copula-based density forecast Empirical copula Kullback-Leibler information criterion Out-of-sample forecast evaluation Semi-parametric statistics 9 2010 34 9 1596 1609 http://www.sciencedirect.com/science/article/B6V85-50DYH0H-3/2/f4a53951a4f527bb4c804cb56053fe8d Diks, Cees Panchenko, Valentyn van Dijk, Dick oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:2126-21402011-03-25RePEc:eee:dyncon article The pattern of joining the labor force only at an advanced stage of the life-cycle was widespread among American women in the 1960s and 1970s, but not since the 1980s. To explain this change we conduct a theoretical analysis of the interrelation between women's lifetime labor supply choices and the dynamic macroeconomic environment. In our model women choose the late-entry pattern only at early stages of the growth process when wages are sufficiently low and grow sufficiently rapidly. As the economy grows, this lifetime labor profile vanishes and women join the labor force either early in life or not at all. Experience Labor Force Participation 10 2010 34 10 2126 2140 http://www.sciencedirect.com/science/article/B6V85-50CV7T1-2/2/77e635849522431bb01bfb423712cf50 Hazan, Moshe Maoz, Yishay D. oai:RePEc:eee:dyncon:v:34:y:2010:i:11:p:2259-22722011-03-25RePEc:eee:dyncon article In this paper a Markov chain Monte Carlo (MCMC) technique is developed for the Bayesian analysis of structural credit risk models with microstructure noises. The technique is based on the general Bayesian approach with posterior computations performed by Gibbs sampling. Simulations from the Markov chain, whose stationary distribution converges to the posterior distribution, enable exact finite sample inferences of model parameters. The exact inferences can easily be extended to latent state variables and any nonlinear transformation of state variables and parameters, facilitating practical credit risk applications. In addition, the comparison of alternative models can be based on the deviance information criterion (DIC) which is straightforwardly obtained from the MCMC output. The method is implemented on the basic structural credit risk model with pure microstructure noises and some more general specifications using daily equity data from US and emerging markets. We find empirical evidence that microstructure noises are positively correlated with the firm values in emerging markets. MCMC Credit risk Microstructure noise Structural models Deviance information criterion 11 2010 34 11 2259 2272 http://www.sciencedirect.com/science/article/B6V85-5033XPS-1/2/174c0427e70ec54fdfb78cab4850bce4 Huang, Shirley J. Yu, Jun oai:RePEc:eee:dyncon:v:35:y:2011:i:5:p:676-6932011-03-25RePEc:eee:dyncon article In this paper, I examine the transitional dynamics of an economy populated by individuals who split their time between acquiring a formal education, producing final goods, and innovating. The paper has two objectives: (i) uncovering the macroeconomic circumstances that favored the rise of formal education; (ii) reconciling the remarkable growth of the education sector with the constancy of other key macroeconomic variables, such as the interest rate, the consumption-output ratio, and the growth rate of per capita income (Kaldor facts).
The transitional dynamics of human capital growth models, such as Lucas (1988), would attribute the arrival of education to the diminishing marginal productivity of physical capital. Conversely, the model proposed here suggests that it is the rate of learning that catches up with the rate of return on physical capital. As technical knowledge expands, the rate of return on education increases, inducing individuals to stay longer in school. The model's transitional paths are matched with long run U.S. educational and economic data. Public knowledge Learning rate Transitional dynamics Calibration 5 2011 35 5 676 693 http://www.sciencedirect.com/science/article/B6V85-51XR3DH-2/2/e23354fae7e3673e3b728775d0c1083f Iacopetta, Maurizio oai:RePEc:eee:dyncon:v:35:y:2011:i:5:p:764-7752011-03-25RePEc:eee:dyncon article We develop an efficient algorithm to implement the adjoint method that computes sensitivities of an interest rate derivative to different underlying rates in the co-terminal swap-rate market model. The order of computation per step of the new method is shown to be proportional to the number of rates times the number of factors, which is the same as the order in the LIBOR market model. Adjoint method Delta Computational order Market model Monte Carlo simulation 5 2011 35 5 764 775 http://www.sciencedirect.com/science/article/B6V85-51YYNVF-5/2/7115d4cec5fea4c299e605d93176224c Joshi, Mark Yang, Chao oai:RePEc:eee:dyncon:v:35:y:2011:i:5:p:714-7292011-03-25RePEc:eee:dyncon article The present article provides a novel framework for analyzing option network problems, which is a general class of compound real option problems with an arbitrary combination of reversible and irreversible decisions. The present framework represents the interdependent structure of decisions by using a directed graph. In this framework, the option network problem is formulated as a singular stochastic control problem, whose optimality condition is then obtained as a dynamical system of generalized linear complementarity problems (GLCPs). This enables us to develop a systematic and efficient numerical method for evaluating the option value and the optimal decision policy. Compound real options Managerial flexibility Graph theory Singular stochastic control problems Generalized complementarity problems 5 2011 35 5 714 729 http://www.sciencedirect.com/science/article/B6V85-51XR3DH-1/2/609086f3cd814a29916aea83cfca1a67 Akamatsu, Takashi Nagae, Takeshi oai:RePEc:eee:dyncon:v:34:y:2010:i:12:p:2485-24932011-03-25RePEc:eee:dyncon article We study the speed at which technologies are adopted depending on how the market power is shared between the firms that sell technologies and the firms that buy them. Our results suggest that, because of a double marginalization problem, adoption is fastest when either sellers or buyers hold all the market power. Thus, competition between sides of the market may delay the adoption of technologies. Market structures Technology adoption 12 2010 34 12 2485 2493 http://www.sciencedirect.com/science/article/B6V85-50CVPWB-1/2/d09df5e8c1fbf78e93f9cf9807a1ff59 Rivas, Javier oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:1907-19222011-03-25RePEc:eee:dyncon article Feigenbaum et al. (2009) showed in a two-period overlapping generations model that households can improve upon the rational, competitive equilibrium while maintaining competitive factor markets if agents coordinate upon an irrational consumption/saving rule. We generalize their findings to continuous time.
The optimal consumption rule with coordination implies a U-shaped lifecycle consumption profile. Rational agents living in a standard competitive equilibrium would need a 4% increase of consumption in every period across the lifecycle to reach the level of utility that can be achieved under coordination. Most of this gain can be achieved with a linear saving rule. Consumption Saving Coordination General equilibrium Rules of thumb Pecuniary externality Overlapping generations Optimal irrational behavior 10 2010 34 10 1907 1922 http://www.sciencedirect.com/science/article/B6V85-505NRX4-1/2/9c9e2150d488078c8948a8ce87d546c7 Feigenbaum, James Caliendo, Frank N. oai:RePEc:eee:dyncon:v:34:y:2010:i:10:p:1993-20092011-03-25RePEc:eee:dyncon article This paper analyzes the roles of credit market conditions in the endogenous formation of housing-market boom-bust cycles in a business cycle model. When households are uncertain about the duration of a temporary high income growth period, expected future house prices rise during the high growth period and fall at the end of the period. But this development causes expectation-driven boom-bust cycles in current house prices only if the economy is open to international capital flows. It is also shown that high maximum loan-to-value ratios for residential mortgages per se do not cause boom-bust cycles without international capital flows in the model. Informational overshooting House prices Boom-bust cycles Credit market frictions Financial liberalization 10 2010 34 10 1993 2009 http://www.sciencedirect.com/science/article/B6V85-4YWYYNH-3/2/905819781cff7f0dacdcf361155b2d2d Tomura, Hajime oai:RePEc:eee:dyncon:v:35:y:2011:i:1:p:1-242011-03-25RePEc:eee:dyncon article This paper surveys learning-to-forecast experiments (LtFEs) with human subjects to test theories of expectations and learning. Subjects must repeatedly forecast a market price, whose realization is an aggregation of individual expectations. Emphasis is given to how individual forecasting rules interact at the micro-level and which structure they co-create at the aggregate, macro-level. In particular, we focus on the question whether the evidence from laboratory experiments is consistent with heterogeneous expectations. Heterogeneous expectations Bounded rationality Learning Heuristics switching 1 2011 35 1 1 24 http://www.sciencedirect.com/science/article/B6V85-516664W-1/2/fa8870b9280beea17538e420d0ddeaf0 Hommes, Cars oai:RePEc:eee:dyncon:v:34:y:2010:i:9:p:1582-15952011-03-25RePEc:eee:dyncon article We study the extent of empirical information that can be obtained from alternative structural New Keynesian inflation equations concerning the average duration of prices in the United States, given that such specifications may be hard to identify. Using four different indexation and real-wage-rigidity-based models, in conjunction with identification-robust econometric methods, we evaluate the precision of Calvo parameter estimates. While results are sensitive to calibration and instrument selection, we find confidence bounds on the average duration of prices that line up with available micro-founded studies, statistically significant coefficients for the forcing variables, and non-zero estimates on the coefficient of lagged inflation.
Sticky-price Calvo model Structural estimation Weak identification Indexation Real wage 9 2010 34 9 1582 1595 http://www.sciencedirect.com/science/article/B6V85-50XTSXC-1/2/16051a124bef0d25355e492a314a2a99 Dufour, Jean-Marie Khalaf, Lynda Kichian, Maral oai:RePEc:eee:dyncon:v:34:y:2010:i:9:p:1627-16502011-03-25RePEc:eee:dyncon article We model a credit network characterized by credit relationships connecting (i) downstream (D) and upstream (U) firms and (ii) firms and banks. The net worth of D firms is the driver of fluctuations. The production of D firms and of their suppliers (U firms), in fact, is constrained by the availability of internal finance--proxied by net worth--to the D firms. The structure of credit interlinkages changes over time due to an endogenous process of partner selection, which leads to the polarization of the network. At the aggregate level, the distribution of growth rates exhibits negative skewness and excess kurtosis. When a shock hits the macroeconomy or a significant group of agents in the credit network, a bankruptcy avalanche can follow if agents' leverage is critically high. In a nutshell we want to explore the properties of a network-based financial accelerator. Business fluctuations Financial instability Bankruptcy chains 9 2010 34 9 1627 1650 http://www.sciencedirect.com/science/article/B6V85-50CDSG9-4/2/fcb6add120052321ce59b91285e82ead Delli Gatti, Domenico Gallegati, Mauro Greenwald, Bruce Russo, Alberto Stiglitz, Joseph E. oai:RePEc:eee:dyncon:v:35:y:2011:i:1:p:163-1742011-03-25RePEc:eee:dyncon article This paper considers the problem of how to price a conspicuous product when the economy is in a recession that disrupts capital markets. A conspicuous product in this context is a luxury good for which demand is increasing in brand image. Brand image here means the ability of a consumer to impress observers by conspicuously displaying consumption of the good. Brand image is built up when the good is priced high enough to make it exclusive, and eroded if the good is discounted. Recession is modeled as having two effects: it reduces demand and it freezes capital markets so borrowing is not possible. In pricing the conspicuous product the firm faces the following trade-off. Reducing price helps maintain sales volume and cash flow in the face of reduced demand, but it also damages brand image and thus long-term demand. The paper analyzes the firm's pricing policy facing scenarios of mild, intermediate and severe recessions, while taking the threat of bankruptcy into account. For an intermediate recession the optimal solution is history-dependent. The results have implications for policy interventions in capital ma
{"url":"http://oai.repec.openlib.org/?verb=ListRecords&set=RePEc:eee:dyncon&metadataPrefix=amf","timestamp":"2014-04-20T19:15:02Z","content_type":null,"content_length":"1048781","record_id":"<urn:uuid:00416347-65a8-4c7a-8458-211743049c87>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Conjunctive Normal Form <logic> (CNF) A logical formula consisting of a conjunction of disjunctions of terms where no disjunction contains a conjunction. Such a formula might also be described as a product of sums. E.g. the CNF of (A and B) or C is (A or C) and (B or C). Contrast Disjunctive Normal Form. Last updated: 1995-12-10
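For illustration, the rewrite implied by this definition -- push negations in to the atoms, then distribute "or" over "and" -- can be sketched in a few lines of Python. The mini-AST below (nested tuples tagged "and"/"or"/"not", with string atoms) is an assumption of this sketch, not part of the FOLDOC entry, and the naive distribution can blow up exponentially on large formulas:

    # Formulas: an atom string, or ("not", f), ("and", f, g), ("or", f, g).
    def nnf(f):
        # Negation normal form: push "not" down to the atoms (De Morgan).
        if isinstance(f, str):
            return f
        if f[0] == "not":
            g = f[1]
            if isinstance(g, str):
                return f
            if g[0] == "not":
                return nnf(g[1])
            if g[0] == "and":
                return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
            return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
        return (f[0], nnf(f[1]), nnf(f[2]))

    def cnf(f):
        # Distribute "or" over "and" on a formula already in NNF.
        f = nnf(f)
        if isinstance(f, str) or f[0] == "not":
            return f
        a, b = cnf(f[1]), cnf(f[2])
        if f[0] == "and":
            return ("and", a, b)
        if not isinstance(a, str) and a[0] == "and":
            return ("and", cnf(("or", a[1], b)), cnf(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", cnf(("or", a, b[1])), cnf(("or", a, b[2])))
        return ("or", a, b)

    print(cnf(("or", ("and", "A", "B"), "C")))
    # ('and', ('or', 'A', 'C'), ('or', 'B', 'C')) -- the dictionary's example.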
{"url":"http://foldoc.org/Conjunctive+Normal+Form","timestamp":"2014-04-19T17:04:40Z","content_type":null,"content_length":"5038","record_id":"<urn:uuid:03579bd7-c955-4561-95d8-295f09882059>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
How to control the boundary regularity of the Legendre transformation domain of a convex function

Let f(x) be a strongly convex smooth function (its Hessian matrix is positive definite) defined on a convex domain D, and introduce the Legendre transformation $$x=(x_1,\dots,x_n)\rightarrow (\xi_1,\dots,\xi_n),\qquad \xi_i=\frac{\partial f}{\partial x_i},$$ $$u(\xi_1,\dots,\xi_n)=\sum_i x_i\xi_i-f.$$ The Legendre transformation domain W is defined by $$W=\Big\{(\xi_1,\dots,\xi_n)\ \Big|\ \xi_i=\frac{\partial f}{\partial x_i},\ x\in D\Big\}.$$ I want to know the regularity of the boundary of W (we can assume the domain W is bounded): what conditions make the boundary $\partial W$ smooth or $C^2$? Tags: convexity, convex-geometry

If $f$ is strongly convex and defined in a nbd of $\bar D$, its gradient $\nabla f$ is a (monotone) diffeomorphism, so $\partial W$ is $C^2$ if $\partial D$ is $C^2$ and $f$ is $C^3$ on a nbd of $\bar D$. – Pietro Majer Jun 2 '11 at 17:26

In my question, f(x) may be defined on the whole $\mathbb{R}^n$ (this situation is my interest), for example, the following function (known as a hyperbolic affine hypersphere): $$f(x_1,\dots,x_n)=\frac{1}{x_1\cdots x_n},\quad x_i>0,\ 1\leq i\leq n.$$ Choosing suitable coordinates, this graph can be represented by another function $\tilde{f}$ defined on the whole $\mathbb{R}^n$, and the Legendre transformation domain of $\tilde{f}$ is a simplex. – fible Jun 5 '11 at 2:28
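A one-dimensional example may help make the boundary question concrete (this is an editorial illustration, not part of the original thread; "strongly convex" is taken in the question's sense of a positive definite Hessian). Take $$f(x)=e^x,\quad D=\mathbb{R},\qquad \xi=f'(x)=e^x,\qquad u(\xi)=x\xi-f(x)=\xi\log\xi-\xi.$$ Here $W=\{e^x : x\in\mathbb{R}\}=(0,\infty)$: even though $D$ has no boundary at all, $W$ acquires the boundary point $\xi=0$, which is a limit of the gradient map attained at no finite $x$. This is exactly the situation excluded by the first comment's assumption that $f$ be defined (and strongly convex) on a neighbourhood of $\bar D$ for bounded $D$; when $D$ is all of $\mathbb{R}^n$, as in the hyperbolic affine hypersphere example, $\partial W$ is produced by the behaviour of $\nabla f$ at infinity rather than by $\partial D$.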
{"url":"http://mathoverflow.net/questions/66748/how-do-control-the-boundary-regularity-of-the-legendre-transformation-domain-fr?answertab=active","timestamp":"2014-04-20T06:28:27Z","content_type":null,"content_length":"49478","record_id":"<urn:uuid:babd3a2d-8ad3-43f0-a2df-82734c0d8811>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Ryan W. Porter - Learn
□ Books:
● Structure and Interpretation of Computer Programs (SICP): My first exposure to this book was through 6.001 (R.I.P.) as a freshman. Over ten years later, I still learn a lot every time I open it.
● Introduction to Algorithms (CLR): Provides consistently clear and precise explanations of algorithms and their analyses. Useful as both a reference (I certainly can't claim to have read it cover-to-cover) and as a guide to how to develop and analyze algorithms. Yes, I still refer to it as CLR, not CLRS (sorry, Stein).
● Introduction to Linear Algebra: It's embarrassing how badly my knowledge of the fundamentals has eroded since my undergrad days. As part of an effort to repair the damage, I have been watching lectures and working through problem sets from various courses provided by MIT OpenCourseWare. One of the best is 18.06, mainly due to the passion of Gilbert Strang, and this is his book which accompanies the course.
● Linear Algebra Done Right: A very good complement to other Linear Algebra texts, because it approaches the subject from a very different perspective (in particular, determinants are relegated to the closing chapter). Some might even call it orthogonal (sorry). It is also more theoretical than Strang's book, and thus serves as a good second book on the topic. Both have good sets of problems for the self-learner to work through.
● Convex Optimization: Stephen Boyd, a co-author, is an excellent lecturer, and this book accompanies his EE364A course, which has both video lectures and assignments online. The book has a nice combination of proofs and applications, making it useful as an advanced introductory text.
● Apostol's Calculus: Volume I and Volume II: These books provided the foundation for my efforts to repair my Calculus skills. In comparison to the books I used when first learning Calculus, these provide deeper explanations and more proofs. As a result, they have enabled me to do more than simply refresh forgotten facts and techniques.
● Options, Futures & Other Derivatives: After I entered the world of finance, this book provided me with a good introduction to many areas within it. It also has a decent set of exercises and a Solutions Manual. While the book serves its intended purpose well, it really is only an introduction. Many times I was frustrated by the appearance of a formula without any explanation for how it was derived (although, in fairness, the author does provide some derivations on the book's website).
● Holub on Patterns: Leads off with a couple excellent essays on OO design (including a classic on why getter and setter methods are evil). The heart of the book is two deeply annotated programs that each tie together about half of the GOF Design Patterns. Reading and working through these programs is like looking over the shoulder of a mentor.
● Mastering Regular Expressions: This material was more relevant to me when I worked at Amazon, but it's hard to avoid problems where regular expressions are the best tools for the job. (It's also easy to find problems where they are the worst tools for the job, leading you down the path to the infamous "two problems".) This is the first and last book you need on the topic.
● Effective Java: The best book on Java, bar none. I can't count the number of errors I've seen that would have been avoided by following the advice in this book. (Granted, the set is probably countable-- I just can't count that high.)
I no longer use the language on a regular basis, but, if I return to it, the first thing I'll do is read the second edition.
● Real World Haskell: The authors do a great job of clearly explaining some of the (many) challenging aspects of this language. Even though I am still a relative novice, and I may never use it in a job, I think that the effort I have invested into learning Haskell will still pay off. Getting my head around topics such as monads has definitely changed the way I think about programming.
□ Projects:
● tapl-haskell: I've become more and more of a fan of strong, static typing, mainly for the purpose of eliminating as many errors as possible, as early as possible. To that end, I started working through Types and Programming Languages (TAPL), by Benjamin C. Pierce. To reinforce the material in the book, and to help teach myself Haskell, I started this project, which aims to provide Haskell ports of all of the OCaml implementations provided by Pierce. Contributions and/or feedback appreciated.
● sample-nash: This project contains Haskell implementations of the two algorithms described in a paper I co-authored, entitled Simple Search Methods for Finding a Nash Equilibrium. Sadly, I no longer have a copy of the original C++ implementations used to produce the results described in the paper. So, I took that as an opportunity to further explore Haskell.
{"url":"http://www.ryanwporter.com/learn.html","timestamp":"2014-04-18T08:04:55Z","content_type":null,"content_length":"8196","record_id":"<urn:uuid:c59b8733-845e-4ed6-a61b-aa92f400e105>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Alternative Minimum Tax: Should it replace the regular income tax? Some commentators, with varying degrees of seriousness, have suggested that Congress repeal the regular individual income tax and instead make the alternative minimum tax (AMT) the only federal income tax on individuals. Proponents claim that the AMT applies a lower, nearly flat rate to a broader base and therefore would raise revenue more efficiently than the regular income tax. However, that characterization of the AMT applies only to the highest-income taxpayers, the original target of the tax. Most taxpayers affected by the AMT today pay a higher marginal rate on a narrower income base than they would if they were subject to only the regular income tax. In addition, the AMT imposes large marriage penalties and causes "bracket creep," since it is not indexed for inflation.
• The AMT has only two statutory rates: 26 percent on the first $175,000 of alternative minimum taxable income (AMTI), and 28 percent on AMTI above that level. But the AMT actually imposes four marginal tax rates, not two, because the AMT exemption phases out as income rises. The exemption phases out at a rate of 25 cents for each extra dollar of AMTI above an income threshold ($150,000 of AMTI for married couples and $112,500 for singles). The phase-out thus eliminates the exemption entirely for couples with AMTI above $330,000 ($247,500 for singles). As a result, taxpayers in the phase-out range actually face higher effective tax rates of 32.5 or 35 percent, the latter equal to the top rate under the regular income tax (see figure 1).
• Significantly more AMT taxpayers (79 percent in 2009) face higher effective marginal tax rates under the AMT than they would under the regular income tax. That figure will rise to 90 percent by 2010 as the AMT ensnares more and more middle-income filers who would have faced statutory rates of 15 or 25 percent under the regular income tax (see figure 2).
• In addition, the relatively high AMT exemption means that the amount of income subject to tax under the AMT is often less than it is under the regular income tax. In 2009, 58 percent of AMT taxpayers will have more income subject to tax under the regular tax than they would have under the AMT. That number will rise to 89 percent by 2010. Thus the conventional wisdom that the AMT applies a lower marginal tax rate to a broader income base is incorrect. In fact, exactly the opposite is true. Most AMT taxpayers face a higher marginal rate applied to a narrower tax base than they would if they were in the regular income tax system.
• The AMT creates enormous marriage penalties. In 2006, if the AMT had been the only tax system, a married couple with two children in which each spouse earns $50,000 would have paid $5,837 more in tax than if they were single and one spouse claimed custody of the children. The marriage penalty grows even larger at higher incomes, reaching a maximum of over $15,000 for couples with incomes of about $450,000 (although marriage penalties under current law are nearly as large at such high income levels).
• The AMT incurs significant "bracket creep": the lack of indexing for inflation means that the tax rises in real terms as prices rise, unlike the regular income tax, which is indexed. Inflation pushes more income above the unindexed exemption threshold and, for high-income taxpayers, subjects more income to the exemption phase-out and the 28 percent AMT rate.
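The phase-out arithmetic in the first bullet can be checked directly. A minimal Python sketch for the married-filing-jointly figures quoted above (the $45,000 exemption is implied by those figures, since the phase-out runs from $150,000 to $330,000 at 25 cents per dollar and (330,000 - 150,000) x 0.25 = 45,000; actual AMT computation involves many more provisions, so this is illustrative only):

    def amt_marginal_rate(amti, exemption=45_000.0, phaseout_start=150_000.0,
                          bracket=175_000.0):
        # The exemption shrinks by 25 cents per extra dollar of AMTI over the threshold.
        ex = max(0.0, exemption - 0.25 * max(0.0, amti - phaseout_start))
        base = amti - ex                        # income actually taxed at AMT rates
        statutory = 0.26 if base <= bracket else 0.28
        # Inside the phase-out, each extra dollar of AMTI adds $1.25 of taxed base,
        # turning the 26/28 percent statutory rates into 32.5/35 percent effective rates.
        phaseout_end = phaseout_start + 4 * exemption   # $330,000, as in the text
        in_phaseout = phaseout_start < amti < phaseout_end
        return statutory * (1.25 if in_phaseout else 1.0)

    for amti in (100_000, 200_000, 280_000, 500_000):
        print(amti, amt_marginal_rate(amti))    # 0.26, 0.325, 0.35, 0.28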
{"url":"http://www.taxpolicycenter.org/briefing-book/key-elements/amt/flat-tax.cfm","timestamp":"2014-04-17T00:49:41Z","content_type":null,"content_length":"58775","record_id":"<urn:uuid:f2f18773-a477-47f7-8b50-9b0a2d1546a8>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Correlated variables in LCA?
Stephen C Messer posted on Thursday, December 14, 2006 - 5:34 pm Folks: Please help me understand why I get a WARNING: "variable is uncorrelated with other variables within class" when conducting my LCA. It's not that I don't know that I could set the covariances free... the question is SHOULD I? In CFA for example, the tradition is to leave the observed variables' residuals uncorrelated. So here is my naive question: Is there something I am missing in LCA where I SHOULD be allowing the vars within classes to be correlated? Regardless of theory? If I allow all the vars to be correlated within classes (but constrained to be equal across classes), of course the model will fit the data "better" but what are the implications? Thanks for all the enlightenment you can provide
Linda K. Muthen posted on Thursday, December 14, 2006 - 5:59 pm It's just a warning in case you had variables that you didn't realize you were using in the analysis. For example, if you forgot to include a USEVARIABLES statement when you were not analyzing all variables in a data set.
Stephen C Messer posted on Friday, December 15, 2006 - 7:50 am Thanks Linda. Are there any scenarios where I might want to allow the variables to correlate within classes?
Boliang Guo posted on Friday, December 15, 2006 - 8:12 am If you correlate two residuals in one class, this will violate the local independence assumption of LCA, I think.
Stephen C Messer posted on Friday, December 15, 2006 - 9:14 am thanks guo
Bengt O. Muthen posted on Friday, December 15, 2006 - 9:56 am Note, however, that the local independence assumption of conventional LCA is not a sacred assumption - other, more flexible models than LCA could be very useful in many applications. So, yes, I would think there are scenarios where you might want to correlate variables within classes. We have one such example in the UG ex 7.16 with a reference. And of course factor mixture modeling is built on within-class correlation modeling - see for example the Muthen-Asparouhov (2006) article on tobacco dependence on our web site under Papers, Factor Mixture Analysis.
Stephen C Messer posted on Friday, December 15, 2006 - 1:36 pm Outstanding. Thanks!
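The local-independence point made in this thread is easy to see in a small simulation: in a conventional LCA, the indicators are independent once class membership is fixed, so any marginal correlation between them comes entirely from the mixing of classes. A minimal sketch (numpy only; the class proportion and item probabilities are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    cls = rng.random(n) < 0.4            # latent class membership (40% in class 1)
    p1 = np.where(cls, 0.9, 0.2)         # P(item1 = 1 | class)
    p2 = np.where(cls, 0.8, 0.3)         # P(item2 = 1 | class)
    y1 = (rng.random(n) < p1).astype(float)
    y2 = (rng.random(n) < p2).astype(float)

    print(np.corrcoef(y1, y2)[0, 1])               # marginally correlated, about 0.34
    print(np.corrcoef(y1[cls], y2[cls])[0, 1])     # roughly 0 within class 1
    print(np.corrcoef(y1[~cls], y2[~cls])[0, 1])   # roughly 0 within class 2

Within-class correlations near zero are exactly what conventional LCA assumes; freeing them (as in factor mixture models or the UG ex 7.16 cited above) relaxes that assumption.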
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=1901","timestamp":"2014-04-17T06:45:27Z","content_type":null,"content_length":"24127","record_id":"<urn:uuid:9b573339-92e3-4d30-8d88-b2fab63da582>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Little Neck SAT Math Tutor ...The Computer Science program (ABET accredited) at Loyola requires Bachelors of Science to take 4 semesters (2 years) worth of the C programming language. In 2009, I received a Bachelor of Science degree (with honors) in Computer Science from Loyola College in Maryland. I graduated with an overall GPA of 3.65 and major GPA of 3.76. 53 Subjects: including SAT math, reading, algebra 1, GRE ...I am an experienced mathematics and science teacher, with a wide range of interests and an extensive understanding of physics and mathematics. I love to talk with students of all ages about these subjects, and I would like to help you to appreciate their fundamental simplicity and beauty while g... 25 Subjects: including SAT math, chemistry, physics, calculus ...I also help foreign graduate students perfect their grammar and delivery in writing. I believe in building confidence while teaching material. We all excel faster in some areas and slower in 25 Subjects: including SAT math, reading, writing, English ...I love teaching math because I push my students to understand math conceptually. When I teach math, I use a lot of concrete objects and pictures so that my students understand on a deeper level, and these techniques spark curiosity in my students. While teaching English in Korea, I learned many... 20 Subjects: including SAT math, English, reading, writing ...Knowledge is power and it transformed me in many ways. I have had the pleasure of helping hundreds and hundreds of students for the last 15 years and see them improve. I love to see students empowered, realize their own potential and master challenges that they never thought possible before. 55 Subjects: including SAT math, English, reading, calculus
{"url":"http://www.purplemath.com/little_neck_sat_math_tutors.php","timestamp":"2014-04-20T11:15:20Z","content_type":null,"content_length":"24099","record_id":"<urn:uuid:66bedb52-1caf-4ded-a827-423fc43f295a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Anniversary cabinet with a wooden combination lock The cabinet was made as a wedding gift for my son and daughter in law. The cabinet itself follows conventional design and construction techniques - what makes the anniversary cabinet different is the combination locking mechanism. The four drawers require you to know a code (combination) in order to open them. Each drawer has a different combination. The idea then, is to give the combination for one drawer at a time (on the wedding anniversary). To get things going though the combination for the first drawer (the bottom one) is given "free" at the time the gift is presented so that the general operation can be explained and tested. In this case the guests at the wedding reception were given a sheet of paper and an envelope so that they could write a note to the bride and groom. The envelopes were sealed and placed at random in the top three drawers. So on the anniversary when the combination is revealed the contents (envelopes or other gifts) can be accessed. Of course this may not work as intended, as the level of bribery re getting the combination early could come into play. The video below shows how the combination locking system works and the Steps following the video give details on the construction of the cabinet and the locking system. Step 2: Some stages of cabinet construction Mortise and tenon joints, pocket holes, and dovetails are the primary joinery methods used in the construction of the cabinet and drawers. But any kind of joinery can be used to make a cabinet/drawer for the purposes of including a combination lock like the one described in this instructable. Step 3: Making the lock rod plates Each drawer has a "lock rod plate" that mounts with two screws to the drawer. The drill press and the table saw (with a sled) make this operation easy. I used scrap wood spacers to keep the spacing between openings consistent. There are 128 possible combinations for each drawer based on a little simple binary math. Because each rod has a position where it is correct (1), and its opposite (0), each rod's unlocked position can be said to represent a binary bit. The total number of working combinations can be worked out by determining what the largest number is that could be represented by the number of binary bits corresponding to the number of dowels. The range here is 0000000 to 1111111, with the possibilities in between being variants such as 1011010 or 1000001, and so-on, with each rod being aligned either as intended or in the reversed position. *Since the locked positions of the dowel can be thought of as continuous representations between 0.0 and 1.0, we can ignore them for this calculation, but being able to divide them into discrete chunks will be useful later on. 1111111 is the largest number that can be represented in 7 bits, and that number in decimal is 127. If we include 0000000 (0), which represents all of the dowels aligned as intended, that brings the total up to 128 possible arrangements per drawer. It's not terrible security, actually, considering that an apartment door that has been set up to work on a master key system might accept as many as 35 other keys. If we assume a fair level of precision and no angling of the cuts that might help to align dowels that are close but not on the money, we get 16 possible positions for each of the seven dials, of which eight are duplicates of their opposite position. 
That means 8^7 possible positions or 2,097,152 total positions if an attacker knows how the chest is made and is smart enough to only move the rod on an arbitrary 180 degree slice to prevent duplicating their efforts. In a six drawer system, that means that as long as no two drawers share the exact same dowel position to open, there are 128 x 6 combinations that will open at least one drawer, half of which can be thrown out by an attacker, leaving 64 x 6, or (Remember that 1111111 will open a drawer just as easily as 0000000, so why try both?), or 384 of the 2,097,152 total possible combinations. Assuming this, that means that there are still 2,096,768 combinations that will not open any drawer. If I've carried all my ones and put my decimal points in the right places, that makes the odds of putting in a code at random that opens one of the six drawers 3:16,384 or roughly 1 in 5,461. If the machining is extremely tight, you could increase the number of possible positions to 32 or even 64, though you might need to use steel or brass rods to achieve this level of precision, and at 64 positions per dial you would increase the number of possible positions to 4,398,046,511,104, or roughly 4.4 trillion total combinations while still only allowing 768 total combinations that would open a drawer at random, or only a single position per drawer if you only notch one side of the rod so that it doesn't operate in opposing positions. Friction locking the rods when the cover is in place would effectively prevent an attacker approximating the correct position and coaxing it in to place with brute force. Anyone who does mathematics or cryptography for a living is invited to double-check my work, but I think I got it right. A motivated attacker could feel their way from right to left, or left to right, or outside to middle by turning the knobs a half step at a time in pairs until they got some movement, significantly reducing the number of guesses required to get in. This attack could be thwarted by putting each drawer on a guide track with gliders or bearing slides to reduce lateral movement, but might be overkill depending on the actual level of security desired. One could simply smash the chest or drill out the dowels, but that's not really the point, is it? I hope I answered your question, and I think I got my numbers right. Again, if you're reading this and you're a cryptographer or professional mathematician, feel free to correct anything I've gotten wrong! Hi churusaa - Wow, thanks for the work on figuring this out! As you can see from others there is disagreement in the final calculated outcome and approach to solving the problem. Not being into this kind of determination (not a clue) I appreciate all input that results in some answer as to how many possible combinations are involved to open one of the drawers. If you have access to others with like abilities perhaps you might pass the page link on to them to see if your particular calculations can be verified. If you do get a chance to do this please let me know. Thanks again. Thanks for sharing certainly inspirational. Enter in a contest here. Possible improvement a hutch or a fire box, (Lord forbid that someone would ever need it.) Thanks tseay - I entered an Instructibles contest some time ago and won first place. Got real nice drill and impact driver as a prize. This is ingenious. I love the design and the thought that went into it. The lock mechanism is a work of art. Did you design the lock yourself or is it based off of a model? 
Either way, the details you put into it are incredible. Haunted Spider - Yes I designed the locking system myself and this is the second generation of the idea. The first cabinet (a totally different cabinet design as it was a jewelry cabinet) worked fine but it was easier to "break the code" as it didn't have the secondary locking system (closing the lock rod cover to enable opening the drawers). Thanks for the comments! I was thinking this would be a good jewelry cabinet. It seems like this might be a historical idea and I love it. But if this is an original idea then filing a patent is fairly simple to do. Since you likely already have the drawings that define your design, just stamp "patent pending" on your documents, make copies, and file. This is awesome. Love it, so many little details, very skilled and thoughtful Great ideas and nice design. If you have numbers instead of little dots on each node, it would be easier for you to remember the combination. Right now I refer to points of the compass when I am thinking or talking about the different settings. But numbers would definitely be easier. Assuming 8 positions for each "flat", there would be 8 possible combinations if you had only one locking bar. If you had two, you'd have 8 * 8 combinations (8 positions for the second bar for each of the 8 positions for the first bar), or 64 combinations. If you had three locking bars, you'd have 8 * 8 * 8 combinations (all of the combinations you had for the first two bars would be repeated for each possible spot on the third bar), or 512. Continuing on, each bar adds a factor of 8 to the number of combinations, for a total of 8 * 8 * 8 * 8 * 8 * 8 * 8 = 2,097,152 possible combinations. But, if you take into account that your "flats" don't have to be exactly on the eights, and could really be wherever you felt like putting them, then you have an infinite number of combinations. 1048576, because the exact opposite position also opens it. Divide by 2 :) actually it would be 823,543 if there are 8 positions on each spindle (not using any positions in between pips) 2 of them will open the lock and 6 will not. so if we combine the 2 opens into one permutation (since if it is in either position it will open) and take the 6 locked positions that will give us 7 per spindle 7 * 7 * 7 * 7 * 7 * 7 * 7 = 823,543 You're treating the two separate opens as one. You could do the same with the ones that don't open, saying that they are just one. Let's do a simplified example of two knobs. Assume the first knob opens on 1. That means it would also open on 5, but wouldn't on 2, 3, 4, 6, 7 or 8. Now the second knob opens on 2, and therefore also 6, but not on 1, 3, 4, 5, 7, or 8. You have 8 * 8 = 64 possible combinations of numbers. But only the combinations 1,2; 1,6; 5,2; 5,6 will open it. We still have 64 combinations. The fact that you have 4 solutions instead of just one doesn't change the number of combinations.
if you pause it just so, it would be pretty easy to steal the combinations ahead of time. They already knew all combinations before I did the video and instructables. It's hard to keep a secret! This is very cool, since you have a secret compartment you might think about trying to enter it into the spy contest too! Wish I had wood tools, this is very very cool and I would love one! Nice idea, execution and instructable! 5 Stars! I'm posting pictures of the three secret compartments on the actual wedding anniversary - today! All three compartments have false bottoms. The center one is accessed using a magnet or magnetic material. The left and right hand false bottoms are raised by pressing down on the back of the false bottoms. I just posted a video on youtube showing all of this. You can view it here: http:// BUT WAIT, THERE'S MORE! Not obvious unless you remove a drawer and look at the bottom. Each drawer has a collage of family pictures secured in place with a plexiglass cover. Thanks poofrabbit! Also thanks for the suggestion re entering into the spy contest. Do you know the process to do this for my project... what I'm wondering is should I modify the current entry to add the secret compartments (I thought I read somewhere that you are not allowed to add photos/videos after the project is published?) or would I have to do a completely new instructable showing the secret compartments? Lovely cabinet - worthy of being passed down. Might be handy to document the combination somehow on the piece, but ordinarily hidden so that it doesn't become a variant of The Musgrave Ritual: 'Whose was it?' 'Grandpa's' ... 'How was it opened?' 'First the second, then the fourth, then six and three'. : ) Good point. Not shown in the instructable are three separate "secret" compartments - maybe put the combinations there - oh yeah, probably not a good idea :) Gorgeous would work, and an evil, evil idea! :) Thanks mrmath - I didn't mean to be evil though :). With your user name maybe you can answer the question re the number of combinations (no pressure though). I just posted the answer as a new comment, and can't delete it for some reason. But it's there for you.
{"url":"http://www.instructables.com/id/Anniversary-cabinet-with-a-wooden-combination-lock/CG35DSOH3NNEV1U","timestamp":"2014-04-23T13:32:42Z","content_type":null,"content_length":"186128","record_id":"<urn:uuid:5d9a5df2-8b42-4ed0-bb00-6d75e6d12f6d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Pseudo-random graphs Results 1 - 10 of 52 , 1994 "... In this paper we introduce the notion of the limited-depth minor exclusion and show that graphs that exclude small limited-depth minors have relatively small separators. In particular, we prove that for any graph that excludes K_h as a depth-l minor, we can find a separator of size O(lh^2 log n + n/l) ..." Cited by 36 (3 self) Add to MetaCart In this paper we introduce the notion of the limited-depth minor exclusion and show that graphs that exclude small limited-depth minors have relatively small separators. In particular, we prove that for any graph that excludes K_h as a depth-l minor, we can find a separator of size O(lh^2 log n + n/l). This, in turn, implies that any graph that excludes K_h as a minor has an O(h√(n log n))-sized separator, improving the result of Alon, Seymour, and Thomas for the case where h ≫ √(log n). We show that the d-dimensional simplicial graphs with constant aspect ratio, defined by Miller and Thurston, exclude K_h minors of depth L for h = Ω(L^{d-1}) when d is a constant. These graphs arise in finite element computations. Our proof of separator existence is constructive and gives an algorithm to find the t-cut-covers decomposition, introduced by Kaklamanis, Krizanc, and Rao, in graphs that exclude small depth minors. This has two interesting implications. F... , 2002 "... We describe a new algorithm for generating all maximal bicliques (i.e. complete bipartite, not necessarily induced subgraphs) of a graph. The algorithm is inspired by, and is quite similar to, the consensus method used in propositional logic. We show that some variants of the algorithm are totally p ..." Cited by 26 (4 self) Add to MetaCart We describe a new algorithm for generating all maximal bicliques (i.e. complete bipartite, not necessarily induced subgraphs) of a graph. The algorithm is inspired by, and is quite similar to, the consensus method used in propositional logic. We show that some variants of the algorithm are totally polynomial, and even incrementally polynomial. The total complexity of the most efficient variant of the algorithms presented here is polynomial in the input size, and only linear in the output size. Computational experiments demonstrate its high efficiency on randomly generated graphs with up to 2,000 vertices and 20,000 edges. - In STOC , 2003 "... Abstract. We resolve the following conjecture raised by Levin together with Linial, London, and Rabinovich [16]. For a graph G, let dim(G) be the smallest d such that G occurs as a (not necessarily induced) subgraph of Z^d_∞, the infinite graph with vertex set Z^d and an edge (u, v) whenever ||u − v ..." Cited by 21 (3 self) Add to MetaCart Abstract. We resolve the following conjecture raised by Levin together with Linial, London, and Rabinovich [16]. For a graph G, let dim(G) be the smallest d such that G occurs as a (not necessarily induced) subgraph of Z^d_∞, the infinite graph with vertex set Z^d and an edge (u, v) whenever ||u − v||_∞ = 1. The growth rate of G, denoted ρ_G, is the minimum ρ such that every ball of radius r > 1 in G contains at most r^ρ vertices. By simple volume arguments, dim(G) = Ω(ρ_G). Levin conjectured that this lower bound is tight, i.e., that dim(G) = O(ρ_G) for every graph G. Previously, it was unknown whether dim(G) could be bounded above by any function of ρ_G. We show that a weaker form of Levin's conjecture holds by proving that dim(G) = O(ρ_G log ρ_G) for any graph G.
We disprove, however, the specific bound of the conjecture and show that our upper bound is tight by exhibiting graphs for which dim(G) = Ω(ρG log ρG). For several special families of graphs (e.g., planar graphs), we salvage the strong form, showing that dim(G) = O(ρG). Our results extend to a variant of the conjecture for finite-dimensional Euclidean spaces posed by Linial [15] and independently by Benjamini and Schramm [22]. 1. - Proc. London Math. Soc , 1983 "... A number of papers [1,2,3,4,6] recently have been concerned with the following question. What is the minimum number s{n) of edges a graph G on n vertices can have so that any tree on n vertices is isomorphic to some spanning tree of G? We call such a graph universal for spanning trees. Since Kn, the ..." Cited by 18 (4 self) Add to MetaCart A number of papers [1,2,3,4,6] recently have been concerned with the following question. What is the minimum number s{n) of edges a graph G on n vertices can have so that any tree on n vertices is isomorphic to some spanning tree of G? We call such a graph universal for spanning trees. Since Kn, the complete graph "... In recent years, a variety of graph optimization problems have arisen in which the graphs involved are much too large for the usual algorithms to be effective. In these cases, even though we are not able to examine the entire graph (which may be changing dynamically), we would still like to deduce v ..." Cited by 13 (2 self) Add to MetaCart In recent years, a variety of graph optimization problems have arisen in which the graphs involved are much too large for the usual algorithms to be effective. In these cases, even though we are not able to examine the entire graph (which may be changing dynamically), we would still like to deduce various properties of it, such as the size of a connected component, the set of neighbors of a subset of vertices, etc. In this paper, we study a class of problems, called distance realization problems, which arise in the study of Internet data traffic models. uppose we are given a set S of terminal nodes, taken from some (unknown) weighted graph. A basic problem is to reconstruct a weighted graph G including S with possibly additional vertices, that realizes the given distance matrix for S. We will first show that this problem is not only difficult but the solution is often unstable in the sense that even if all distances between nodes in S decrease, the solution can increase by a factor proport... - In Proc. 43’rd annual IEEE Symp. on Foundations of Computer Science , 2002 "... We show that there exists a graph G with n 2 nodes, where any forest with n nodes is a node-induced subgraph of G. Furthermore, the result implies existence of a graph with n nodes that contains all n-node graphs of fixed arboricity k as node-induced subgraphs. We provide a lower bound ..." Cited by 12 (0 self) Add to MetaCart We show that there exists a graph G with n 2 nodes, where any forest with n nodes is a node-induced subgraph of G. Furthermore, the result implies existence of a graph with n nodes that contains all n-node graphs of fixed arboricity k as node-induced subgraphs. We provide a lower bound of the size of such a graph. The upper bound is obtained through a simple labeling scheme for parent queries in rooted trees. - APPROX-RANDOM 2001, LNCS 3139 (2001) 170-180 the electronic journal of combinatorics 9 , 2001 "... Abstract. Let H be a family of graphs. We say that G is H-universal if, for each H ∈H,the graph G contains a subgraph isomorphic to H. 
Let H(k, n) denote the family of graphs on n vertices with maximum degree at most k. For each fixed k and each n sufficiently large, we explicitly construct an H(k, ..." Cited by 10 (9 self) Add to MetaCart Abstract. Let H be a family of graphs. We say that G is H-universal if, for each H ∈H,the graph G contains a subgraph isomorphic to H. Let H(k, n) denote the family of graphs on n vertices with maximum degree at most k. For each fixed k and each n sufficiently large, we explicitly construct an H(k, n)-universal graph Γ (k, n) with O(n 2−2/k (log n) 1+8/k) edges. This is optimal up to a small polylogarithmic factor, as Ω(n 2−2/k) is a lower bound for the number of edges in any such graph. En route, we use the probabilistic method in a rather unusual way. After presenting a deterministic construction of the graph Γ (k, n), we prove, using a probabilistic argument, that Γ (k, n) isH(k, n)-universal. So we use the probabilistic method to prove that an explicit construction satisfies certain properties, rather than showing the existence of a construction that satisfies these properties. 1Introduction and Main Result For a family H of graphs, a graph G is H-universal if, for each H ∈H,the , 1988 "... : One important aspect of efficient use of a hypercube computer to solve a given problem is the assignment of subtasks to processors in such a way that the communication overhead is low. The subtasks and their inter-communication requirements can be modeled by a graph, and the assignment of subtasks ..." Cited by 7 (1 self) Add to MetaCart : One important aspect of efficient use of a hypercube computer to solve a given problem is the assignment of subtasks to processors in such a way that the communication overhead is low. The subtasks and their inter-communication requirements can be modeled by a graph, and the assignment of subtasks to processors viewed as an embedding of the task graph into the graph of the hypercube network. We survey the known results concerning such embeddings, including expansion /dilation tradeoffs for general graphs, embeddings of meshes and trees, packings of multiple copies of a graph, the complexity of finding good embeddings, and critical graphs which are minimal with respect to some property. In addition, we describe several open problems. Keywords: hypercube computer, n-cube, embedding, dilation, expansion, cubical, packing, random graphs, critical graphs. 1 Introduction Let Q n denote an n-dimensional binary cube where the nodes of Q n are all the binary n- tuples and two nodes are - INFORMATION PROCESSING LETTERS , 1999 "... We consider the broadcasting problem in the shouting communication mode in which any node of a network can inform all its neighbours in one time step. In addition, during any time step a number of links less than the edge-connectivity of the network can be faulty. The problem is to find an upper bou ..." Cited by 7 (2 self) Add to MetaCart We consider the broadcasting problem in the shouting communication mode in which any node of a network can inform all its neighbours in one time step. In addition, during any time step a number of links less than the edge-connectivity of the network can be faulty. The problem is to find an upper bound on the number of time steps necessary to complete broadcasting under this additional assumption. Fraigniaud and Peyrat proved for the n-dimensional hypercube that n + O(log n) time steps are sufficient. De Marco and Vaccaro decreased the upper bound to n+7 and showed a worst case lower bound n + 2 for n 3. 
We prove that n + 2 time steps are sufficient. Our method is related to the isoperimetric problem in graphs and can be applied to other networks.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=463606","timestamp":"2014-04-18T02:01:15Z","content_type":null,"content_length":"37556","record_id":"<urn:uuid:53c2fb63-7e03-4a2e-acf6-6e0657431a51>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Binomial expansion and Pascal's triangle

September 24th 2012, 03:00 PM #1
Junior Member, Sep 2012, New York

How many terms are in the expansion of (x + y)^n? (I think it's n + 1, but I'm not sure if that's right.) How does the expansion of (x + y)^n differ from that of (x - y)^n? [Can you explain the steps of how you got the answer?] Thanks a lot.

September 24th 2012, 03:10 PM #2

Re: Binomial expansion and Pascal's triangle
The binomial theorem says:
$(x+y)^n=\sum_{k=0}^n{n \choose k}x^{n-k}y^k$
Using this, can you explain how to answer the questions with certainty?
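For a concrete check (a worked example added here, not part of the original thread): with $n = 3$,
$(x+y)^3 = x^3 + 3x^2y + 3xy^2 + y^3$,
which has $3 + 1 = 4$ terms, matching the $n + 1$ guess. Replacing $y$ with $-y$ gives
$(x-y)^3 = x^3 - 3x^2y + 3xy^2 - y^3$,
so $(x-y)^n$ has the same $n+1$ terms with the same binomial coefficients, but the signs alternate: every term containing an odd power of $y$ is negated.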
{"url":"http://mathhelpforum.com/calculus/204007-binomial-expansion-pascal-s-triangle.html","timestamp":"2014-04-19T11:22:12Z","content_type":null,"content_length":"33974","record_id":"<urn:uuid:0d339864-194c-4ec7-afdd-6d962017a036>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Introducing Wolfram Course Assistant Apps on Android Devices
April 10, 2013

Now available on Android devices via the Google Play store, the Wolfram Algebra Course Assistant and Wolfram Calculus Course Assistant are part of the equation for academic success. In addition to harnessing the computational power of Wolfram|Alpha, the Wolfram Algebra and Wolfram Calculus Course Assistant Apps offer a slick and intuitive user interface. For students taking these courses for the first time, or those who need some assistance on specific algebra and calculus problems, these apps are powerful computational tools.

For example, you can use our custom keyboard to input complex equations and get step-by-step solutions to tough algebra problems:

Or solve indefinite integrals complete with plots and alternate forms:

Android devices are better than ever now that users can make use of Wolfram|Alpha-powered apps. Stay tuned—the Wolfram Algebra and Wolfram Calculus Course Assistant Apps are the first of many more to come.

3 Comments

Finally! Too bad I'm out of college now or I would have bought these.
Posted by Brandon, April 10, 2013 at 12:42 pm

It would be nice to have an ad-supported version of the apps, at least for the main Wolfram|Alpha app.
Posted by Lazza, April 10, 2013 at 3:07 pm

Wow, hope this was available during my school times; I passed my engineering using a Casio scientific calculator. Well, no problem: these days we are using Android apps in different ways. Will definitely give it a try!
Ankit from VentureHire
Posted by VentureHire, June 18, 2013 at 7:38 am
{"url":"http://blog.wolframalpha.com/2013/04/10/introducing-wolfram-course-assistant-apps-on-android-devices/","timestamp":"2014-04-16T17:22:52Z","content_type":null,"content_length":"43721","record_id":"<urn:uuid:580eb932-28fc-4e77-b94c-89b4e246aa78>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
NPOV's Beef Steer

At The Sale Barn

January 9th, 2008
Steers with an average weight of 616 lbs (279 kg) brought $101.75/HW (hundredweight), which is roughly $1.02/lb ($1.02/.45 kg). This is not to say ALL calves sell for this amount. Breed, condition, and weight all play a factor in addition to cash market variables. Additionally, not all of an animal's weight is meat, so the above price already doesn't translate directly to price per pound in the store. Follow this link for a formula to determine how much meat should come from a market animal. According to this link, an average beef cow will dress out at about 63% of its weight. This gives you a ballpark figure of what Chuck might be worth at this time next year. Obviously, he will be raised differently than his herd mates and headed for a much different market, but his value will still be tied to the beef market. As producers, we hope our custom raised steer will bring a certain percentage above sale-barn value. How much will our consumers save and producers gain without the middle-man? Perhaps a better question to ask is, "would you be willing to pay more than $600 for food not ready for your plate?" At this time next year, Chuck will have a long way to go before he's ready for consumption.

January 21st, 2009
606 lb. calves brought $100.50/HW, slightly less than last year. At this point, Chuck's market value is $609.

January 4th, 2010
Chuck is now hanging peacefully in a cool, dark locker at the Minden Meat Market. We don't know his pre-slaughter weight, but we're guessing around 1,100 lbs (499 kg). If we assume the formula mentioned earlier is correct, we can expect 63% of this number, or 693 lbs (314 kg). How do we, then, determine how much we are paying for Chuck? Let's throw out the cost/live animal since the hanging weight will give us more of an idea of the true cost/lb for the end user (eaters), and work with the hanging weight. Follow this link to read about hanging weight. What was our total cost of raising Chuck? Unfortunately, it wasn't easy or even practical to keep track of feeding costs for Chuck, mainly because he always had a bunk mate. Also, during harvest, we fed Chuck corn cleanings for two months, which is essentially free. We think it's safe to use an estimated cost of $1,000, which is also handy since it's a nice, round number. If we divide $1,000 by the aforementioned hanging weight, we're looking at a cost of $1.44/lb ($1.44/.45 kg). If we use average figures from the formulas cited in the above link, we can expect roughly 485 lbs (220 kg) of take-home meat, which bumps our price up to $2.06/lb ($2.06/.45 kg). We will know more about final costs in a few weeks when we get Chuck back home.

4 comments:

I know beef (especially ground beef) is one of the least expensive protein packed foods you can buy - but I am still surprised by such a low price for an entire cow. Is it worth it to you as a farmer to raise cattle? What does it cost to raise a typical steer with food, drugs, vet bills, transportation? And then there is the time you spend working with the steer (feeding it, etc).

Val, these are all good questions that I can attempt to answer. Yes, it is worth it for us to raise cattle for a few reasons. Over the long haul, it has been profitable or at least break-even, we have fairly efficient facilities to raise cattle, and, probably most importantly, I like doing it.
Ten years ago I was ready to sell all of our animals and rent out our facilities, but like other farming activities, raising cattle got in my blood. Additionally, this particular project has been extremely educational for me and is an area I've always been interested in. I look at it as direct, hands-on learning not only as a producer, but as a consumer as well. As for the cost of a typical steer, I'll add that to the original post. I saw an ad for farm raised natural beef this AM for $4.00/# - on the hoof and at the farm. Assuming a 1000# steer, another $150 for transportation and slaughtering (too low, it's been at least 10 years since the last time I slaughtered one of our steers) and a 63% cutout, the $/# of the hamburger, roasts, steaks, liver, tongue, etc from this animal would be $6.59. Looking at these numbers reinforces my decision to sell the calves and buy them back in the grocery store. I've not looked at the cost of hamburger at the natural food stores or the specialty meat shops yet, but at the regular grocery store I am paying $4.49 per pound for 96% lean hamburger.
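As a quick check on the arithmetic in the comment above (assuming, as the commenter does, a 1,000 lb live weight, $150 for transport and slaughter, and the 63% cutout used earlier in the post): ($4.00 x 1,000 + $150) / (0.63 x 1,000 lb) = $4,150 / 630 lb, which is about $6.59 per pound of take-home meat.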
{"url":"http://beefsteer.blogspot.com/2008/01/cost-of-beef.html","timestamp":"2014-04-19T12:16:20Z","content_type":null,"content_length":"43352","record_id":"<urn:uuid:dd61e060-dd07-44b3-9ab8-befef3775ffb>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Hello

Hi Raider0451;

Glad to welcome you to the forum. Don't underestimate what you have learned. I have found that people who learned their math from doing problems are very strong.

Just between you and me: I knew an old dude who had very little mathematical training (or so he claimed). He claimed he trained under Forman S. Acton and George Polya, but I don't believe that. He hated group theory, topology and differential geometry with a passion. If you began with "let g be a group" or "let G be a metric," he would fire you. Sounds like an idiot, doesn't he? One day 2 guys from a prestigious lab and 4 from a great school in Mass. (weenies, as he called them; he had a worse name for me) came over, and 10 tough problems were posed to me, him and the guests. Needless to say, he got all of them right, outscoring me and the weenies.

In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians, are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=127991","timestamp":"2014-04-18T16:24:01Z","content_type":null,"content_length":"16669","record_id":"<urn:uuid:58765e17-6f4b-4524-b574-76360434efdf>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with a modular arithmetic proof

December 11th 2012, 06:32 AM #1
Dec 2012

Show that there is no solution of "x^2 is congruent to 3 modulo 5".
This is part of a piece of coursework I have for my Mathematical Foundations module, but the lecturer barely covered modular arithmetic and I'm pretty terrible at proofs, so any help would be greatly appreciated. Thanks.

December 11th 2012, 06:38 AM #2
MHF Contributor, Oct 2009

Re: Help with a modular arithmetic proof
Every integer x equals 0, 1, 2, 3 or 4 modulo 5, and if $x\equiv k\pmod{5}$, then $x^2\equiv k^2\pmod{5}$. So you only need to check that $k^2\not\equiv 3\pmod{5}$ for k = 0, 1, 2, 3, 4.
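Carrying out that check (a worked verification added here, not part of the original reply): $0^2 \equiv 0$, $1^2 \equiv 1$, $2^2 \equiv 4$, $3^2 = 9 \equiv 4$ and $4^2 = 16 \equiv 1 \pmod{5}$. So a square can only be congruent to 0, 1 or 4 modulo 5, never to 3, which completes the proof.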
{"url":"http://mathhelpforum.com/advanced-math-topics/209587-help-modular-arithmetic-proof.html","timestamp":"2014-04-18T21:17:04Z","content_type":null,"content_length":"34215","record_id":"<urn:uuid:cac5f48c-6df0-4771-8387-8331baae0a01>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

y = e^x / 4+ln2x
Find the derivative at x = 1, f'(1).
Last edited by john-1; April 30th 2010 at 02:17 PM.

Standard quotient rule:
$u = e^x \: \rightarrow \: u' = e^x$
$v = 1+ \ln(x) \:\rightarrow\: v' = \frac{1}{x}$
$y' = \frac{u'v-v'u}{v^2} = \frac{e^x(1+\ln x) - \frac{e^x}{x}}{(1+\ln x)^2}$
No idea what your condition means; I'm guessing y'(1) = 0. Hence $y'(1) = \frac{e - e}{1} = 0$.
Note that $y' = m$. To find a co-ordinate, sub x = 1 into y (not y'): $f(1) = e$, so we have the co-ordinate (1, e). You can now use the equation of a straight line, which is $y-y_1 = m(x-x_1)$.

Find the derivative at x = 1, f'(1): I ended up getting 0.
Last edited by john-1; April 30th 2010 at 02:15 PM.
{"url":"http://mathhelpforum.com/calculus/142337-derivative.html","timestamp":"2014-04-20T00:06:05Z","content_type":null,"content_length":"41039","record_id":"<urn:uuid:d402adc5-8e03-4bab-8ca3-744b7f7dd273>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00293-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiplying Tips: Part 3 If you pick any of the square numbers on the grid and then subtract the odd numbers starting with 1, you get the numbers on the diagonal going the other way. So if we start with 36, subtract 1 to get 35, then subtract 3 to get 32, then subtract 5 to get 27. (If you compare this diagram to the tables grid, you’ll see how the numbers all fit in.) You can fill in the rest of the grid in exactly the same way, just using the even numbers (2,4,6,8…). Look at the diagonal running underneath the squares which goes 2,6,12, 20: you can fill this in by starting with 2, then adding 4, then 6, then 8 and so on. And then if you pick any of these answers (e.g. 20), you can subtract 2, then 4, then 6 to get the diagonal going the other way (e.g. 20 – 2 = 18 then 18 – 4 = 14 and 14 – 6 =8). Using these sequences of odd and even numbers, you can go on to create a set of times tables as big as you like, and you don’t need to do any multiplying!
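In symbols (an addition for clarity, not from the original post): the squares diagonal is the running total of the odd numbers, 1 + 3 + 5 + ... + (2n - 1) = n², and the diagonal just below it is the running total of the even numbers, 2 + 4 + 6 + ... + 2n = n(n + 1), which gives 2, 6, 12, 20, and so on. Subtracting the odds (or evens) back off simply walks the same sums in reverse, which is why both diagonal directions work.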
{"url":"http://community.practutor.com/discussion-boards/170-multiplying-tips-part-3","timestamp":"2014-04-21T12:22:51Z","content_type":null,"content_length":"46951","record_id":"<urn:uuid:40e4e1d1-672f-43e5-aca3-57d4e8b54e93>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal approximations by piecewise smooth functions, and associated variational problems
Results 11 - 20 of 778

- JOURNAL OF SCIENTIFIC COMPUTING, 2002
Cited by 150 (23 self)
This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher and E. Fatemi, we decompose a given (possibly textured) image f into a sum of two functions u + v, where u ∈ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u + v, but we also show how the method can be used for texture discrimination and texture segmentation.

, 2006
Cited by 149 (5 self)
Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.

, 1997
Cited by 148 (18 self)
This article addresses two important themes in early visual computation: first, it presents a novel theory for learning the universal statistics of natural images (a prior model for typical cluttered scenes of the world) from a set of natural images; second, it proposes a general framework for designing reaction-diffusion equations for image processing. We start by studying the statistics of natural images including the scale invariant properties, then generic prior models were learned to duplicate the observed statistics, based on the minimax entropy theory studied in two previous papers. The resulting Gibbs distributions have potentials of the form $U(I; \Lambda; S) = \sum_{\alpha=1}^{K} \sum_{x,y} \lambda^{(\alpha)}\big((F^{(\alpha)} * I)(x, y)\big)$ with $S = \{F^{(\alpha)}\}$ being a set of filters and $\Lambda = \{\lambda^{(\alpha)}\}$ the potential functions. The learned Gibbs distributions confirm and improve the form of existing prior models such as line-process, but in contrast to all previous models, inverted potentials (i.e. $\lambda(x)$ decreasing as a function of $|x|$) were found to be necessary. We find that the partial differential equations given by gradient descent on $U(I; \Lambda; S)$ are essentially reaction-diffusion equations, where the usual energy terms produce anisotropic diffusion while the inverted energy terms produce reaction associated with pattern formation, enhancing preferred image features. We illustrate how these models can be used for texture pattern rendering, denoising, image enhancement and clutter removal by careful choice of both prior and data models of this type, incorporating the appropriate features. (Song Chun Zhu is now with the Computer Science Department, Stanford University, Stanford, CA 94305, and David Mumford is with the Division of Applied Mathematics, Brown University, Providence, RI 02912. This work started when the authors were at ...)

- J. Comput. Phys, 2001
Cited by 136 (12 self)
The level set method was devised by Osher and Sethian in [64] as a simple and versatile method for computing and analyzing the motion of an interface Γ in two or three dimensions. Γ bounds a (possibly multiply connected) region Ω. The goal is to compute and analyze the subsequent motion of Γ under a velocity field v. This velocity can depend on position, time, the geometry of the interface and the external physics. The interface is captured for later time as the zero level set of a smooth (at least Lipschitz continuous) function ϕ(x, t), i.e., Γ(t) = {x | ϕ(x, t) = 0}. ϕ is positive inside Ω, negative outside Ω, and is zero on Γ(t). Topological merging and breaking are well defined and easily performed. In this review article we discuss recent variants and extensions, including the motion of curves in three dimensions, the Dynamic Surface Extension method, fast methods for steady state problems, diffusion generated motion and the variational level set approach. We also give a user’s guide to the level set dictionary and technology, couple the method to a wide variety of problems involving external physics, such as compressible and incompressible (possibly reacting) flow, Stefan problems, kinetic crystal growth, epitaxial growth of thin films, ...

, 2010
Cited by 134 (14 self)
In this paper we study a first-order primal-dual algorithm for convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions, which is optimal for the complete class of non-smooth problems we are considering in this paper. We further show accelerations of the proposed algorithm to yield optimal rates on easier problems. In particular we show that we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(1/e^N), on problems where both are uniformly convex. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and image segmentation.

- Proceedings of the IEEE, 2002
Cited by 122 (18 self)
This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application and permeated the literature of a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts – in particular making ties to topics such as wavelets and multigrid methods. A third is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework, including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice. We discuss the construction of MR models on trees and show how questions that arise in this context make contact with wavelets, state space modeling of time series, system and parameter identification, and hidden ...

- IEEE Trans. Med. Imag, 2003
Cited by 119 (10 self)
Abstract—We propose a shape-based approach to curve evolution for the segmentation of medical images containing known object types. In particular, motivated by the work of Leventon, Grimson, and Faugeras [15], we derive a parametric model for an implicit representation of the segmenting curve by applying principal component analysis to a collection of signed distance representations of the training data. The parameters of this representation are then manipulated to minimize an objective function for segmentation. The resulting algorithm is able to handle multidimensional data, can deal with topological changes of the curve, is robust to noise and initial contour placements, and is computationally efficient. At the same time, it avoids the need for point correspondences during the training phase of the algorithm. We demonstrate this technique by applying it to two medical applications: two-dimensional segmentation of cardiac magnetic resonance imaging (MRI) and three-dimensional segmentation of prostate MRI. Index Terms—Active contours, binary image alignment, cardiac MRI segmentation, curve evolution, deformable model, distance transforms, eigenshapes, implicit shape representation, medical image segmentation, parametric shape model, principal component analysis, prostate segmentation, shape prior, statistical shape model.

- In Advances in Neural Information Processing, 2000
Cited by 111 (6 self)
Abstract: We present a new view of image segmentation by pairwise similarities. We interpret the similarities as edge flows in a Markov random walk and study the eigenvalues and eigenvectors of the walk's transition matrix. This interpretation shows that spectral methods for clustering and segmentation have a probabilistic foundation. In particular, we prove that the Normalized Cut method arises naturally from our framework. Finally, the framework provides a principled method for learning the similarity function as a combination of features.

- IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2010
Cited by 111 (8 self)
This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.

- in Proc. Int. Conf. Computer Vision, 1999
Cited by 108 (20 self)
In this paper, we describe a new region-based approach to active contours for segmenting images composed of two or three types of regions characterizable by a given statistic. The essential idea is to derive curve evolutions which separate two or more values of a predetermined set of statistics computed over geometrically determined subsets of the image. Both global and local image information is used to evolve the active contour. Image derivatives, however, are avoided, thereby giving rise to a further degree of noise robustness compared to most edge-based snake algorithms.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=44014&sort=cite&start=10","timestamp":"2014-04-16T16:37:31Z","content_type":null,"content_length":"42441","record_id":"<urn:uuid:640b7f31-881f-4e6a-9826-dcdb09c7c809>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
COMS E6998-2: Advanced Cryptography
Spring 2004

Due Dates
• Project Proposal: 3/8 14:00 EST
• Progress Report: 3/31 16:00 EST
• Project: 4/25 (if progress report not on time)
• Project: 5/2 (if progress report on time)

The proposal should include the area that you want to investigate, the papers that you plan to read to that end, and your goals for the project. At this stage your goals may be vague and broad, though if you have very specific goals in mind, please include them in the proposal. The scope (e.g. how many papers have been published in the area to date, how many papers you need to read and understand for your project, and in what depth) may vary considerably, though we will try to guide students towards comparable amounts of work to complete the project.

In the second stage you will have to specify your goals much more clearly, typically in the form of a specific research problem you wish to resolve. Outline your planned approach towards satisfying these goals based on the progress you have made by studying the area. Your final project will have to be in-depth research into a well-defined problem (suggesting the problem and making it well defined is part of your job, though you're allowed and encouraged to discuss your ideas with the instructor).

Please notify us of your general area of choice as soon as you can. Several of the suggestions below can support more than one group (working on different subareas), but if several groups consider projects that overlap too much, the first group to request it will get priority. For all the areas below, contact us for pointers to the important/latest papers in the area.

Project Suggestions (in no particular order):
• Zero Knowledge: Several advanced topics in zero-knowledge can form the basis for a project. For example, zero-knowledge proofs of knowledge, non-interactive zero-knowledge, non-black-box zero-knowledge, concurrent zero-knowledge, etc.
• Chosen-ciphertext security for public key encryption
• Reductions and completeness in secure computation
• A universal composability framework for secure multi-party computation
• The random oracle model
• Quantum cryptography
• Private information retrieval
• Threshold cryptography
• Secret sharing
• Deniable encryption
• Exposure resilient cryptography
• Privacy preserving data mining
• Anonymity and credential systems
• Algorithmic tamper-proof security
• Pairing-based cryptography
• Identity based encryption (this is currently a subset of the topic above)
• Steganography
• Digital signatures with special properties (e.g. proxy-signatures, aggregate signatures, blind signatures, chameleon signatures, signcryption, forward-secure signatures, group signatures, etc.)
• Incremental cryptography (encryption/hashing)
• Byzantine agreement
• Pseudo-free groups (see the recent paper of Rivest introducing this topic at http://theory.lcs.mit.edu/~rivest/publications.html)
• Formal methods in cryptography
• Verifiable random functions
• Implementation of huge random objects (see http://www.wisdom.weizmann.ac.il/~oded/p_toro.html)
• Circular encryption
• Zero-knowledge sets / databases
• Average case cryptography / lattices in cryptography

Topics that were already chosen (but may possibly be chosen by another group in consultation with instructor)
• Micropayments
• Game theory and cryptography
• Secure computation of approximations
{"url":"http://www1.cs.columbia.edu/~tal/6998/projects.html","timestamp":"2014-04-20T05:43:46Z","content_type":null,"content_length":"4968","record_id":"<urn:uuid:ebbf621b-eef3-41d3-a578-d2d585cc691c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Telescope Optics Engineering
by Dr. Frank Melsheimer, DFM Engineering, Inc., Longmont, Colorado, USA
DFM Engineering, Inc., Delaware Ave. Unit D, Longmont, CO 80501, 303-678-8143

1. Telescope Optics

We recently responded to a Request For Proposal (RFP) for a 24-inch (610-mm) aperture telescope. The optical specifications called for an optical quality encircled energy (EE80) value of 80% within 0.6 to 0.8 arc seconds over the entire Field Of View (FOV) of 90-mm (1.2-degrees). From the excellent book "Astronomical Optics" (page 185) by Daniel J. Schroeder, we find the definition of "Encircled Energy" as "The fraction of the total energy E enclosed within a circle of radius r centered on the Point Spread Function peak". I want to emphasize the "radius r" as we will see this come up again later in another expression.

The first problem with this specification is that no wavelength has been specified. Perfect optics will produce an Airy disk whose "radius r" is a function of the wavelength and the aperture of the optic. The aperture in this case is 24-inches (610-mm). A typical Airy disk is shown below. This Airy disk was obtained in the DFM Engineering optical shop.

Actual Airy disk of an unobstructed aperture with an intensity scan through the center in blue.

Perfect optics produce an Airy disk composed of a central spot with alternating dark and bright rings due to diffraction. The Airy disk above only shows the central spot and the first bright ring; the next ring is too faint to be recorded. The first bright ring (seen above) is 63 times fainter than the peak intensity. The second bright ring is 360 times fainter than the peak intensity, so it cannot be detected with the 8-bit (256 levels of intensity) CCD camera and frame grabber used to obtain the above image.

The radius (not the diameter) of the first dark ring is given by the familiar equation:

α = 1.22 λ / D

where:
α is the radius of the first dark ring in radians
λ is the operating wavelength in the same units as D
D is the aperture of the telescope optic in the same units as λ

Many people forget that α is the angular radius and they think of this as the diameter. Also, the value 1.22 is for an unobstructed aperture. The central obscuration of an optic causes the energy in the central spot to be reduced and redistributed out among the rings. Schroeder presents a table on page 183 showing the effect of a central obscuration. The telescope requested in the RFP has a central obscuration ratio of about 50% (central obscuration / aperture). The value 1.22 in the above equation becomes about 1. This means that the central spot diameter actually gets smaller. However, the energy contained in the central spot is greatly reduced. The table on page 186 of Schroeder shows the encircled energy within the Airy dark rings. For a central obscuration ratio of about 0.5, the energy is reduced to about 48% (compared to 84% for the unobstructed case). Schroeder's table only goes up to a central obscuration of 0.4, so the value for the constant was extrapolated for a central obscuration of 0.5. With a central obscuration ratio of about 0.5, one must encircle the second dark ring to get >80% of the energy. The radius of the second ring for this obscuration is about:

α = 2.5 λ / D

The encircled energy including the second bright ring contains about 85% of the energy.
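The same relation, written as a minimal Haskell sketch (the function name and argument order are ours for illustration, not from the article; 206,265 converts radians to arc seconds). It reproduces the two wavelength cases worked out below:

```haskell
-- Encircled-energy spot radius in arc seconds.
-- k is the diffraction constant: about 1.22 for an unobstructed aperture
-- (first dark ring), about 2.5 for the ~50% central obscuration discussed
-- above (second dark ring, ~85% encircled energy).
spotRadiusArcsec :: Double -> Double -> Double -> Double
spotRadiusArcsec k wavelengthMM apertureMM =
  k * wavelengthMM / apertureMM * 206265

-- spotRadiusArcsec 2.5 0.00058 610  ~ 0.49 arc second radius (~1 arc second diameter)
-- spotRadiusArcsec 2.5 0.0025  610  ~ 2.1  arc second radius (~4.2 arc second diameter)
```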
At 0.58 μm (580 nm) wavelength, the resulting encircled spot radius is:
α = 2.5 × 0.00058 mm / 610 mm × 206,265 (there are 206,265 arc seconds in a radian)
α = 0.49 arc seconds radius, or 1 arc second diameter

At 2.5 μm wavelength, the resulting encircled spot radius is:
α = 2.5 × 0.0025 mm / 610 mm × 206,265
α = 2.1 arc seconds radius, or 4.2 arc seconds diameter

From the above, the customer-specified encircled energy (EE80) value of 80% within a 0.6 to 0.8 arc second radius cannot be achieved for wavelengths longer than about 1 μm. However, the customer specified the telescope operating wavelength range as 350 nm to 2.5 μm. Our many decades of experience have shown us that most astronomers forget that the equation α = 1.22 λ / D specifies the radius and not the diameter, and is only for an unobstructed aperture. They are then disappointed when the images are more than twice as large as they think they should be.

The off-axis images are degraded due to the optical aberrations: coma, field curvature, and astigmatism. The next section discusses how the off-axis image quality is improved using a field corrector.

2. Field Corrector Discussion

A Ritchey-Chrétien optical system is free from coma, but has astigmatism and field curvature. The field curvature is dependent only upon the difference in the radii of the primary and secondary mirrors and is independent of the prescription (classical Cassegrain, Ritchey-Chrétien, or Dall-Kirkham). The astigmatism and the field curvature can be corrected by a 2-element or 3-element field corrector depending upon how large the field is. The preliminary field corrector design for the 24-inch F/7 telescope with a 1.2-degree FOV is shown below.

Chromatic problems are minimized if the field corrector has nearly zero power (the focal ratio is unchanged by the field corrector). The chromatic aberration is shown within the spot diagrams below for 6 colors ranging in wavelength from 350 nm to 1.5 μm. The boxes are 1 arc second on a side.

Geometrical Spot Diagrams: 6 colors and 4 field angles (does not include diffraction)

The wavelength range (the band pass) is from 350 nm to 1.5 μm. The preliminary design uses fused silica (quartz) for the 3 elements. This material is expensive, but is needed to include the near IR. Field correctors are expensive to build because of the number of optical surfaces and the number of tools required for grinding and polishing the surfaces. The above field corrector has 6 surfaces. To fabricate the individual elements requires, as a minimum, 1 tool for each surface (6 tools). The 3 convex surfaces will each require a concave test plate, which requires 3 more tools. A test plate is required because only concave (or flat, or small diameter convex) surfaces can be tested directly with an interferometer. A test plate has a concave surface on one side and a flat surface on the other side. The concave surface must have the same radius as the radius of the surface to be tested. The test plate is set on the surface to be tested, and interference fringes are developed between the two surfaces. Typically the test plates are made 6-inches in diameter to test optics of this size. There are a total of 12 surfaces that need to be polished (6 for the actual elements, 3 test surfaces, and 3 flat surfaces). Only 9 tools are required because 3 surfaces are flat and flat tools are a standard tool in the optical shop. We find that each field corrector is unique to the specific telescope requirements, so new tools and test plates need to be made every time.
In production, there would be two tools for every surface, one for grinding and one for polishing (18 tools plus the flat tools).
{"url":"http://www.dfmengineering.com/news_telescope_optics.html","timestamp":"2014-04-20T21:20:22Z","content_type":null,"content_length":"21452","record_id":"<urn:uuid:9e7755e7-9cd0-4bb8-8bac-769eddec2507>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
First NDP benchmarks (2007)

Sparse matrix vector multiplication

This benchmark is explained in much detail in Data Parallel Haskell: a status report. Runtimes compared to sequential C code on Intel Xeon (x86) and Sun SunFire9600 (Sparc) are in time-colour.png. The parallel Haskell code is more efficient from 2 PEs for the SunFire and from 4 PEs for the Xeon processors. We blame the low sequential performance for the Xeon on the lack of effort that has been put into generating good straight-line code (both in the NCG and when compiling via C); this includes inadequate register allocation and lack of low-level optimisations. The speedup for the Xeon box and the SunFire are in speedup-colour.png and the speedup for our 8x dualcore Opteron NUMA box is in serenity-all-speedup-colour.png. The speedup on the NUMA machine is limited by the memory bandwidth for smvm. When we only use one core per CPU, the benchmark scales much better. Moreover, the memory traffic/compute ratio is slightly more favourable when processing arrays of Floats than when processing arrays of Doubles.

Connected components in undirected graphs

This is based on http://www.cs.cmu.edu/~scandal/nesl/algorithms.html#concomp. Currently, two of the tree algorithms described there are implemented: Awerbuch-Shiloach and the hybrid algorithm. The random mate algorithm needs a data-parallel random number generator and is left for later. The algorithms are interesting because they actually don't do a lot of computations; they mostly filter and copy edges. Thus, it is perhaps not unreasonable to expect that if they scale well, then so will more computation-intensive ones. Also, they should make any inefficiencies introduced by missed fusion opportunities quite obvious.

Both algorithms take the number of nodes (n :: Int) and an array of edges (es :: Int :*: Int) and yield an array of Ints of length n where each node is assigned a number between 0 and n-1. Nodes which are assigned the same number are connected. (A minimal sequential sketch of this interface is given at the end of this page.) The algorithms are described and compared in http://citeseer.ifi.unizh.ch/greiner94comparison.html. At the moment, I have only benchmarked them for a random graph with 1000000 nodes and 40000 edges; eventually, I'll add benchmarks for other kinds of graphs described in the paper.

The benchmarks have been run on a dual Intel Xeon 2.8 GHz with two cores per processor, which effectively gives us 4 processors overall. The parallel versions are currently very much slower than the sequential ones, particularly so for Awerbuch-Shiloach. This is because fusion doesn't work for the parallel algorithms at the moment. No benchmarks on 4 processors yet, as the 4th PE is currently busy.

The sequential code is in http://darcs.haskell.org/packages/ndp/Data/Array/Parallel/test/nesl/concomp/AwShU.hs and the parallel in http://darcs.haskell.org/packages/ndp/Data/Array/Parallel/test/

Version      Threads   Time (ms)   Speedup
sequential             1600
parallel     1         29800
parallel     2         16800       1.8
parallel     3         12800       2.3
parallel     4         ???         ???

The sequential code is in http://darcs.haskell.org/packages/ndp/Data/Array/Parallel/test/nesl/concomp/HybU.hs and the parallel in http://darcs.haskell.org/packages/ndp/Data/Array/Parallel/test/nesl

Version      Threads   Time (ms)   Speedup
sequential             1850
parallel     1         7450
parallel     2         4600        1.6
parallel     3         3800        2.0
parallel     4         ???         ???

I haven't completely parallelised this one yet (it's only a matter of implementing some parallel combinators).

Attachments (3)
• speedup-colour.png (6.0 KB) - added by chak 7 years ago.
“Speedup of smvm on 2x dualcore Xeon and SunFire9600”
• time-colour.png (6.9 KB) - added by chak 7 years ago.
“Runtime of smvm on 2x dualcore Xeon and SunFire9600, including sequential C reference implementation”
• serenity-all-speedup-colour.png (7.7 KB) - added by chak 7 years ago.
“Speedup of smvm on 8x dualcore Opteron for Float and Double, including the use of one core/CPU and two cores/CPU”
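For reference, here is a minimal sequential sketch of the connected-components interface described above. It uses plain lists and Data.Map instead of the package's unlifted arrays, and simple label propagation to a fixpoint rather than the tree hooking and pointer jumping of Awerbuch-Shiloach, so it is only meant for checking outputs on small graphs:

```haskell
import qualified Data.Map.Strict as M

-- concomp n es: assign each of the n nodes a label in 0..n-1 such that
-- two nodes get the same label iff they are connected.
concomp :: Int -> [(Int, Int)] -> [Int]
concomp n es = go (M.fromList [(i, i) | i <- [0 .. n - 1]])
  where
    go lab
      | lab' == lab = M.elems lab        -- fixpoint reached
      | otherwise   = go lab'
      where
        lab' = foldl relax lab es
        -- pull both endpoints of an edge down to the smaller label
        relax m (u, v) =
          let l = min (m M.! u) (m M.! v)
          in  M.insert u l (M.insert v l m)
```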
{"url":"https://ghc.haskell.org/trac/ghc/wiki/DataParallel/Benchmarks?version=5","timestamp":"2014-04-23T18:52:03Z","content_type":null,"content_length":"20797","record_id":"<urn:uuid:490fb789-63f6-42cd-bd12-c87a92c7b224>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Point L on the grid represents the position of a camera used by an ecologist to study the behavior of lions. The ecologist set up another camera 3 units directly to the right of L. What are the coordinates of the position of the second camera?
(-4, 1)
(1, -4)
(-4, 3)
(1, -1)

"3 units directly to the right" can be found by adding 3 to the x-coordinate of the initial point, so you must first be able to identify the initial point. That is done by ascertaining the x-coordinate and the y-coordinate. The x-coordinate is found by seeing how many units to the right or left of the y-axis (vertical line where x=0) the point is. The y-coordinate is found by seeing how many units above or below the x-axis (horizontal line where y=0) the point is.

Thanks :D

So which one is it, I'm lost?
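For illustration only (the grid image is not reproduced here, so the starting point below is made up): if L were at (-2, -1), then adding 3 to the x-coordinate gives (-2 + 3, -1) = (1, -1); the y-coordinate never changes when moving directly right. The actual answer depends on reading L's coordinates off the grid.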
{"url":"http://openstudy.com/updates/50b41299e4b0a5a78e159939","timestamp":"2014-04-16T22:41:59Z","content_type":null,"content_length":"38655","record_id":"<urn:uuid:11775bb3-437c-4cae-96cc-66785b77985a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Pharmacokinetic and -dynamic modelling of G-CSF derivatives in humans.

MedLine PMID: 22846180; Owner: NLM; Status: MEDLINE

BACKGROUND: The human granulocyte colony-stimulating factor (G-CSF) is routinely applied to support recovery of granulopoiesis during the course of cytotoxic chemotherapies. However, optimal use of the drug is largely unknown. We showed in the past that a biomathematical compartment model of human granulopoiesis can be used to make clinically relevant predictions regarding new, yet untested chemotherapy regimens. In the present paper, we aim to extend this model by a detailed pharmacokinetic and -dynamic modelling of two commonly used G-CSF derivatives, Filgrastim and Pegfilgrastim.

RESULTS: Model equations are based on our physiological understanding of the drugs, namely delayed absorption of G-CSF when applied to the subcutaneous tissue, dose-dependent bioavailability, unspecific first-order elimination, specific elimination in dependence on granulocyte counts, and reversible protein binding. Pharmacokinetic differences between Filgrastim and Pegfilgrastim were modelled as different parameter sets. Our former cell-kinetic model of granulopoiesis was essentially preserved, except for a few additional assumptions and simplifications. We assumed a delayed action of G-CSF on the bone marrow, a delayed action of chemotherapy, and differences between Filgrastim and Pegfilgrastim with respect to stimulation potency of the bone marrow. Additionally, we incorporated a model of combined action of Pegfilgrastim and Filgrastim or endogenous G-CSF, which interact via concurrent receptor binding. Unknown pharmacokinetic or cell-kinetic parameters were determined by fitting the predictions of the model to available datasets of G-CSF applications, chemotherapy applications, or combinations of them. Data were either extracted from the literature or were received from cooperating clinical study groups. Model predictions fitted well both to the datasets used for parameter estimation and to the validation scenarios. A unique set of parameters was identified which is valid for all scenarios considered. Differences in pharmacokinetic parameter estimates between Filgrastim and Pegfilgrastim were biologically plausible throughout.

CONCLUSION: We conclude that we established a comprehensive biomathematical model to explain the dynamics of granulopoiesis under chemotherapy and applications of two different G-CSF derivatives. We aim to apply the model to a large variety of chemotherapy regimens in the future in order to optimize corresponding G-CSF schedules or to individualize G-CSF treatment according to the granulotoxic risk of a patient.

Authors: Markus Scholz; Sibylle Schirm; Marcus Wetzler; Christoph Engel; Markus Loeffler
Publication Type: Journal Article; Research Support, Non-U.S. Gov't
Date: 2012-07-30
Journal Title: Theoretical biology & medical modelling
Volume: 9; ISSN: 1742-4682; ISO Abbreviation: Theor Biol Med Model
Publication Date: 2012
Created Date: 2012-11-28; Completed Date: 2013-06-21; Revised Date: 2013-07-12
Medline Nlm Unique ID: 101224383; Medline TA: Theor Biol Med Model; Country: England
Languages: eng; Pagination: 32; Citation Subset: IM
Affiliation: Institute for Medical Informatics, Statistics and Epidemiology, University of Leipzig, Haertelstrasse 16-18, 04107 Leipzig, Germany.
Author affiliations:
1 Institute for Medical Informatics, Statistics and Epidemiology, University of Leipzig, Haertelstrasse 16-18, 04107 Leipzig, Germany (Markus Scholz, markus.scholz@imise.uni-leipzig.de; Sibylle Schirm, sschirm@imise.uni-leipzig.de; Marcus Wetzler, marcus.wetzler@imise.uni-leipzig.de; Christoph Engel, christoph.engel@imise.uni-leipzig.de; Markus Loeffler, markus.loeffler@imise.uni-leipzig.de)
2 LIFE - Leipzig Research Center for Civilization Diseases, University of Leipzig, Philipp-Rosenthal-Strasse 27, 04103 Leipzig, Germany

Copyright ©2012 Scholz et al.; licensee BioMed Central Ltd. (open access). Received: 17 February 2012. Accepted: 12 June 2012. Published: 30 July 2012.

Introduction and background

The human granulocyte colony-stimulating factor (G-CSF) is routinely applied in various cancer chemotherapy regimens in order to ameliorate or prevent neutropenia caused by the unspecific toxicity of the drugs used [1-4]. G-CSF proved to be highly potent in stimulating granulopoiesis via several modes of action, such as mitotic activation of granulopoietic progenitors and precursors, accelerated maturation of bone marrow cell stages, and increased release of mature bone marrow cells [5-10]. In the case of conventional (non-myeloablative) chemotherapies, the haematopoietic system usually recovers without further medication. But G-CSF can significantly speed up this process, allowing dose- and time-intensifications of multi-cycle chemotherapies [11,12]. While platelets and red blood cells can show cumulative toxicity during the course of intensified regimens, appropriate G-CSF prophylaxis often results in a complete recovery of circulating granulocytes within one therapy cycle, i.e. within two or three weeks [4,13].

Several pharmaceutical derivatives of G-CSF are now available. The first generation of G-CSF pharmaceuticals were recombinant derivatives such as Filgrastim (non-glycosylated) or Lenograstim (glycosylated). Both derivatives are virtually identical to endogenously produced G-CSF [14-16], but Filgrastim is more frequently used in clinical trials. Filgrastim is eliminated by both renal elimination and specific degradation mediated by G-CSF receptors or neutrophil elastase [17-22]. This results in a short half-life in vivo, requiring multiple injections during one cycle of chemotherapy.
As a next-generation G-CSF derivative, Pegfilgrastim (pegylated Filgrastim) was developed in order to improve the pharmaceutical properties of Filgrastim. Indeed, Pegfilgrastim shows a remarkably prolonged half-life in vivo, mainly due to reduced renal clearance [23-25]. Therefore, only one injection (with fixed dose) is required per chemotherapy cycle. On the other hand, pegylation of proteins can also reduce the receptor binding affinity, and with it the efficacy of the drug [26-29]. But for Pegfilgrastim, this effect appears to be less important than the gain in half-life, since it is generally believed that a single injection of Pegfilgrastim is at least as effective as multiple injections of Filgrastim in treating neutropenia [2,30-33]. There are ongoing efforts to further improve the pharmacokinetic properties of G-CSF derivatives by additional pegylations (e.g. Maxy-G34, [29]). Although the application of pegylated G-CSF is much more convenient for both patients and clinicians, we believe that Filgrastim will not be replaced completely, since it can be applied more individually, e.g. in dependence on the neutropenic risk of a patient or in cases when granulopoietic recovery is insufficient although pegylated G-CSF was applied [34]. Additionally, for the purpose of stem cell mobilization, Filgrastim is not inferior to Pegfilgrastim but has less severe side effects [35].

In view of the highly differing pharmacokinetic properties of the available G-CSF derivatives, we constructed pharmacokinetic models of the G-CSF derivatives Filgrastim, Pegfilgrastim and the novel Maxy-G34 in mice and rats [29,36]. Our aim was to identify basic pharmacokinetic model mechanisms, especially with respect to the degradation of G-CSF in vivo, and to compare the resulting pharmacokinetic parameters between the G-CSF derivatives. We now aim to translate these model insights to humans.

The effectiveness of G-CSF treatment depends on many variable therapy parameters, such as the applied chemotherapy, individual factors, the G-CSF derivative used, and especially its dosing and timing schedule [4,34,37,38]. Chemotherapy-induced neutropenia and G-CSF-induced granulocytosis via different modes of action, in combination with a strong specific elimination of G-CSF mediated by circulating granulocytes, result in complex dynamics of both the G-CSF serum concentration and circulating granulocytes. In consequence, the optimization of G-CSF treatment is a non-trivial task and cannot be performed solely on the basis of clinical trials. We showed in the past that biomathematical cell-kinetic models of granulopoiesis under chemotherapy and G-CSF support are useful to optimize chemotherapy regimens regarding granulotoxicity [39-41]. Our former model already included a preliminary model of Filgrastim application. In the present paper, we update our model with respect to an improved pharmacokinetic and -dynamic modelling of Filgrastim and Pegfilgrastim based on our models in mice. Additionally, our cell-kinetic model has been improved by a more elaborate model of chemotherapy action. The resulting model is now able to explain the time courses of granulocytes and G-CSF serum concentrations for virtually all datasets published in the literature. We also discuss how the model can be used to optimize G-CSF scheduling of chemotherapies.
Structure of the human model of granulopoiesis under chemotherapy with G-CSF support

Our cell-kinetic model of granulopoiesis is an ordinary differential equation system modelling the time-dependent content of, and the fluxes between, the following cell compartments: S (pluripotent stem cells), CG (colony-forming units of granulocytes and macrophages), PGB (proliferating granulopoietic blasts), MGB (maturing granulopoietic blasts, subdivided into metamyelocytes (G4), banded granulocytes (G5) and segmented granulocytes (G6)) and GRA (circulating granulocytes). Here, the efflux of one compartment equals the influx of the subsequent compartment. The system is highly regulated via growth-factor-mediated feedback loops. The most important of these is G-CSF, which regulates the compartments CG, PGB and MGB, but not S [5-10,42,43]. Modes of action comprise improvement of proliferation, acceleration of maturation and improved release of mature blood cells from the bone marrow into the blood. The latter is also denoted as postmitotic amplification in the following. Production and consumption of G-CSF are regulated by mature cells. We also modelled a subcutaneous compartment into which G-CSF is usually injected. Chemotherapy induces an instantaneous depletion of bone marrow cell stages which is specific for each cell stage and depends on the applied drugs and drug doses. The cell-kinetic model is essentially the same as presented and discussed in [40], except for a few changes which we discuss later. The basic model structure is shown in Figure 1. A complete set of model equations and parameters is presented in the Additional file 1.

For unperturbed granulopoiesis, the model is autonomous and has a single fixed point (steady state). This fixed point appears to be stable for the parameter set which we propose later, i.e. transient perturbations result in damped oscillations until the fixed point is re-established. Permanent perturbations such as constant G-CSF stimulation or chemotherapy damage result in new fixed points of increased or depressed granulopoiesis, respectively. However, stable oscillations of the system can occur for alternative parameter sets, especially if parameters of the stem cell regulation are changed. In general, we assume that the system is in steady state at the beginning of any treatment, i.e. initial values equal normal values.

Basic model mechanisms

In this section, we briefly describe two important regulation mechanisms of our granulopoiesis model which are needed in the following. A more detailed discussion of all regulation mechanisms of the model is given in [40]. Most of our cell-kinetic parameters, such as amplifications or transition times, are regulated between a minimum and a maximum value by a so-called Z-function, which is a function of another quantity such as circulating G-CSF. We use the following class of sigmoidal Z-functions:

  Z_Y(X) = Y^min + (Y^max - Y^min) * X^{b_Y} / (X^{b_Y} + c_Y),   c_Y = (Y^max - Y^nor) / (Y^nor - Y^min),   (1)

which satisfies Z_Y(1) = Y^nor. Here, X is the regulator of the quantity Y (expressed relative to its normal value), and Y^min, Y^nor, Y^max and b_Y are the parameters of the Z-function for the regulation of Y. The parameter b_Y defines the steepness of the function and is called the sensitivity parameter in the following.
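To make the shape of such a regulation function concrete, the following minimal Python sketch implements a sigmoidal Z-function of the form (1). The parameter values are purely illustrative and are not the fitted values of the paper.

```python
import numpy as np

def z_function(x, y_min, y_nor, y_max, b):
    """Sigmoidal Z-function, cf. equation (1): bounded by y_min and y_max,
    equal to y_nor at the normal relative regulator value x = 1,
    with steepness controlled by the sensitivity parameter b."""
    c = (y_max - y_nor) / (y_nor - y_min)  # enforces Z(1) = y_nor
    return y_min + (y_max - y_min) * x**b / (x**b + c)

# Illustrative parameters: an amplification regulated between 2 and 32
# with normal value 16 and sensitivity 1.5.
x = np.linspace(0.1, 10.0, 50)  # relative G-CSF concentration
print(z_function(1.0, 2.0, 16.0, 32.0, 1.5))            # -> 16.0 at normal level
print(z_function(x, 2.0, 16.0, 32.0, 1.5).round(1))     # saturating response curve
```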
Modelling of delays

The maturation of cells and the transition between maturing compartments is neither a random first-order transition without dependence on cell age (which is equivalent to an exponential distribution of transit times) nor a transition with a fixed time delay. In [40] we showed that a cascade of first-order transitions results in a Gamma distribution of the transition times. More precisely, if we divide a compartment with transit time T into N subcompartments connected by first-order transitions with time T/N, the resulting distribution is a Gamma distribution with expectation T and variance T^2/N. Hence, the number of subcompartments corresponds to a variance estimate of the transition time. The method of dividing compartments into subcompartments in order to introduce an element of time delay is later adopted for the modelling of delayed G-CSF and chemotherapy actions.

Refinement of the model

Pharmacokinetic model assumptions

Our pharmacokinetic model of G-CSF is essentially based on the models developed for mice and rats [29,36]. The underlying model assumptions are as follows:

1. The pharmacokinetic model contains three compartments in which G-CSF is present: the subcutaneous compartment C_sc, into which G-CSF is usually injected, the central compartment C_cent, in which G-CSF is haematologically active, and a peripheral compartment C_per, representing reversible binding of G-CSF (e.g. protein binding [44]).

2. Subcutaneously injected G-CSF results in a delayed influx of G-CSF into the central compartment, caused e.g. by lymphatic absorption [45]. The delay is modelled by dividing the subcutaneous compartment into two subcompartments.

3. Transitions between the central and the peripheral compartment are reversible and are modelled by first-order kinetics in both directions [46].

4. Endogenous G-CSF is produced by endothelial cells [47]. The production is regulated by the demand of mature granulocytes. We implemented a phenomenological rather than a mechanistic model of this principle: the production of endogenous G-CSF is modelled as a function of the content of the final bone marrow compartment and of circulating granulocytes. This is in complete analogy to former versions of our model [40].

5. Since the bioavailability of G-CSF derivatives is dose-dependent [29], we assume that a part of the applied G-CSF is removed from the subcutaneous compartment without entering the central compartment. This is modelled by a Michaelis-Menten kinetic within the first subcompartment of the subcutaneous tissue.

6. G-CSF is irreversibly removed from the central compartment by two independent processes: an unspecific renal elimination, modelled by a first-order kinetic [24,46], and a specific degradation mediated by the number of circulating granulocytes. The latter is modelled by a Michaelis-Menten kinetic assumed to be proportional to the number of circulating granulocytes [36,48-51]. Two key mechanisms are discussed for the specific degradation: cleavage by neutrophil elastase [20,21,52,53] and G-CSF receptor binding and internalization [19,22,54-56]. Since neutrophil elastase is mainly produced by granulocytes [20], proportionality of degradation with granulocyte count can be assumed for the first mechanism. For the second mechanism, the proportionality assumption is less clear, since the majority of G-CSF receptors reside in the bone marrow, whose dynamics differ somewhat from those of mature granulocytes.

7. Differences between G-CSF derivatives are modelled by different model parameters rather than by differences in model structure.
In view of the high similarity of Filgrastim and endogenous G-CSF [14-16], we assume that the pharmacokinetic and -dynamic parameters of Filgrastim and endogenous G-CSF are the same. In contrast, we assume differences between Filgrastim and Pegfilgrastim for some of the parameters. More precisely, pharmacokinetic differences were assumed with respect to absorption, distribution and degradation of the G-CSF pharmaceuticals, in order to model observed differences in G-CSF serum dynamics after subcutaneous applications [23-25]. Pharmacodynamic differences are modelled by different parameterizations of the G-CSF-mediated regulatory mechanisms (Z-functions). The latter is motivated by experimental results suggesting a reduced receptor binding affinity of pegylated G-CSF [26-29]. Based on these assumptions, we formulate the pharmacokinetic model equations in the next section. A schematic structure of the model is shown in Figure 2.

Pharmacokinetic model equations

Endogenous production

According to [40], the relative G-CSF production P_endo is a Z-function of the relative content of segmented granulocytes in the bone marrow and of granulocytes in circulation:

  P_endo = Z(omega_G6 * C^rel_G6 + omega_GRA * C^rel_GRA),   (2)

where omega_G6 and omega_GRA are weighting parameters. It holds that P_endo^nor = 1, whereas all other parameters of the Z-function are free parameters.

Exogenous G-CSF application

Exogenous G-CSF applications are modelled by an injection function. Let Theta be the Heaviside function; then the injection function reads

  P_exo(t) = sum_{i=1}^{L} d_G-CSF(t_i)/t_inf * (Theta(t - t_i) - Theta(t - t_i - t_inf)),   (3)

where t_i >= 0 (i = 1,...,L) are the time points of G-CSF injections and d_G-CSF(t_i) are the corresponding doses (in micrograms). The parameter t_inf is the duration of the injection, which we assumed to be constant (t_inf = 5 s). The injection function is specific for each G-CSF derivative and site of injection (subcutaneous, intravenous). Hence, for concurrent Filgrastim and Pegfilgrastim injections one needs at most four injection functions: P_exo_sc_fil and P_exo_iv_fil for subcutaneous and intravenous Filgrastim injections, and P_exo_sc_peg and P_exo_iv_peg for subcutaneous and intravenous Pegfilgrastim injections, respectively.

Subcutaneous compartment

The subcutaneous compartment is divided into two subcompartments sc_1 and sc_2, where the efflux of the first subcompartment is the influx of the second. G-CSF is applied to the first subcompartment (second term of (4)). In the first subcompartment there is a dose-dependent loss of G-CSF, modelled by a Michaelis-Menten kinetic (third term of (4)). For Filgrastim injections it holds that

  d/dt C_sc1 = -k_sc^F * C_sc1 + P_exo_sc_fil(t) - v_max^F * C_sc1 / (k_m^F + C_sc1),   (4)
  d/dt C_sc2 = k_sc^F * C_sc1 - k_sc^F * C_sc2,   (5)

with the initial values C_sc1(0) = C_sc2(0) = 0. For Pegfilgrastim injections, the injection function in (4) is substituted by P_exo_sc_peg. Likewise, the Filgrastim parameters k_sc^F, v_max^F and k_m^F are substituted by the corresponding Pegfilgrastim parameters.
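As a rough illustration of assumptions 2 and 5, the following Python sketch integrates the two subcutaneous subcompartments of equations (4) and (5) for a single bolus injection. All parameter values are invented for illustration and are not the fitted values reported in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameters of the subcutaneous compartments.
K_SC = 3.0     # 1/day: first-order transition rate between subcompartments
V_MAX = 40.0   # ug/day: maximum rate of the dose-dependent loss
K_M = 10.0     # ug: Michaelis constant of the loss term
DOSE = 300.0   # ug, given as a short bolus at t = 0
T_INF = 0.01   # day: (artificially long) injection duration, for stiffness

def injection(t):
    """Rectangular injection pulse, cf. equation (3)."""
    return DOSE / T_INF if 0.0 <= t <= T_INF else 0.0

def rhs(t, y):
    sc1, sc2 = y
    loss = V_MAX * sc1 / (K_M + sc1)          # Michaelis-Menten loss, eq. (4)
    d_sc1 = -K_SC * sc1 + injection(t) - loss
    d_sc2 = K_SC * sc1 - K_SC * sc2           # eq. (5); efflux enters the blood
    return [d_sc1, d_sc2]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=0.005)
absorbed_flux = K_SC * sol.y[1]               # delayed influx into central comp.
print(f"peak influx to central compartment: {absorbed_flux.max():.1f} ug/day")
```

The Michaelis-Menten loss saturates for large subcutaneous amounts, so the fraction lost shrinks with increasing dose, which is one way to produce the dose-dependent bioavailability postulated in assumption 5.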
Central compartment

For Filgrastim injections it holds that

  d/dt C_cent = P_ref * P_endo(t) + P_exo_iv_fil(t) + k_sc^F * C_sc2 - k_el^F * C_cent - k_cp^F * C_cent + k_pc^F * C_per - C^rel_GRA * v_max^{F,spec} * C_cent / (k_m^{F,spec} + C_cent).   (6)

The balance equation (6) contains terms in the following order: the endogenous production, a potential intravenous injection, the influx from the subcutaneous compartment, the unspecific elimination, the efflux to the peripheral compartment, the influx from the peripheral compartment, and the specific elimination (a Michaelis-Menten kinetic), which is proportional to the relative granulocyte concentration C^rel_GRA. The corresponding equation for Pegfilgrastim is the same except for the endogenous production, which is zero, the intravenous injection function, which is substituted by P_exo_iv_peg, the parameters, and the initial value, which is again zero.

The relative endogenous production function of G-CSF is multiplied by a constant P_ref in order to adjust the normal value of C_cent to a reference amount of G-CSF, which is the product of the distribution volume V_D^F and the reference G-CSF serum concentration C^ref_cent, determined later on the basis of measurements available from the literature. Assuming that C_cent(0) = C^nor_cent = V_D^F * C^ref_cent, the parameter P_ref can be calculated by exploiting the steady-state conditions of equation (6), i.e. P_endo(0) = 1, P_exo_iv_fil(0) = 0, C^rel_GRA(0) = 1 and -k_cp^F * C_cent(0) + k_pc^F * C_per(0) = 0:

  P_ref = k_el^F * C^nor_cent + v_max^{F,spec} * C^nor_cent / (k_m^{F,spec} + C^nor_cent).   (7)

Peripheral compartment

For both G-CSF derivatives we have

  d/dt C_per = k_cp * C_cent - k_pc * C_per,   (8)
  C_per(0) = (k_cp / k_pc) * C_cent(0),   (9)

where the parameters k_cp, k_pc and V_D are specific for Filgrastim and Pegfilgrastim, respectively.
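The granulocyte-dependent elimination in (6) is the mechanism that couples the pharmacokinetics back to the cell-kinetic model. The following minimal Python sketch integrates a stripped-down central compartment with endogenous production, unspecific elimination and the Michaelis-Menten elimination scaled by a (here prescribed, constant) relative granulocyte count. All values are illustrative, not the fitted parameters.

```python
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameters of a stripped-down central compartment.
K_EL = 2.0      # 1/day: unspecific (renal) first-order elimination
V_MAX = 50.0    # ug/day: maximum specific (granulocyte-mediated) elimination
K_M = 5.0       # ug: Michaelis constant of the specific elimination
C_NOR = 1.0     # ug: normal central G-CSF amount

# Steady-state production, cf. equation (7) with C_GRA_rel = 1:
P_REF = K_EL * C_NOR + V_MAX * C_NOR / (K_M + C_NOR)

def rhs(t, c, gra_rel):
    """Central compartment with production and two elimination routes, cf. (6)."""
    specific = gra_rel * V_MAX * c[0] / (K_M + c[0])
    return [P_REF - K_EL * c[0] - specific]

# During neutropenia (low gra_rel) the specific clearance drops, so
# endogenous G-CSF accumulates above its normal level:
for gra_rel in (1.0, 0.1):
    sol = solve_ivp(rhs, (0.0, 10.0), [C_NOR], args=(gra_rel,))
    print(f"gra_rel={gra_rel}: steady G-CSF ~ {sol.y[0, -1]:.2f} ug")
```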
New pharmacodynamic model assumptions

The pharmacodynamic model describes the dynamics of the bone marrow cell stages, circulating granulocytes, G-CSF, the corresponding regulations and the action of chemotherapy. In Table 1 we present the major model compartments and their regulatory features. A complete set of equations can be found in the Additional file 1. We used the same pharmacodynamic model of G-CSF as presented in [40], except for a few simplifications and additional assumptions which we discuss now:

1. The cytokine GM-CSF (granulocyte-macrophage colony-stimulating factor) is no longer considered, in order to simplify the model. Since the endogenous productions of GM-CSF and G-CSF in our former model were both related to the demand of bone marrow cell stages, the time courses of endogenous G-CSF and GM-CSF after external perturbations are similar, making the two cytokines indistinguishable from the modelling point of view (compare [40]). Therefore, only G-CSF is considered in the present model. It replaces GM-CSF in the regulation of the CG compartment. This simplification is also biologically plausible, since G-CSF receptors are also expressed on myeloid progenitors [42].

2. The transition times T_CG and T_PGB were constant in our former version of the model but now depend on G-CSF via a Z-function. This assumption was made to account for the increased number of mitoses in these compartments. However, due to the lack of data, it must be considered speculative.

3. Since Pegfilgrastim and Filgrastim are supposed to have different G-CSF receptor binding affinities [26-29], we assume different regulatory Z-functions for the two derivatives, but the same pharmacodynamic parameters for Filgrastim and endogenously produced G-CSF.

4. While in our former model version the G-CSF concentration instantaneously affected the value of the Z-functions, we now introduce a time delay of the G-CSF action. This was motivated by transduction network analyses which revealed a delayed response of the transcriptome to G-CSF stimulation [57]. The delay is modelled by a cascade of four subcompartments (see section "Basic model mechanisms"; a numerical illustration of such a cascade is sketched after this list). The efflux of the last subcompartment is the delayed G-CSF concentration C^rel,del_cent, which is the new argument of our Z-functions:

  d/dt C_1 = D_G-CSF * (C^rel_cent - C_1),   (10)
  d/dt C_i = D_G-CSF * (C_{i-1} - C_i),  i = 2,...,4,   (11)

with C^rel,del_cent = C_4. The delay parameter D_G-CSF is specific for Pegfilgrastim and Filgrastim but constant across all Z-functions. Hence, for all quantities Y regulated by G-CSF, the Z-function (1) is now evaluated at the delayed concentration, Z_Y = Z_Y(C^rel,del_cent). The normal value Y^nor of the Z-function refers to C^nor_cent for endogenous G-CSF or Filgrastim and to an absolute amount of 1 microgram for Pegfilgrastim.

5. In case of Pegfilgrastim injections, Pegfilgrastim and endogenous G-CSF compete for receptor binding. To model this process, the Z-functions of Pegfilgrastim and Filgrastim are added using a weighting factor omega_P, which is again a Z-function, namely of the quotient of the two amounts, with minimum 0 and maximum 1:

  Z^comb_Y = omega_P * Z^peg_Y + (1 - omega_P) * Z^fil_Y,   (12)
  omega_P = Z(C^peg_cent / C^fil_cent),   (13)

where Y is an arbitrary regulated quantity such as a transition time or an amplification. For all these quantities we assumed the same Z-function for the weighting parameter omega_P.

6. In the previous version of our model, chemotherapy was modelled by an instantaneous depletion of bone marrow cells lasting exactly one day. Since independence of the cytotoxic action of the single chemotherapy components was assumed, the effect of chemotherapy could be modelled by a step function with a step width of one day. Although the metabolism of cytotoxic drugs is usually fast, the nadir of bone marrow cell stages is typically reached a few days after the application [58]. To account for this observation, we delay the toxic effect of chemotherapy applications by a cascade of four subcompartments, in complete analogy to (10), (11), resulting in a delayed toxicity function Psi^GRA. The delay parameter is specific for the cytotoxic drugs used. Hence, two parameters are required to define the toxic effect of a chemotherapeutic drug on a single cell lineage: a delay parameter and a scaling factor of the toxicity function. While the delay is specific only for the applied chemotherapeutic drug and constant across all cell stages, the scaling factor is specific for both.

7. Since our model is a model of granulopoiesis, we can only make predictions for absolute neutrophil counts. In clinical practice, however, often only leukocyte counts are available. In our former model version, we assumed proportionality of leukocytes and absolute neutrophil counts, which is only roughly correct [38,40]. To be more precise, we now calculate the leukocyte count as the sum of lymphocytes and granulocytes, and, to avoid a full model of lymphopoiesis, we model the reduced lymphocyte count under chemotherapy by an exponential function of the corresponding toxicity function:

  C_LEU(t) = C_LY(t) + c_GRA * C^rel_GRA(t),   (14)
  C_LY(t) = c_LY * exp(-Psi_LY(t)),   (15)

where c_LY = 3000 cells per microlitre and c_GRA = 4000 cells per microlitre are the normal concentrations of lymphocytes and granulocytes, respectively. Psi_LY is the toxicity function for lymphocytes, defined analogously to the toxicity functions of granulopoiesis (see item 6).
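Here is the numerical illustration of the delay cascade of equations (10) and (11) promised in item 4: the chain shifts and smooths its input with a Gamma-distributed delay, as described in the section on delay modelling. The delay rate below is arbitrary (chosen so that the total mean delay is about 6 h) and is not a fitted value.

```python
import numpy as np
from scipy.integrate import solve_ivp

D = 4.0 / 0.25  # 1/day: cascade rate; 4 stages, total mean delay 0.25 d (~6 h)

def cascade_rhs(t, c):
    """Linear chain of four first-order stages, cf. equations (10)-(11).
    The input is the relative G-CSF level, here a unit step at t = 0.5 d."""
    c_in = 1.0 if t >= 0.5 else 0.0
    dc = np.empty(4)
    dc[0] = D * (c_in - c[0])
    for i in range(1, 4):
        dc[i] = D * (c[i - 1] - c[i])
    return dc

sol = solve_ivp(cascade_rhs, (0.0, 2.0), np.zeros(4), max_step=0.01)
delayed = sol.y[3]  # C_4: the delayed signal fed into the Z-functions
# The output reaches half-maximum roughly one mean delay after the step.
t_half = sol.t[np.argmax(delayed >= 0.5)]
print(f"half-maximum reached at t = {t_half:.2f} d (step at t = 0.50 d)")
```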
Construction of toxicity functions

Since the precise structure of the toxicity function depends on the schedule of the chemotherapy, we demonstrate the construction of the toxicity functions using six cycles of CHOP therapy with a cycle duration of 14 days (6xCHOP-14) as an example. During CHOP-14 therapy, the cytotoxic drugs cyclophosphamide, doxorubicin and vincristine are applied concomitantly on the first day of each cycle. Since the contributions of these drugs to granulotoxicity cannot be separated, we assume a single toxicity function for this drug combination. Hence, the chemotherapy injection function Psi^inj_6xCHOP-14 reads, in analogy to (3),

  Psi^inj_6xCHOP-14(t) = sum_{i=1}^{6} 1/t_inf * (Theta(t - t_i) - Theta(t - t_i - t_inf)),

where the t_i are the starting days of the six cycles. No dose parameter is required here, since the intensity of damage is defined by the toxicity parameters below. We set t_inf = 1 d for 6xCHOP-14. The delayed action of chemotherapy is then modelled in analogy to (10), (11):

  d/dt Psi^X_1 = D_X * (Psi^inj - Psi^X_1),
  d/dt Psi^X_i = D_X * (Psi^X_{i-1} - Psi^X_i),  i = 2,...,4,

where X in {LY, GRA}, i.e. the delay parameter D_X is specific for each chemotherapy and differs between the toxicity functions of granulocytes and lymphocytes. The toxicity functions are then defined as

  Psi_Y(t) = k_Y * Psi^GRA_4(t),

where the k_Y are the toxicity parameters and Y in {S, CG, PGB, MGB}, i.e. the toxicity function is specific for the different cell stages of granulopoiesis. If drugs are applied at different schedules, the corresponding toxicity functions are added. The sketched principle can easily be generalized to derive toxicity functions of arbitrary chemotherapy schedules.
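To make the construction concrete, here is a small Python sketch that assembles a delayed toxicity function for a 6xCHOP-14-like schedule from the pieces above: rectangular injection pulses, a four-stage delay cascade and stage-specific scaling. Delay and toxicity values are invented for illustration, not estimated parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

CYCLE_STARTS = [14.0 * i for i in range(6)]  # days 0, 14, ..., 70
T_INF = 1.0                                  # day: width of each pulse
D_GRA = 2.0                                  # 1/day: illustrative delay rate

def psi_inj(t):
    """Unit-height chemotherapy pulses, one per cycle (analogy to eq. (3))."""
    return sum(1.0 for t_i in CYCLE_STARTS if t_i <= t <= t_i + T_INF)

def rhs(t, psi):
    """Four-stage delay cascade yielding the delayed toxicity Psi_4."""
    d = np.empty(4)
    d[0] = D_GRA * (psi_inj(t) - psi[0])
    for i in range(1, 4):
        d[i] = D_GRA * (psi[i - 1] - psi[i])
    return d

sol = solve_ivp(rhs, (0.0, 84.0), np.zeros(4), max_step=0.1)
psi_gra = sol.y[3]

# Stage-specific toxicity: scale the common delayed signal per compartment.
k_tox = {"S": 0.05, "CG": 0.6, "PGB": 0.8, "MGB": 0.3}  # illustrative values
psi_stage = {stage: k * psi_gra for stage, k in k_tox.items()}
print(f"peak delayed toxicity for PGB: {psi_stage['PGB'].max():.2f}")
```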
Model calibration, parameter estimation and validation

Estimation of parameters

A main goal of our study is to construct pharmacokinetic models of Filgrastim and Pegfilgrastim. Based on the pharmacokinetic model equations presented above, a set of unknown pharmacokinetic parameters needs to be determined. Since detailed bone marrow data of human granulopoiesis are not available, most of the bone marrow parameters are known only up to a certain range or are completely unknown. Furthermore, model parameters regarding the sensitivity of regulatory mechanisms (sensitivity parameters) have no directly measurable biological equivalent. Finally, we want to apply the model to chemotherapy settings, which requires a quantification of the corresponding toxicity and delay parameters. Many of the parameters of the present model version were adopted from an earlier version of the model, especially if they are not very sensitive with respect to model behaviour (compare [40]). But the inclusion of new regulatory mechanisms (see section "New pharmacodynamic model assumptions") made some adaptations of model parameters necessary. To address these challenges, we established the following stepwise fitting procedure, keeping parameters identified at a previous step constant:

1. Pharmacokinetic model parameters were determined on the basis of available cytokine dynamics after G-CSF application. In order to model the specific elimination mechanism, we imprinted the corresponding data of granulocyte dynamics at this stage of modelling. We obtained a unique parameter set which is valid for all dosing and timing schedules of all scenarios considered.

2. Pharmacodynamic parameters were determined by fitting the predictions of the model to available granulocyte and leukocyte dynamics of different scenarios (G-CSF applications and simple chemotherapies, for which the number of chemotherapy parameters to be fitted is relatively low). The resulting parameter set is valid for all scenarios with and without chemotherapy applications. Since the stem cell compartment is the basis of all of our models of haematopoietic lineages, we decided to keep the corresponding parameters constant, as presented in [40,59].

3. Afterwards, toxicity parameters of more complex chemotherapies can be estimated. More details of chemotherapy modelling, parameter estimation and the exploration of patients with different risks of haematotoxicity can be found in a separate publication of our group (Wetzler et al., to appear).

A complete parameter list for our model is provided in the Additional file 1. Not all data sets were used to fit parameters; a few were kept in reserve in order to validate the model.

Available data sets

Data sets were collected from the literature by an extensive search. For our modelling purposes, close-meshed time series of G-CSF and absolute neutrophil counts (ANC) or leukocytes are especially valuable. Corresponding data were extracted from the publications as precisely as possible using automated tools. Data sets for which no means or medians of the patients could be retrieved were neglected. The data sets comprise single or multiple applications of G-CSF in healthy volunteers and conventional chemotherapies of different diseases with or without G-CSF prophylaxis. Additionally, we can rely on our own clinical trial data, for which one of us (Markus Loeffler) is the responsible biostatistician or for which we have cooperation agreements. Leukocyte raw data under chemotherapy are available from published studies of the German High-Grade Non-Hodgkin-Lymphoma Study Group. These studies were conducted in accordance with the Declaration of Helsinki. The corresponding protocols were approved by the ethics review committee of each participating center. Written informed consent was obtained from the patients for publication of this report and any accompanying images. An overview of the data used for model fitting and validation is given in Table 2.

Fitting procedure

As mentioned above, unknown parameters of the model were determined by fitting the predictions of the model to available clinical data, minimizing the L1 distance between the logarithmized model prediction and the logarithmized median of the data. More precisely, we minimize

  F(k) = integral from t_0 to t_1 of | log f_model(t, k) - log f_data(t) | dt,   (16)

where f_model(t,k) is the solution of the model equation system for the granulocyte compartment at time t based on the parameter set k = (k_1,...,k_n). For each scenario, t_0 and t_1 are the first and the last time points for which data are available. To obtain the curve log f_data(t), the logarithms of the patient medians were linearly interpolated. Logarithms of the data were used to provide an optimal fit of the nadir phase of the cell counts. In the following, the left-hand side of equation (16) is referred to as the fitness function.

As in our previous papers, evolutionary strategies were used for the numerical solution of the optimisation problem. This method is especially suitable for our problem since it requires a minimum of computationally expensive evaluations of the fitness function, it can deal with a large number of free parameters with only a linear growth in effort, and it offers the best chance of approaching a global optimum. Evolutionary strategies are non-deterministic optimization methods based on the principles of evolution (mutation by chance, reproduction, realization of phenotypes and survival of the fittest). For mathematical optimisation, parameter settings take the place of living organisms: parental parameter settings are changed by chance (mutation), combined to form new parameter settings (reproduction) and used to solve the model equation system (realization). The parameter settings for which one obtains a good agreement between the model prediction and the data are taken to create the next generation of parameter settings (survival of the fittest); the fitness function is the measure of this agreement. We used a (1+3) evolutionary strategy with self-adapting mutation step size most of the time, i.e. one possibly immortal parent creates three children in each step. See also [68,69] for further details of evolutionary strategies.
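As an illustration of this fitting approach, the following Python sketch implements a bare-bones (1+3) evolutionary strategy with a self-adapting mutation step size, minimizing an L1 distance between logarithmized model output and logarithmized data. The toy "model" is a stand-in for the granulopoiesis system, and the step-size adaptation rule is one simple possibility, not the exact scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)

def model(t, k):
    """Toy stand-in for the ODE model: an exponential recovery curve."""
    return k[0] * np.exp(-k[1] * t) + k[2]

data = model(t, np.array([5.0, 0.7, 1.0]))   # synthetic 'observed' medians

def fitness(k):
    """L1 distance of logarithmized curves, cf. equation (16)."""
    return np.trapz(np.abs(np.log(model(t, k)) - np.log(data)), t)

parent = np.array([1.0, 0.2, 2.0])
sigma = 0.3                                  # mutation step size
for gen in range(200):
    children = [np.abs(parent + sigma * rng.standard_normal(3)) for _ in range(3)]
    best = min(children, key=fitness)
    if fitness(best) < fitness(parent):      # (1+3): the parent may survive
        parent, sigma = best, sigma * 1.1    # success: widen the search
    else:
        sigma *= 0.9                         # failure: narrow the search
print(parent, fitness(parent))
```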
Fitting of chemotherapy schedules requires additional parameters with respect to the chemotherapy toxicity. The delay parameter of the toxicity is specific for each drug but the same for all bone marrow cell stages. Four toxicity parameters are required to model the cell-stage-specific toxicities. Another parameter represents the increased toxicity of the first chemotherapy application (first-cycle effect). Finally, two parameters represent the toxicity to the lymphopoietic system (eq. (15)). In general, modelling a chemotherapy requires several sets of these parameters in order to cover all drugs or drug combinations with different schedules. However, for the purpose of model calibration, we only considered simple chemotherapy schedules, in order to reduce the number of additional unknown parameters to be fitted. The CHOP regimens are based on the application of three cytotoxic drugs (cyclophosphamide, doxorubicin and vincristine) at the same time. Hence, only one set of chemotherapy parameters was assumed to model the effects of this drug combination. Additionally, different G-CSF schedules are available for these regimens, which is especially useful for the calibration of our pharmacokinetic and -dynamic models of Filgrastim and Pegfilgrastim. Not all of our data sets were utilized for parameter fitting. A subset of data sets was kept in reserve in order to validate the model; these comprise the data of [63,65].

Sensitivity analysis

Since our model contains several parameters which are speculative or known only up to a certain range, we performed an extensive sensitivity analysis of all parameters. For this purpose, parameters were increased or decreased by 2.5% and the corresponding change of the fitness function was determined. Here, only the affected model scenarios were considered. The changes of the fitness function were plotted as bar diagrams for each parameter in order to facilitate comparisons. The figures are shown in the Additional file 1.

Simulation and numerical methods

Our model was programmed with MATLAB 7.5.0.342 (R2007b) and the SIMULINK toolbox (The MathWorks Inc., Natick, MA, USA). Simulations of the model were performed by numerical integration of the equation system using the variable-step Adams-Bashforth solver implemented in the SIMULINK toolbox.
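A sensitivity check of the kind described above can be sketched in a few lines of Python: each parameter is perturbed by ±2.5% and the change of the fitness function is recorded. The fitness function here is any callable mapping a parameter vector to the L1 distance of equation (16); the quadratic toy below is only a placeholder.

```python
import numpy as np

def fitness(k):
    """Placeholder for the model fitness, eq. (16): a toy quadratic bowl."""
    k_ref = np.array([1.0, 0.5, 2.0, 0.1])
    weights = np.array([10.0, 1.0, 0.1, 5.0])   # unequal sensitivities
    return float(np.sum(weights * (k - k_ref) ** 2))

k_fit = np.array([1.05, 0.48, 2.1, 0.11])        # pretend fitted parameters
f0 = fitness(k_fit)

for i, name in enumerate(["p1", "p2", "p3", "p4"]):
    for factor in (0.975, 1.025):                # -2.5% and +2.5%
        k = k_fit.copy()
        k[i] *= factor
        print(f"{name} x{factor}: delta fitness = {fitness(k) - f0:+.4f}")
```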
Results

Pharmacokinetic model of Filgrastim and Pegfilgrastim

Given the model equations of section "Pharmacokinetic model equations", we determined the pharmacokinetic parameters for Filgrastim and Pegfilgrastim separately by fitting the model to time series of G-CSF serum concentrations after single or multiple applications of one of the two G-CSF derivatives. Parameter estimates are shown in Table 3. Combined with our pharmacodynamic model (see section "The new pharmacodynamic model"), these parameter estimates resulted in a good fit of all model scenarios. Examples are presented in Figure 4. A complete list of all scenarios is presented in the Additional file 1.

The new pharmacodynamic model

Unknown pharmacodynamic parameters were determined by fitting the predictions of our model to available time courses of ANC or leukocytes after single or multiple injections of G-CSF or chemotherapy. All scenarios presented in section "Available data sets" were used for this simultaneous fitting process, except for those reserved for model validation (compare section "Fitting procedure"). The fitted parameters resulted in a good fit of all model scenarios except for time points shortly after G-CSF injections. Examples are shown in Figure 4. All other scenarios of G-CSF application can be found in the Additional file 1. Chemotherapy scenarios are presented in the next section.

The most sensitive pharmacodynamic parameters influencing the behaviour of the model are the parameters of the CG compartment (regulation of the proliferative fraction, transition time and amplification), the amplification in the PGB compartment and the postmitotic amplification. Sensitivity parameters of the regulation functions are generally less sensitive, except for the sensitivity parameter of the regulation of the postmitotic amplification. A complete list of the results of the sensitivity analysis can be found in the Additional file 1.

Pharmacodynamic differences between Filgrastim and Pegfilgrastim can be traced back to differences in the regulation functions. In general, compared to Pegfilgrastim, the regulation functions of Filgrastim show a higher sensitivity to changes of the G-CSF concentration, i.e. greater slopes and higher values under maximum stimulation (see Figure 5 for an example).

In contrast to our former model of granulopoiesis, we assumed a delayed effect of G-CSF (assumption 4 in section "New pharmacodynamic model assumptions"). Consequences of this assumption are studied in Figure 6 on the basis of model simulations of single Filgrastim injections. The estimated delay is moderate in size, resulting in a small shift of the time courses of the cell stages. This shift is negligible for MGB but more pronounced for the granulocyte compartment, which can be explained by the postmitotic amplification mechanism (see [40]).

The new chemotherapy model

Chemotherapy was modelled by a transient depletion of bone marrow cell stages. In contrast to our former model of granulopoiesis, we assumed a delay of the bone marrow depletion. The effect of this delay is studied in Figure 7. The delay results in a later occurrence and a reduced depth of the leukocyte nadir. The delay was assumed to be different for different chemotherapeutic drugs or drug combinations (see Wetzler et al. for further details). Data of the CHOP regimen were utilized to fit both the pharmacokinetic and -dynamic model and the set of specific toxicity parameters. This set of toxicity parameters was valid for all G-CSF schedules applied as supportive therapy for CHOP. Results of these scenarios are shown in Figure 8. Comparison of model and data revealed a good agreement: for almost all time points, the model curve lies within the interquartile range of the data. Chemotherapy and delay parameters for the CHOP regimen can be found in the Additional file 1.

Since we assumed a simplified model of lymphocyte toxicity under chemotherapy, we can estimate the ratio of granulocytes to leukocytes under therapy, offering a further possibility to validate the model. Results for the CHOP-14 regimen with Filgrastim application at days 4 to 13 are displayed in Figure 9. This ratio was estimated to be clearly non-constant, varying between 68% and 98%.

Validation of the model

A few datasets were kept in reserve in order to validate our model. The phase 1 data of Varki et al. [63] were not used for model fitting, nor were the data of CHOP chemotherapy under Pegfilgrastim treatment of George et al. [65]. Compared to the CHOP data used for model fitting, the data of George et al. [65] comprise G-CSF serum levels as well. Both scenarios fit well with our model predictions (see Figure 10). No additional parameter fitting was performed to model these scenarios.
Discussion

In the present paper, we developed an ordinary differential equation model of human granulopoiesis under chemotherapy and G-CSF support. The model was built on the basis of a former model of granulopoiesis of our group, which has now been improved primarily by the incorporation of a detailed pharmacokinetic and -dynamic model of two G-CSF derivatives (Filgrastim and Pegfilgrastim). Here, the pharmacokinetic model was adopted from similar models developed by our group for G-CSF applications in mice and rats. Unknown model parameters were obtained by fitting the predictions of the model to available datasets. The combined pharmacokinetic and -dynamic model correctly predicts the time courses of a variety of datasets comprising single or multiple injections of G-CSF into healthy volunteers or patients under CHOP chemotherapy. We were able to describe the differences between the G-CSF derivatives by a set of different pharmacokinetic and -dynamic parameters. The model was validated on the basis of datasets not used for model fitting.

The presented model is by no means the first attempt to model granulopoiesis or G-CSF applications. Published models comprise, for example, pure pharmacokinetic models [25,46], pharmacokinetic and -dynamic models of G-CSF application on the cellular level [49], in healthy volunteers [50,51,70], for the treatment of cyclic neutropenia [71], for high-dose chemotherapy with stem cell transplantation [72,73] and for conventional chemotherapy patients [74,75]. We ourselves developed a model of human granulopoiesis under chemotherapy in the past, including a preliminary model of Filgrastim application [40]. To the best of our knowledge, however, there is so far no granulopoiesis model of humans under conventional chemotherapy comprising a detailed pharmacokinetic and -dynamic model of the two G-CSF derivatives Filgrastim and Pegfilgrastim. This combination allows us to derive clinically meaningful applications of the model.

As mentioned, the presented model was based on a former model of our group. This model rests on biologically plausible assumptions regarding the production of mature granulocytes via a cascade of bone marrow cell stages, the action of chemotherapy and the action of growth-factor-mediated feedback loops. The equations describe the fluxes between cell compartments. G-CSF was modelled as the major regulatory element of both the transition times and the amplifications within the compartments. Chemotherapy was modelled by an instantaneous and transient cell loss in all bone marrow cell stages. Since these basic model assumptions were discussed intensively in our former paper [40], we focus on our new model assumptions and parameters in the following.

The major improvement of our model is the incorporation of a detailed pharmacokinetic model of the two G-CSF derivatives Filgrastim and Pegfilgrastim. Both are widely used in clinical practice in order to ameliorate leukopenia during cancer chemotherapy. The pharmacokinetic model was constructed in complete analogy to the pharmacokinetic models which we recently developed for mice and rats [29,36]. That is, we made the same physiological assumptions and used the same model equations but different parameters. Furthermore, we used the same model equations for both drugs, assuming that pharmacokinetic differences between the drugs can be traced back to different parameters rather than to different mechanisms of action.
The drugs are typically injected into the subcutaneous tissue, resulting in a delayed absorption into the circulating blood compartment, probably via lymphatic absorption [45]. The delay was modelled by a set of concatenated first-order differential equations rather than a fixed time delay. We showed in the past that this kind of modelling is equivalent to a Gamma-distributed transition time, which is biologically plausible. Here, the variance is determined by the number of subcompartments [40]. However, this variance appears to be of lesser importance for the model behaviour. In analogy to [36], we observed that a number of subcompartments between two and ten would also work well. To reduce the computational effort in the present model, we used the smaller number.

Data collected in mouse and rat experiments suggested that subcutaneously injected G-CSF has a dose-dependent bioavailability [29,36]. Therefore, we introduced a loss term into the equations of the subcutaneous compartment. Since there is some evidence of reversible protein binding of G-CSF molecules, we modelled a first-order transition between the blood compartment and a peripheral compartment [46].

Our model assumptions regarding the endogenous production of G-CSF are speculative. We assumed that the production is regulated between a minimum and a maximum value in dependence on bone marrow cellularity [47]. In steady state, the production is constant in order to sustain a fixed serum concentration determined by averaged data from the literature.

Degradation of G-CSF was modelled by two independent processes: an unspecific renal clearance, modelled by a first-order transition, and a specific degradation via neutrophil elastase or G-CSF receptors. All three degradation mechanisms are biologically well understood, but their relative importance is unknown (see the literature discussed in section "Pharmacokinetic model assumptions"). The specific degradation was modelled by a Michaelis-Menten kinetic assumed to be proportional to the number of granulocytes. For the degradation due to neutrophil elastase, this assumption seems appropriate [21]. On the other hand, since G-CSF receptors are also present on bone marrow progenitors and precursors [42], it appears less appropriate for the receptor-mediated clearance mechanism. Nevertheless, we assumed proportionality as the most parsimonious model, resulting in a good agreement of predictions and data. This assumption also worked well for the pharmacokinetic modelling in mice [36] but not for rats [29]. Moreover, we experimented with alternative model assumptions, assuming consumption of G-CSF by both mature neutrophils and precursors, which did not significantly improve the quality of our model predictions. Variable numbers of G-CSF receptors per cell were observed in dependence on the G-CSF level [56]; modelling this observation would require additional assumptions and parameters. We decided to omit this in the current version of the model in view of the relatively good quality of the model predictions in the clinical scenarios considered so far.

The model equations worked well to explain the dynamics of G-CSF serum concentrations after single or multiple injections of Filgrastim or Pegfilgrastim in healthy or diseased people. The same equations also worked well for a third G-CSF derivative, namely Maxy-G34, a novel G-CSF derivative currently under development by Maxygen Inc.
However, due to a confidentiality agreement with Maxygen Inc., these results are not shown in the present paper.

In order to make predictions regarding the response of granulopoiesis to applications of G-CSF, it was necessary to attach a pharmacodynamic model of G-CSF application. Since there is some evidence that the pegylation of the Pegfilgrastim molecule interferes with its binding affinity to the G-CSF receptor [26-29], we decided to assume different regulation functions for Filgrastim and Pegfilgrastim. Hence, the Z-functions of the transition times and amplifications in CG, PGB and MGB are assumed to be different for Filgrastim and Pegfilgrastim. On the other hand, we assumed the same Z-functions for Filgrastim and endogenous G-CSF. Because Filgrastim/endogenous G-CSF and Pegfilgrastim were assumed to differ in their Z-functions, it was necessary to merge the superimposing effects of Pegfilgrastim and endogenous G-CSF, and likewise of concurrent Pegfilgrastim and Filgrastim applications. This was solved by a weighting factor which is regulated between zero and one in dependence on the ratio of Pegfilgrastim to Filgrastim or endogenous G-CSF in the system. If the Pegfilgrastim share is high or low, the system is mainly influenced by the Z-function of Pegfilgrastim or of Filgrastim, respectively. Although this assumption is plausible, the complete regulation mechanism via the combined Z-functions must be considered speculative, since especially the shape of the regulation functions can hardly be observed or measured.

Another speculative mechanism introduced into our model update is the delayed effect of G-CSF action. By model fitting, we estimated the corresponding delay time to be about 6 h, which appears to be of the right order of magnitude compared to the dynamics presented in [57]. However, the overall impact of the delay on the model dynamics is limited. At least for the scenarios considered in the present paper, it is not critical for the quality of the agreement between model and data.

Furthermore, some adjustments were made to the chemotherapy modelling. Instead of the instantaneous cytotoxic effect of chemotherapy assumed in our former model version, we now assume a delayed effect, to account for available data on the dynamics of bone marrow cell stages after chemotherapy applications in mice [58]. The delay parameter was assumed to be constant for all cell stages but specific for the applied drugs or drug combinations, and was modelled by a cascade of first-order transitions. This modelling is a phenomenological rather than a mechanistic approach, since the delay is caused by many factors, such as toxification of the applied drugs at different time scales, transient cell-cycle arrest, and delayed apoptosis of cells due to irreversible damage.

Another improvement of our model is the semi-explicit modelling of lymphocyte toxicity. This was necessary in order to apply the model to a sufficiently large dataset of time courses of both granulocytes and leukocytes. In our former model, we assumed proportionality of granulocytes and leukocytes during G-CSF application or chemotherapy, which is only roughly correct ([38] and further unpublished data of our chemotherapy studies). To avoid a complete cell-kinetic model of lymphopoiesis, which would require a large set of new and unknown model parameters, lymphocyte counts were modelled by a separate simple characteristic.
We assumed no effect of G-CSF on lymphopoiesis, but a toxic effect of chemotherapy, modelled by an exponential depletion of lymphocytes according to a (delayed) chemotherapy toxicity function. This toxicity is again specific for the drugs or drug combinations applied. The resulting model was able to adequately explain the time courses of leukocytes and granulocytes under G-CSF or chemotherapy within the framework of a single model.

The model is based on a relatively large set of parameters. Due to missing bone marrow data during chemotherapy and growth factor application, only limited knowledge regarding the required cell-kinetic and toxicity parameters is available. Rough ranges for transition times and amplification rates in steady state or under stimulation by G-CSF were obtained from the literature. But especially the values under minimal stimulation, the sensitivity parameters of the Z-functions and the toxicity parameters are not available from literature data. Hence, many model parameters were determined indirectly by fitting the predictions of the model to available datasets. For this purpose, we collected a set of suitable data from the literature and from clinical trials for which we have access to raw data. Densely measured time courses of G-CSF serum levels after application, in combination with granulocyte or leukocyte counts after chemotherapy and different G-CSF schedules, are especially useful. Data of patients were pooled to construct a model that fits the median of patients. A unique parameter set was identified which is valid for all scenarios considered; no adjustments were performed in order to fit single scenarios. Not all datasets were used for model fitting, providing an opportunity for model validation. Despite the utilization of several datasets comprising different G-CSF dosing and timing schedules with and without chemotherapy, a large uncertainty regarding the parameter estimates remains, and consequently the current parameter settings must be considered preliminary. This is especially true for parameters with a low impact on our fitness function in the scenarios considered, as demonstrated by our sensitivity analysis. Additionally, the toxicity parameters show some degree of dependence in the sense that a higher toxicity at one cell stage can, to some degree, be compensated by a lower toxicity at a subsequent cell stage, and vice versa. Consequently, further datasets and validation scenarios are required to improve the confidence in our parameter settings.

The estimates of our pharmacokinetic parameters resulted in a good fit of all time series data of G-CSF serum concentrations for both Filgrastim and Pegfilgrastim applications. The estimated values fit well to our biological understanding of the drugs. Due to the pegylation of the drug, it was expected that the unspecific renal clearance is significantly reduced for Pegfilgrastim, which is in agreement with our parameter estimates. We also estimated a reduced specific degradation of Pegfilgrastim, which could be explained by a reduced receptor binding affinity or by hydrophilic properties of pegylated molecules [26,27,76]. We made the same observation in our pharmacokinetic models constructed for mice and rats [29,36]. Protein binding was estimated to be almost negligible for Filgrastim but important for Pegfilgrastim, in agreement with our observations in mice [36]. Finally, the estimated distribution volumes are in rough agreement with the findings of other authors [50,77].
The estimates of our cell-kinetic, pharmacodynamic and toxicity parameters also resulted in a good fit of the time courses of granulocytes and leukocytes after application of chemotherapy, G-CSF or combinations thereof. Possible exceptions are cell counts measured shortly after the first application of Filgrastim, which seem to be underestimated in some scenarios. We conclude that the model works well on a day-wise scale but might be unable to explain short-term or transient effects of G-CSF applications, e.g. on the scale of hours. Modelling such short-term effects would require a better database, since almost all available time courses of granulocytes and leukocytes were measured at most on a day-wise scale. However, in order to make predictions regarding the efficiency of different G-CSF schedules, we are more interested in the long-term dynamics of granulocytes and leukocytes in the course of the therapy than in short-term effects after single injections.

Estimates of the pharmacodynamic parameters of Filgrastim and Pegfilgrastim suggest that Filgrastim has a higher potency to stimulate the bone marrow. This is in agreement with our biological understanding that pegylation reduces the receptor binding affinity. An analogous observation was made for Pegfilgrastim and the novel drug Maxy-G34, which has even more pegylation sites than Pegfilgrastim [29].

Although modelling of chemotherapy was not the primary goal of the present paper, it was necessary to model at least a few conventional chemotherapy regimens in order to study the pharmacokinetic properties of the G-CSF derivatives under granulopenic conditions. Chemotherapy was modelled as a transient, delayed toxic effect on all bone marrow cell stages. The corresponding toxicity parameters are specific for the bone marrow cell stages and the drugs and drug doses used. For the development of our model, we used the data of the simplest chemotherapy regimen, CHOP. With our toxicity parameter estimates, one obtains a good fit of all CHOP regimens with different G-CSF schedules. Other conventional chemotherapies were modelled by assuming different toxicity parameters but the same cell-kinetic model. Corresponding model simulations will be demonstrated in a separate paper of our group comprising about 20 different chemotherapy scenarios. Generally, the model can be applied to arbitrary conventional chemotherapy regimens for which data of leukocyte or granulocyte time courses are available for at least one G-CSF schedule. Based on these data, sets of toxicity parameters of the corresponding chemotherapies can be estimated. Using these parameters, it is possible to make clinically relevant predictions regarding the time courses of G-CSF serum concentrations, bone marrow cell stages and mature cell counts in circulation under different G-CSF schedules, allowing G-CSF treatment to be optimized. We will exploit the clinically relevant applications of our model in the near future.

Conclusion

We established a human pharmacokinetic and -dynamic model of Filgrastim and Pegfilgrastim applications under cytotoxic chemotherapy. The model is able to explain a large number of clinical time series of G-CSF serum concentrations, granulocytes and leukocytes of patients treated with G-CSF, with or without chemotherapy. A unique parameter set valid for all scenarios was established by fitting the predictions of the model to clinical data. The model was validated on a set of scenarios not used for parameter fitting.
Differences between Filgrastim and Pegfilgrastim could be traced back to biologically plausible differences in parameter estimates. Effects of chemotherapy can be quantified by a set of toxicity parameters. Given these toxicity parameters, the model can be used to simulate the dynamics of G-CSF, bone marrow cell stages and circulating granulocytes or leukocytes for as yet untested G-CSF schedules. The model is currently applied in the planning phase of clinical trials in order to optimize G-CSF treatment.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Model development: S.S., M.W., C.E., M.S. Parameter estimation and model simulations: S.S. Paper writing: M.S. All authors contributed to the discussion and the paper writing. All authors read and approved the final manuscript.

Supplementary material

Additional file 1 (1742-4682-9-32-S1.pdf): complete list of model equations, complete list of model parameters, additional model and data comparisons, sensitivity analysis [8,40,64,78-80].

Acknowledgements

S.S. and M.W. were funded by a grant of the Federal Ministry of Education and Research of the Federal Republic of Germany ("Haematosys", BMBF/PTJ0315452A). M.S. was funded by LIFE - Leipzig Research Center for Civilization Diseases, University of Leipzig. LIFE is funded by means of the European Union, by the European Regional Development Fund (ERDF) and by means of the Free State of Saxony within the framework of the excellence initiative.

References

1. Crawford J. Pegfilgrastim administered once per cycle reduces incidence of chemotherapy-induced neutropenia. Drugs 2002;62(Suppl 1):89-99.
2. Dale D. Current management of chemotherapy-induced neutropenia: the role of colony-stimulating factors. Semin Oncol 2003;30:3-9.
3. Siena S, Secondino S, Giannetta L, Carminati O, Pedrazzoli P. Optimising management of neutropenia and anaemia in cancer chemotherapy - advances in cytokine therapy. Crit Rev Oncol Hematol.
4. Wunderlich A, Kloess M, Reiser M, Rudolph C, Truemper L, Bittner S, Schmalenberg H, Schmits R, Pfreundschuh M, Loeffler M; German High-Grade Non-Hodgkin's Lymphoma Study Group (DSHNHL). Practicability and acute haematological toxicity of 2- and 3-weekly CHOP and CHOEP chemotherapy for aggressive non-Hodgkin's lymphoma: results from the NHL-B trial of the German High-Grade Non-Hodgkin's Lymphoma Study Group (DSHNHL). Ann Oncol 2003;14:881.
5. Souza LM, Boone TC, Gabrilove J, Lai PH, Zsebo KM, Murdock DC, Chazin VR, Bruszewski J, Lu H, Chen KK, Barendt J, Platzer E, Moore MAS, Mertelsmann R, Welte K. Recombinant human granulocyte colony-stimulating factor: effects on normal and leukemic myeloid cells. Science 1986;232:61-65.
6. Begley CG, Nicola NA, Metcalf D. Proliferation of normal human promyelocytes and myelocytes after a single pulse stimulation by purified GM-CSF or G-CSF. Blood 1988;71:640-645.
7. Lord BI, Bronchud MH, Owens S, Chang J, Howell A, Souza L, Dexter TM. The kinetics of human granulopoiesis following treatment with granulocyte colony-stimulating factor in vivo. Proc Natl Acad Sci USA.
8. Mackey MC, Aprikyan AA, Dale DC. The rate of apoptosis in post mitotic neutrophil precursors of normal and neutropenic humans. Cell Prolif 2003;36:27-34.
9. Kim HK, De La Luz Sierra M, Williams CK, Gulino AV, Tosato G. G-CSF down-regulation of CXCR4 expression identified as a mechanism for mobilization of myeloid cells. Blood 2006;108:812-820.
10. Christopher MJ, Link DC. Regulation of neutrophil homeostasis. Curr Opin Hematol 2007;14:3-8.
11. Diehl V, Franklin J, Pfreundschuh M, Lathan B, Paulus U, Hasenclever D, Tesch H, Herrmann R, Dorken B, Muller-Hermelink HK, Duhmke E, Loeffler M. Standard and increased-dose BEACOPP chemotherapy compared with COPP-ABVD for advanced Hodgkin's disease. N Engl J Med, 2003.
12. Pfreundschuh M, Truemper L, Kloess M, Schmits R, Feller AC, Ruebe C, Rudolph C, Reiser M, Hossfeld DK, Eimermacher H, Hasenclever D, Schmitz N, Loeffler M; German High-Grade Non-Hodgkin's Lymphoma Study Group. Two-weekly or 3-weekly CHOP chemotherapy with or without etoposide for the treatment of elderly patients with aggressive lymphomas: results of the NHL-B2 trial of the DSHNHL. Blood.
13. Engel C, Loeffler M, Schmitz S, Tesch H, Diehl V. Acute hematologic toxicity and practicability of dose-intensified BEACOPP chemotherapy for advanced stage Hodgkin's disease. German Hodgkin's Lymphoma Study Group (GHSG). Ann Oncol, 2000.
14. Frampton JE, Lee CR, Faulds D. Filgrastim. A review of its pharmacological properties and therapeutic efficacy in neutropenia. Drugs, 1994.
15. Frampton JE, Yarker YE, Goa KL. Lenograstim. A review of its pharmacological properties and therapeutic efficacy in neutropenia and related clinical settings. Drugs, 1995.
16. Houston AC, Stevens LA, Cour V. Pharmacokinetics of glycosylated recombinant human granulocyte colony-stimulating factor (lenograstim) in healthy male volunteers. Br J Clin Pharmacol.
17. Tanaka H, Tokiwa T. Influence of renal and hepatic failure on the pharmacokinetics of recombinant human granulocyte colony-stimulating factor (KRN8601) in the rat. Cancer Res, 1990.
18. Khwaja A, Carver J, Jones HM, Paterson D, Linch DC. Expression and dynamic modulation of the human granulocyte colony-stimulating factor receptor in immature and differentiated myeloid cells. Br J Haematol, 1993.
19. Ericson SG, Gao H, Gericke GH, Lewis LD. The role of polymorphonuclear neutrophils (PMNs) in clearance of granulocyte colony-stimulating factor (G-CSF) in vivo and in vitro. Exp Hematol.
20. El Ouriaghli F, Fujiwara H, Melenhorst JJ, Sconocchia G, Hensel N, Barrett AJ. Neutrophil elastase enzymatically antagonizes the in vitro action of G-CSF: implications for the regulation of granulopoiesis. Blood, 2003.
21. Hunter MG, Druhan LJ, Massullo PR, Avalos BR. Proteolytic cleavage of granulocyte colony-stimulating factor and its receptor by neutrophil elastase induces growth inhibition and decreased cell surface expression of the granulocyte colony-stimulating factor receptor. Am J Hematol, 2003.
22. Kotto-Kome AC, Fox SE, Lu W, Yang BB, Christensen RD, Calhoun DA. Evidence that the granulocyte colony-stimulating factor (G-CSF) receptor plays a role in the pharmacokinetics of G-CSF and PegG-CSF using a G-CSF-R KO model. Pharmacol Res, 2004.
23. Tanaka H, Satake-Ishikawa R, Ishikawa M, Matsuki S, Asano K. Pharmacokinetics of recombinant human granulocyte colony-stimulating factor conjugated to polyethylene glycol in rats. Cancer Res.
24. Crawford J. Clinical uses of pegylated pharmaceuticals in oncology. Cancer Treat Rev, 2002.
25. Yang BB, Lum PK, Hayashi MM, Roskos LK. Polyethylene glycol modification of filgrastim results in decreased renal clearance of the protein in rats. J Pharm Sci, 2004.
26. Harris JM, Chess RB. Effect of pegylation on pharmaceuticals. Nat Rev Drug Discov, 2003.
27. Lowenhaupt K, Wang PJ, Horan T, Lauffenburger DA, Sarkar CA. Parsing the effects of binding, signaling, and trafficking on the mitogenic potencies of granulocyte colony-stimulating factor analogues. Biotechnol Prog, 2003.
28. Veronese FM, Mero A. The impact of PEGylation on biological therapies. BioDrugs, 2008.
29. Scholz M, Engel C, Apt D, Sankar SL, Goldstein E, Loeffler M. Pharmacokinetic and pharmacodynamic modelling of the novel human G-CSF derivative Maxy-G34 and Pegfilgrastim in the rat. Cell Prolif.
30. Holmes FA, Jones SE, O'Shaughnessy J, Vukelja S, George T, Savin M, Richards D, Glaspy J, Meza L, Cohen G, Dhami M, Budman DR, Hackett J, Brassard M, Yang BB, Liang BC. Comparable efficacy and safety profiles of once-per-cycle pegfilgrastim and daily injection filgrastim in chemotherapy-induced neutropenia: a multicenter dose-finding study in women with breast cancer. Ann Oncol, 2002.
31. Grigg A, Solal-Celigny P, Hoskin P, Taylor K, McMillan A, Forstpointner R, Bacon P, Renwick J, Hiddemann W. Open-label, randomized study of pegfilgrastim vs. daily filgrastim as an adjunct to chemotherapy in elderly patients with non-Hodgkin's lymphoma. Leuk Lymphoma, 2003.
32. Vose JM, Crump M, Lazarus H, Emmanouilides C, Schenkein D, Moore J, Frankel S, Flinn I, Lovelace W, Hackett J, Liang BC. Randomized, multicenter, open-label study of pegfilgrastim compared with daily filgrastim after chemotherapy for lymphoma. J Clin Oncol, 2003.
33. Pinto L, Liu Z, Doan Q, Bernal M, Dubois R, Lyman G. Comparison of pegfilgrastim with filgrastim on febrile neutropenia, grade IV neutropenia and bone pain: a meta-analysis of randomized controlled trials. Curr Med Res Opin, 2007.
34. Ziepert M, Schmits R, Trumper L, Pfreundschuh M, Loeffler M. Prognostic factors for hematotoxicity of chemotherapy in aggressive non-Hodgkin's lymphoma. Ann Oncol, 2008.
35. Kroschinsky F, Holig K, Ehninger G. The role of pegfilgrastim in mobilization of hematopoietic stem cells. Transfus Apher Sci, 2008.
36. Scholz M, Ackermann M, Engel C, Emmrich F, Loeffler M, Kamprad M. A pharmacokinetic model of filgrastim and pegfilgrastim application in normal mice and those with cyclophosphamide-induced granulocytopaenia. Cell Prolif, 2009.
37. Hartmann F, Zeynalova S, Nickenig C, Reiser M, Lengfelder E, Duerk H, de Witt M, Schubert J, Loeffler M, Pfreundschuh M. Peg-filgrastim (Peg-F) on day 4 of (R-)CHOP-14 chemotherapy compared to day 2 in elderly patients with diffuse large B-cell lymphoma (DLBCL): results of a randomized trial of the German high-grade non-Hodgkin's lymphoma study group (DSHNHL). J Clin Oncol, ASCO Annual Meeting Proceedings Part I, 2007.
38. Scholz M, Ackermann M, Emmrich F, Loeffler M, Kamprad M. Effectiveness of cytopenia prophylaxis for different filgrastim and pegfilgrastim schedules in a chemotherapy mouse model. Biologics: Targets & Therapy, 2009.
39. Engel C, Scholz M, Loeffler M. A computational model of human granulopoiesis to simulate the hematotoxic effects of multicycle polychemotherapy. Blood, 2004.
40. Scholz M, Engel C, Loeffler M. Modelling human granulopoiesis under poly-chemotherapy with G-CSF support. J Math Biol, 2005.
41. Scholz M, Engel C, Loeffler M. Model-based design of chemotherapeutic regimens that account for heterogeneity in leucopoenia. Br J Haematol, 2006.
42. Tsuji K, Ebihara Y. Expression of G-CSF receptor on myeloid progenitors. Leuk Lymphoma, 2001.
43. Prosper F, Stroncek D, McCarthy JB, Verfaillie CM. Mobilization and homing of peripheral blood progenitors is related to reversible downregulation of alpha4 beta1 integrin expression and function. J Clin Invest, 1998.
44. Kuwabara T, Uchimura T, Takai K, Kobayashi H, Kobayashi S, Sugiyama Y. Saturable uptake of a recombinant human granulocyte colony-stimulating factor derivative, nartograstim, by the bone marrow and spleen of rats in vivo. J Pharmacol Exp Ther, 1995.
45. Kota J, Machavaram KK, McLennan DN, Edwards GA, Porter CJ, Charman SA. Lymphatic absorption of subcutaneously administered proteins: influence of different injection sites on the absorption of darbepoetin alfa using a sheep model. Drug Metab Dispos, 2007.
46. Kuwabara T, Kobayashi S, Sugiyama Y. Pharmacokinetics and pharmacodynamics of a recombinant human granulocyte colony stimulating factor. Drug Metab Rev, 1996.
47. Lenhoff S, Rosberg B, Olofsson T. Granulocyte interactions with GM-CSF and G-CSF secretion by endothelial cells and monocytes. Eur Cytokine Network, 1999.
48. Layton JE, Hockman H, Sheridan WP, Morstyn G. Evidence for a novel in vivo control mechanism of granulopoiesis: mature cell-related control of a regulatory growth factor. Blood, 1989.
49. Sarkar CA, Lauffenburger DA. Cell-level pharmacokinetic model of granulocyte colony-stimulating factor: implications for ligand lifetime and potency in vivo. Mol Pharmacol, 2003.
50. Roskos LK, Lum P, Lockbaum P, Schwab G, Yang BB. Pharmacokinetic/pharmacodynamic modeling of pegfilgrastim in healthy subjects. J Clin Pharmacol, 2006.
51. Wang B, Ludden TM, Cheung EN, Schwab GG, Roskos LK. Population pharmacokinetic-pharmacodynamic modeling of filgrastim (r-metHuG-CSF) in healthy volunteers. J Pharmacokinet Pharmacodyn.
52. Falanga A, Marchetti M, Evangelista V, Manarini S, Oldani E, Giovanelli S, Galbusera M, Cerletti C, Barbui T. Neutrophil activation and hemostatic changes in healthy donors receiving granulocyte colony-stimulating factor. Blood, 1999.
53. Levesque JP, Takamatsu Y, Nilsson SK, Haylock DN, Simmons PJ. Vascular cell adhesion molecule-1 (CD106) is cleaved by neutrophil proteases in the bone marrow following hematopoietic progenitor cell mobilization by granulocyte colony-stimulating factor. Blood, 2001.
54. Shimazaki C, Uchiyama H, Fujita N, Araki S, Sudo Y, Yamagata N, Ashihara E, Goto H, Inaba T, Haruyama H. Serum levels of endogenous and exogenous granulocyte colony-stimulating factor after autologous blood stem cell transplantation. Exp Hematol, 1995.
55. Steinman RA, Tweardy DJ. Granulocyte colony-stimulating factor receptor mRNA upregulation is an immediate early marker of myeloid differentiation and exhibits dysfunctional regulation in leukemic cells. Blood, 1994.
56. Tkatch LS, Rubin KA, Ziegler SF, Tweardy DJ. Modulation of human G-CSF receptor mRNA and protein in normal and leukemic myeloid cells by G-CSF and retinoic acid. J Leukoc Biol, 1995.
57. Srinivasa SP, Doshi PD. Extracellular signal-regulated kinase and p38 mitogen-activated protein kinase pathways cooperate in mediating cytokine-induced proliferation of a leukemic cell line. Leukemia, 2002.
58. Lohrmann HP, Schreml W. Cytotoxic Drugs and the Granulopoietic System. Berlin: Springer Verlag, 1982.
59. Scholz M, Gross A, Loeffler M. A biomathematical model of human thrombopoiesis under chemotherapy. J Theor Biol, 2010.
60. van der Auwera P, Platzer E, Xu ZX, Schulz R, Feugeas O, Capdeville R, Edwards DJ. Pharmacodynamics and pharmacokinetics of single doses of subcutaneous pegylated human G-CSF mutant (Ro 25-8315) in healthy volunteers: comparison with single and multiple daily doses of filgrastim. Am J Hematol, 2001.
61. Borleffs JC, Bosschaert M, Vrehen HM, Schneider MM, van Strijp J, Small MK, Borkett KM. Effect of escalating doses of recombinant human granulocyte colony-stimulating factor (filgrastim) on circulating neutrophils in healthy subjects. Clin Ther, 1998.
62. Chatta GS, Price TH, Allen RC, Dale DC. Effects of in vivo recombinant methionyl human granulocyte colony-stimulating factor on the neutrophil response and peripheral blood colony-forming cells in healthy young and elderly adult volunteers. Blood, 1994.
63. Varki R, Pequignot E, Leavitt MC, Ferber A, Kraft WK. A glycosylated recombinant human granulocyte colony stimulating factor produced in a novel protein production system (AVI-014) in healthy subjects: a first-in human, single dose, controlled study. BMC Clin Pharmacol, 2009.
64. Johnston E, Crawford J, Blackwell S, Bjurstrom T, Lockbaum P, Roskos L, Yang BB, Gardner S, Miller-Messana MA, Shoemaker D, Garst J, Schwab G. Randomized, dose-escalation study of SD/01 compared with daily filgrastim in patients receiving chemotherapy. J Clin Oncol, 2000.
65. George S, Yunus F, Case D, Yang BB, Hackett J, Shogan JE, Meza LA, Neumann TA, Liang BC. Fixed-dose pegfilgrastim is safe and allows neutrophil recovery in patients with non-Hodgkin's lymphoma. Leuk Lymphoma, 2003.
66. Pfreundschuh M, Schubert J, Ziepert M, Schmits R, Mohren M, Lengfelder E, Reiser M, Nickenig C, Clemens M, Peter N, Bokemeyer C, Eimermacher H, Ho A, Hoffmann M, Mertelsmann R, Truemper L, Balleisen L, Liersch R, Metzner B, Hartmann F, Glass B, Poeschel V, Schmitz N, Ruebe C, Feller AC, Loeffler M; German High-Grade Non-Hodgkin's Lymphoma Study Group. Six versus eight cycles of bi-weekly CHOP-14 with or without rituximab in elderly patients with aggressive CD20+ B-cell lymphomas: a randomised controlled trial (RICOVER-60). Lancet Oncol, 2008.
67. Zwick C, Hartmann F, Zeynalova S, Poeschel V, Nickenig C, Reiser M, Lengfelder E, Peter N, Schlimok G, Schubert J, Schmitz N, Loeffler M, Pfreundschuh M; German High-Grade Non-Hodgkin Lymphoma Study Group. Randomized comparison of pegfilgrastim day 4 versus day 2 for the prevention of chemotherapy-induced leukocytopenia. Ann Oncol, 2011.
68. Rechenberg I. Evolutionsstrategie '94. Stuttgart: frommann-holzboog, 1994.
69. Schwefel HP. Evolution strategies: a family of nonlinear optimization techniques based on imitating some principles of organic evolution. Ann Oper Res, 1984.
70. Vainstein V, Ginosar Y, Shoham M, Ranmar DO, Ianovski A, Agur Z. The complex effect of granulocyte colony-stimulating factor on human granulopoiesis analyzed by a new physiologically-based mathematical model. J Theor Biol, 2005.
71. Foley C, Bernard S, Mackey MC. Cost-effective G-CSF therapy strategies for cyclical neutropenia: mathematical modelling based hypotheses. J Theor Biol, 2006.
72. Ostby I, Rusten LS, Kvalheim G, Grottum P. A mathematical model for reconstitution of granulopoiesis after high dose chemotherapy with autologous stem cell transplantation. J Math Biol.
73. Ostby I, Kvalheim G, Rusten LS, Grottum P. Mathematical modeling of granulocyte reconstitution after high-dose chemotherapy with stem cell support: effect of post-transplant G-CSF treatment. J Theor Biol, 2004.
74. Shochat E, Rom-Kedar V, Segel LA. G-CSF control of neutrophils dynamics in the blood. Bull Math Biol, 2007.
75. Foley C, Mackey MC. Mathematical model for G-CSF administration after chemotherapy. J Theor Biol, 2009.
76. Waladkhani AR. Pegfilgrastim: a recent advance in the prophylaxis of chemotherapy-induced neutropenia. Eur J Cancer Care, 2004.
77. Wiczling P, Lowe P, Pigeolet E, Luedicke F, Balser S, Krzyzanski W. Population pharmacokinetic modelling of filgrastim in healthy adults following intravenous and subcutaneous administrations. Clin Pharmacokinet, 2009.
78. Wichmann H-E, Loeffler M. Mathematical Modeling of Cell Proliferation: Stem Cell Regulation in Hemopoiesis. Boca Raton: CRC Press, 1985.
79. Schmitz S, Franke H, Loeffler M, Wichmann HE, Diehl V. Model analysis of the contrasting effects of GM-CSF and G-CSF treatment on peripheral blood neutrophils observed in three patients with childhood-onset cyclic neutropenia. Br J Haematol, 1995.
80. Dale DC, Fauci AS, Wolff SM. Alternate-day prednisone. N Engl J Med, 1974.

Figure 1. Basic structure of the cell-kinetic model of granulopoiesis. Major model compartments describing granulopoietic cell stages are S (pluripotent stem cells), CG (colony forming units of granulocytes and macrophages), PGB (proliferating granulopoietic blasts), MGB (maturing granulopoietic blasts, subdivided into metamyelocytes (G4), banded granulocytes (G5) and segmented granulocytes (G6)) and GRA (circulating granulocytes). The system is regulated by feedback loops. A major loop is mediated by G-CSF, which is produced endogenously but can also be applied subcutaneously. Chemotherapy (CX) induces acute cell loss. The model is essentially the same as in [^40].

Figure 2. Model structure of the pharmacokinetic model of G-CSF. The major compartments, cytokine fluxes and regulations are presented (MM = Michaelis-Menten kinetics). The subcutaneous compartment is divided into two subcompartments with first-order transition.

Figure 3. Estimated bioavailability of subcutaneously injected Filgrastim or Pegfilgrastim based on systematic model simulations. Bioavailability was estimated as the amount of G-CSF absorbed by the central compartment relative to the total amount of subcutaneously injected G-CSF (x-axis). Due to the modelled loss in the subcutaneous tissue, the bioavailability is dose-dependent. Circles indicate estimates for pharmaceutically available doses of 300 μg and 480 μg of Filgrastim or 6000 μg of Pegfilgrastim, respectively [^12,^67].

Figure 4. Comparison of model and data for G-CSF applications. Comparison of model and selected datasets of single Filgrastim injections (scenario A), multiple Filgrastim injections (scenario B) and single Pegfilgrastim injections of different doses (scenarios C and D). For each scenario, we present the time courses of ANC and G-CSF, respectively. A complete list of scenarios can be found in Additional file 1.

Figure 5. Regulation functions of Filgrastim and Pegfilgrastim. Comparison of Filgrastim and Pegfilgrastim with respect to the regulation function of the amplification in PGB. The circle marks the value under steady-state conditions.

Figure 6. Effect of G-CSF delay. Effect of the delay parameter of G-CSF action on cell counts of specific cell compartments.

Figure 7. Effect of chemotherapy delay. Effect of the delay parameter of chemotherapy, studied for the CHOP regimen.
Figure 8. Comparison of model and data for chemotherapy scenarios. Comparison of model and data for the CHOP-21 regimen and the time-intensified CHOP-14 regimen supported by various Filgrastim and Pegfilgrastim schedules. The solid line is the model prediction. Dots are patient medians at corresponding time points and the grey lines mark the interquartile range of the data. All scenarios are based on the same model parameters.

Figure 9. Ratio of granulocytes and leukocytes. Based on model simulations, the ratio of granulocytes and leukocytes under CHOP-14 chemotherapy is predicted.

Figure 10. Validation of model. Validation of the model on the basis of two datasets not used for model fitting. The solid line is the model prediction. Dots and dotted lines are the data and the interpolated data, respectively.

Table 1. Major compartments of the pharmacokinetic model and corresponding regulations

Compartment | Regulation | Regulator
S | proliferative fraction | bone marrow content
S | self-renewal probability | bone marrow content
CG | proliferative fraction | bone marrow content
CG | amplification | G-CSF
CG | transition time | G-CSF
PGB | amplification | G-CSF
PGB | transition time | G-CSF
MGB | post-mitotic amplification | G-CSF
MGB | transition time | G-CSF
GRA | turn-over | -
G-CSF | endogenous production | late bone marrow cell stages
G-CSF | specific degradation | GRA
G-CSF | external applications | -

Most of these compartments are mediated by G-CSF. A complete set of model equations can be found in Additional file 1.

Table 2. Data sets utilized for the establishment and validation of the granulopoiesis model

Type of data | Disease | G-CSF schedules | Chemotherapy | References
Phase I studies with Filgrastim | none | single application 3, 5, 10 μg/kg | none | [^31]
Phase I studies with Filgrastim | none | single application 5, 10 μg/kg | none | [^60]
Phase I studies with Filgrastim | none | 10 applications 75, 150, 300, 600 μg | none | [^61]
Phase I studies with Filgrastim | none | 14 applications 30, 300 μg | none | [^62]
Phase I studies with Filgrastim | none | single application 4, 8 μg/kg | none | [^63]
Phase I studies with Pegfilgrastim | none | 30, 60, 100, 300 μg/kg | none | [^50]
Phase II studies with Pegfilgrastim | LuCa | 30, 100, 300 μg/kg | none | [^64]
Phase II studies with Pegfilgrastim | NHL | 6000 μg, day 2 | CHOP | [^65]
Phase III studies with CX and w/wo Filgrastim | NHL | no G-CSF | CHOP-21* | [^12]
Phase III studies with CX and w/wo Filgrastim | NHL | 480 μg, day 4–13 | CHOP-14* | [^12]
Phase III studies with CX and w/wo Filgrastim | NHL | 480 μg, day 6–12 | CHOP-14* | [^66]
Phase III studies with CX + Peg | NHL | 6000 μg, day 2, 4 | CHOP-14* | [^67]

Studies with access to raw data are indicated with an asterisk. CX = chemotherapy, LuCa = lung cancer, NHL = high-grade non-Hodgkin's lymphoma.

Table 3. Pharmacokinetic parameters of Filgrastim and Pegfilgrastim

Parameter | Meaning | Filgrastim | Pegfilgrastim
k_sc | subcutaneous absorption [h^-1] | 0.161 | 0.107
k_m | Michaelis-Menten constant of subcutaneous elimination [μg] | 34.7 | 5.5
v_max | maximum of subcutaneous elimination [h^-1] | 67.3 | 16.5
k_u | unspecific elimination [h^-1] | 0.441 | 0.087
k_m^GRA | Michaelis-Menten constant of specific elimination [μg] | 22.4 | 30.8
v_max^GRA | maximum of specific elimination [h^-1] | 4.77 | 5.16
k_cp | transition central to peripheral [h^-1] | 0.000 | 0.075
k_pc | transition peripheral to central [h^-1] | - | 0.548
V_D | distribution volume [l] | 1.156 | 4.091
C_cent,ref^G-CSF | reference G-CSF serum concentration [μg/l] | 0.02 (same for both) |

Compared to Pegfilgrastim, we estimated that Filgrastim is more easily absorbed from the subcutaneous compartment, has a lower bioavailability (see Figure 3) and a higher specific and unspecific elimination. Reversible binding is negligible for Filgrastim but not for Pegfilgrastim. The distribution volume is higher for Pegfilgrastim than for Filgrastim.
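As a reading aid for Table 3, the parameters suggest a central-compartment balance of roughly the following form. This is a schematic reconstruction under standard compartment-model assumptions, not the authors' exact equations (see Additional file 1):

\[ \frac{\mathrm{d}A_{\mathrm{cent}}}{\mathrm{d}t} = k_{sc}\,A_{sc} - k_u\,A_{\mathrm{cent}} - \frac{v_{\max}^{GRA}\,A_{\mathrm{cent}}}{k_m^{GRA} + A_{\mathrm{cent}}} - k_{cp}\,A_{\mathrm{cent}} + k_{pc}\,A_{\mathrm{per}}, \qquad C_{\mathrm{cent}}^{G\text{-}CSF} = A_{\mathrm{cent}}/V_D, \]

where \(A_{sc}\), \(A_{\mathrm{cent}}\) and \(A_{\mathrm{per}}\) denote the G-CSF amounts in the subcutaneous, central and peripheral compartments. In the full model the specific (Michaelis-Menten) elimination is additionally mediated by the circulating granulocyte count (GRA), as the parameter names indicate.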
Article category: Research
Keywords: Chemotherapy, Filgrastim, Granulopoiesis, Haematotoxicity, Leucopenia, Pegfilgrastim.
1. Introduction

My goal is to make this class as hands-on as possible. I could just stand at the front of the class and lecture every day, but I would find that boring and I think you would too. So, I intend to construct the class in such a way that we spend at least two class periods every week doing lab activities. (That, incidentally, is a promise to you, and one you should feel free to hold me to.)

In order to be able to work independently in lab, there are a lot of skills you need to develop first. In particular, you need to know how to make measurements of length, mass, and volume, you need to be able to record and present this information in an organized way, and you need to have some idea of how accurate your data is. In this unit, we will devote a lot of time to developing these basic skills. It may seem strange to be spending three or four weeks just learning how to measure things, but the time we invest here will pay off immensely in the rest of the year, when you find that you are able to comfortably deal with a wide variety of tasks you encounter in lab.

2. Measurement Systems

2.1. I can measure an amount using a variety of different units

Aim: Understand that you can measure something using various different units.
Do-now: "Units of liquid measure"
Agenda: 1. Go over do-now 2. Ranking these measures 3. Measuring with different units
Date: 10/20/08 (Monday)
Class notes: Measurement Introduction (pdf)

Detailed aims

By the end of this class, students will know what is meant by unit and measure when used in the scientific sense. These two are defined as follows: I measure something by seeing how many times some standard amount fits into it. That standard amount is called a unit. A measurement is written as some number of a particular unit. Students should also be able to explain why a number without a unit attached is not a measurement - that is, why saying that "I am 6.25 tall" is a meaningless statement.

Students will also recognize that there are many different units I can use to measure any given thing, and that I will end up with a different number depending on what unit is used. Thus, there are many different measurements that can represent the same amount. For example, "12 cups" or "6 pints" or "3 quarts" are all ways of representing the same amount. Once an amount has been measured in one unit, students will be able to predict whether the number will come out larger or smaller when it is measured in a different unit. Doing this requires, of course, that you know which unit is larger. The general rule is that a bigger unit can measure the same thing in a smaller number of units. This thought process will be practiced repeatedly in future classes as a check on conversions. Students will recognize measurement as a quotition, dividing some amount into pieces of a standard size.

Description of instruction

The goal in going over the do-now is to "activate prior knowledge" about these various liquid measures. We will do this by talking about various situations in which those measures are used. For example, everyone will be familiar with a gallon as the unit that milk usually comes in. For the other units, I might tell a story like: "When my family makes buttermilk pancakes, we get a quart of buttermilk from the store. We also get a pint of whipping cream and whip it up to put on top. I might put about a cup of syrup on my pancakes, if I really like syrup, and I might drink a cup of orange juice poured from a quart container.
If we want some fruit on top, we might open up a pint can of peaches..." This story - or whatever else students can remember - would be accompanied by holding up the exemplars of the various units that I have with me. By the time we're done with that, it should be really obvious to everyone what the correct size order is, but if we don't explicitly mention the order in talking about what the units are used for, we can still give students a chance to fix up their order before we share it.

Unfortunately, this is science class, so it's not enough to be convinced of the order by looking at it; we have to find a way to prove that that is the right order. ("I have a really bad sense of volume - whenever I'm putting away leftovers I end up getting out a tupperware that is either just a bit too small to hold everything, or quite a bit too large. So to me, it's just not obvious by looking that a gallon is larger than a quart - you have to prove it to me somehow.") Once we have proved that our order is right, we need to define some terms before we move on: measure and unit. Once we are clear on what is meant by that, we will try measuring something using a variety of different units. Ideally, I'd like to have something with a volume around two quarts (a 2 liter soda bottle?), and let various lab groups (or individuals, since we won't be in partner desk configuration) measure it out using a gallon, quart, pint, or cup. We could then compare the measurements that they produce. This would give fairly interesting results because of the fact that two liters is not an exact number in any of these systems.

There are two things I want students to get out of this demo. First, they should notice that we have four different measurements (½ gallon, 2 quarts, 4 pints, 8½ cups) that are all ways of measuring exactly the same amount of water. It is hard for students at first to accept the fact that all these measurements can mean the same thing. Once they understand that, the more interesting thing to note is that the larger the unit used, the smaller the number of that unit that I end up with. This is something that we will need to discuss in order to understand why, mathematically, that is the case.

Once students grasp that idea - that a bigger unit will produce a smaller number when measuring the same thing - they will be able to try several different types of problems, which will be given on the practice that we do at the end of the class and for homework:

• Predict whether the number will become smaller or larger when I measure some amount in a smaller or larger unit.
• Tell me which of two units is larger by comparing the number parts of two measurements of the same thing.
• Given two measurements, either tell me that one is definitely the larger (because it is a larger number of a larger unit) or tell me that we don't know enough yet to compare the two (because one is a larger unit but a smaller number).

This last problem type will lead directly into what we will do tomorrow. It is the most challenging because it requires students to recognize when they don't actually know enough to answer a problem.

2.2. When I measure with a smaller unit I get a bigger number

Aim: Understand that with two measurements of the same amount, the one using a smaller unit will have a larger number.
Do-now: "Bigger unit, smaller number"
Agenda: 1. Go over do-now 2. Measuring with unusual units 3. Comparing measurements
Date: 10/21/08 (Tuesday)
Class notes: Number and Units (pdf)

Detailed aims

The goal of this class is to reinforce what we did yesterday. Specifically, many students could not figure out on the homework and in class how it would be possible to compare two units without needing to look up somewhere an authoritative explanation of the size of those units. So, we will repeat all the sorts of problems we were thinking about yesterday, but using units that are clearly not part of any kind of standard.

Students will be able to explain how to compare the size of two containers by filling one and pouring into the other. They will be able to explain that if A fills B with water left over, A is bigger; that if A goes into B with space left, B is bigger; and that if A exactly fills B with nothing left over, they are the same size. They will be aware that there are also other, less certain ways to compare container size, such as timing how long it takes to fill each one or comparing their weights when full.

Students will understand that if I have already measured a container using one unit, I know that I will get a larger number when remeasuring it with a smaller unit, and a smaller number when remeasuring it with a larger unit. They will be able to explain this in ordinary language: "More of these fit in, because they're smaller" or "Because this one is bigger, it doesn't take as many to fill it up."

Students will be able to rank several unknown units in order of size, if they are told how some amount measures in each. In doing this, they will make use of (and refer back to, for proof) the idea that if I can fit more of a particular unit into something, it must be smaller.

Given two measurements in different units, where it is known which unit is larger, students will be able to say whether the two might possibly be equal. This is simply another extension of the same ideas; the two measures can only be equal if the bigger unit is with the smaller number. Hence, we have looked at the statement "If two measures are equal, the one using a bigger unit should have a smaller number" from all three possible directions: we have compared the size of the units to predict the relationship in the numbers, have compared the numbers to predict the relationship of the units, and have looked at the unit and number relationships to establish whether equality of the two measures is possible. This last skill acts as a useful check on any result of a conversion. In the case where two measures are definitely not equal, students will be able to explain which is larger and which smaller, because one has a larger number of a bigger unit and the other has a lesser number of a smaller unit.

Description of instruction

I have brought three tupperwares from home. The clear ones hold roughly 2 cups, the green ones roughly three cups, and the blue ones roughly four cups. It should be clear by observation that the blue is larger than the clear, but we will prove this by pouring water and describe in ordinary English how we established that proof. A soda bottle measures approximately 3¾ clears, and thus we can predict that it will measure a smaller number of blues. The actual measurement is 2¼ blues. We use the green unit somewhat differently, first measuring that the soda bottle is exactly 3 greens, and then using that to establish that the green is intermediate in size, larger than the clear but smaller than the blue. Time permitting, we can prove this by pouring water.
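A teacher's aside, not part of the lesson materials: the point of this demo can be sanity-checked in a few lines of code. Everything below is my own illustration (a 2-liter bottle is about 67.6 fluid ounces, and the unit sizes are the standard US ones; the tupperware sizes above were only rough anyway).

    # Measure one fixed amount with several different units. The bigger the
    # unit, the smaller the number -- which is exactly the rule of this lesson.
    AMOUNT = 67.6  # a 2-liter soda bottle, in fluid ounces
    UNITS = {"cup": 8.0, "pint": 16.0, "quart": 32.0, "gallon": 128.0}  # fl oz

    for name, size in UNITS.items():
        count = AMOUNT / size        # how many times this unit fits in
        print(f"{AMOUNT} fl oz is about {count:.2f} {name}s")
    # Prints ~8.45 cups, ~4.23 pints, ~2.11 quarts, ~0.53 gallons:
    # four different measurements, one amount of water.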
The next section of class is meant to reinforce the part of the homework that most students seemed to have struggled with: the part where we look at two measurements and try to use the relation between the units and the numbers to determine if the two measurements may be equal or if one is definitely larger. This is a much more difficult and abstract procedure to get used to, because instead of using one comparison to predict another comparison, knowing that the measures are equal, we are using the agreement of two comparisons to say whether equality is possible or not.

2.3. I can figure out how many of one unit go into another

Aim: Understand that you can measure something using various different units.
Do-now: "Coin conversions"
Agenda: 1. Go over do-now 2. Figuring out conversion rates 3. Converting liquid measures
Date: 10/22/08 (Wednesday)
Class notes: Conversion (pdf)
Cards: liquidMeasureCards.pdf (This smaller version might be easier to use at home)

Detailed aims

Students will be able to determine the conversion rates between units by measuring out one unit with another, and will understand why doing this once allows us to convert, without measuring, any measurement in one of those units into the other. We will define what it means to convert: to convert a measurement into different units means to figure out how big that amount would be in the other units, without actually measuring anything, simply by knowing how many of one unit is equal to the other. Students will know to check their work by making sure that a larger number always goes with a smaller unit. Students will at this point be thinking of conversion as a process of trading in a fixed number of one unit for a fixed number of another. They will not necessarily see this as division, but that is what tomorrow is about.

There are several specific procedures that I expect all students should be comfortable doing at this point:

• Converting a mixture of measurements in various units down into the smallest unit, so that the total can be written as one measurement.
• Converting a measurement into larger units, when it comes out to a whole number of those units.
• Starting with a large number of a small unit and converting up into larger units as much as possible, so that the total number of units used is at a minimum. This is analogous to determining digits in a place value system.
• Figuring out the conversion rate between two units, given measurements of the same amount in two different units.
• Using a conversion rate given for unfamiliar units to perform conversions.

Note what is absent from this: students are not yet being required to perform conversions that give a fractional answer. So, for example, they might be asked to convert six cups, one pint, and two quarts into gallons, but not three cups and a quart.

Description of instruction

The point in going over the do-now is not to figure out what the right answers are, but to notice what we need to know in order to get that answer. We can convert easily between different types of coins because we know how many of one coin are equal to another coin. In talking over the do-now, we will define the term conversion, and point out that if you know how many of one thing adds up to another, you can translate a measurement into different units without having to redo the measurement. Our goal today is to learn how to convert with the liquid measures that we saw yesterday. In order to do this, our first step is to figure out the conversion rates. We will do this in lab groups, in partner desk configuration.
Starting with a full gallon of water, I will have five lab groups work to determine the conversion from gallons into quarts, while the other groups watch. Then we will pass off the last full quart to another set of five lab groups and have them measure it out in pints. Finally, the last set of five groups will take the pints and measure them out into cups. We will end up knowing how many quarts I can trade in a gallon for, how many pints I can trade in a quart for, and how many cups I can trade in a pint for.

Once we have these exchange rates worked out, the "wet" part of the lab is over. We'll continue working in partners, but now we'll be using the cards instead of actual measures. We need to make sure the desks are dry, then pass out one set of cards to each lab group. We'll start with a "real" problem in which one group measures something out in cups, and we work together to convert that up into a mixture of quarts, pints, and cups; we can then have someone measure it out in that way to verify our result. This verifies that our way of converting does in fact work properly. Another quick conversion we could do for verification would be to predict how many pints are in a gallon, and verify that by demonstration. Then, groups can quickly work through individually a series of problems that require them to perform conversions by trading cards for others.

2.4. Mathematical language is used to describe converting measurements

Aim: Understand what is meant when I write that some combination of measurements is equal to another.
Do-now: "Converting liquid measures"
Agenda: 1. Go over do-now 2. Mathematical language
Date: 10/23/08 (Thursday)
Class notes: Measurement and Conversion (pdf)
Cards: liquidMeasureCards.pdf (This smaller version might be easier to use at home)

Detailed aims

In yesterday's class, I recorded some of the conversions that we had done in mathematical language, like this:

1 gallon = 4 quarts = 8 pints = 16 cups

It was clear in class, and even more clear on the homework, that students were not understanding what a sentence like this means. Someone comfortable with mathematical language reads this as, "One gallon is the same amount as four quarts, which is the same amount as eight pints, which is the same amount as sixteen cups." This person probably has a picture in their head, and can explain in words, that if the gallon were originally full of water, that same water could be poured off into exactly four quarts, and then each of those poured off into two pints, and so on.

For a student coming out of elementary school, none of this is obvious. In elementary school, math is a matter of doing calculations, not expressing relationships. So, students are familiar with equations only in the sense of "7 + 4 = 11", and as a result they tend to assume that the "=" sign means "the answer is" or "comes out to." At some point in middle school, students make the transition to being able to understand what the "=" really means, and making this transition is thought by many researchers to be one of the most important steps a child makes in going from arithmetic to algebra.

By the end of this class, students will have been exposed to the idea that a sentence written in math language need not be a "problem" to fill in, but can simply be a statement of a fact we have discovered. This, of course, is a fact that we will need to reinforce over and over again, partly by using mathematical language whenever possible to record things we have found out, and then checking understanding.
So, for example, when in the coming days we find that a tupperware holds 11 cups, we ought to write "Blue tupperware = 11 cups" and then check that students understand that this means that they hold the same amount of water: that eleven full cups of water could be poured into the tupperware and would exactly fill it.

Students will be able to translate a sentence in mathematical language into English. In particular, if the sentence involves equality between several combinations of measurements, students will be able to list off the measurements present in each group and connect the two with "is the same amount as." Students will understand that, in the particular case of liquid measures, an "=" sign means that if all the units on one side were full of water, they could be poured into the units on the other side and would exactly fit.

Description of instruction

I'm afraid this class was rather boring, because I was trying to emphasize an important point but had not thought of any way other than lecture to do it. I had originally thought that we could quickly get through the language stuff and move on to the card game, but it became clear in class that the language was a real challenge, so I ended up taking the whole class for that. The main thing that I emphasized was going over and over each problem, showing it as a sentence in English, as an expression in math, and as an operation done with liquid measures, in each case emphasizing that the things on either side of the "=" were to be grouped together, and that the "=" was establishing equality between those two groups.

2.5. Converting by trading units for others

Aim: Be able to convert among liquid measures and record mathematically what you did.
Do-now: "Converting liquid measures"
Agenda: 1. Go over do-now 2. Conversion card game
Date: 10/27/08 (Monday)
Class notes: Recording Conversion (pdf)
Cards: liquidMeasureCards.pdf (This smaller version might be easier to use at home)

Detailed aims

Students will be able to convert from any collection of units into other units by using the idea of "trading" some number of one for another. For today, they will be expected to be able to do this only when they have a handy visual representation of the units, such as the cards we will use in class and the units to color in that we have on the homework.

Students will be able to record the process of conversion using mathematical language. In particular, when writing out a mathematical sentence expressing the conversion, they will be aware that the "=" signs separate that sentence into sections, with each section listing out the combination of units present at a particular step. They will also understand that by putting an "=" between two of these sections they are saying that the collection of units in one section holds the same amount as the collection of units in the other section. They may choose to make this clearer to themselves by drawing a box around each section of the sentence. They will recognize that each new section is produced simply by pouring the water present in the previous section into different containers.

Students will not be expected, for today, to know how to figure out what order to trade units in in order to make a more complex conversion. They will either be explicitly instructed in what to trade, or will be given problems (as in the do-now) that only require one step.
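An aside for the reader, not something shown in class: the "trading" picture translates almost word for word into a short program. The names below (DOWN, trade, show) are my own, purely illustrative.

    # Trading one unit for the next smaller one, recording each stage.
    DOWN = {"gallon": ("quart", 4), "quart": ("pint", 2), "pint": ("cup", 2)}

    def trade(measure, unit):
        """Trade every `unit` in `measure` for the next smaller unit."""
        smaller, rate = DOWN[unit]
        out = dict(measure)
        out[smaller] = out.get(smaller, 0) + out.pop(unit, 0) * rate
        return out

    def show(measure):
        return " + ".join(f"{n} {u}" for u, n in measure.items() if n)

    m = {"gallon": 1, "pint": 1}
    sentence = [show(m)]
    for unit in ("gallon", "quart"):
        m = trade(m, unit)
        sentence.append(show(m))
    print(" = ".join(sentence))
    # 1 gallon + 1 pint = 1 pint + 4 quarts = 9 pints
    # Each "=" marks one trade, exactly like the boxed sections on the worksheet.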
The goal of playing the units card game is for students to reach the point where they think it is too slow and they would rather just work things out mathematically and write them down on paper. However, I am asking all students to practice using the cards, at least for the next two days, so that they will have the image of trading in units for others firmly embedded in their heads and will be able to use that idea intuitively as they are doing conversions "on paper."

Description of instruction

There is not really any new instruction going on today; this is all practice of what we learned last week. I'll be using the document camera so that it is very clear exactly what I expect to be written on the worksheet and where, and exactly how the manipulation of cards is to be done. When we are doing the card game, we will do one step at a time together as a class, then check to make sure everyone has it right; tomorrow I will be asking students to take on a bit more responsibility and actually go through an entire problem on their own.

2.6. Planning what units to trade first

Aim: Be able to plan what order to convert things in.
Do-now: "Explaining the steps of conversion"
Agenda: 1. Go over do-now 2. Conversion card game
Date: 10/28/08 (Tuesday)
Class notes: Planning Conversion (pdf)
Cards: liquidMeasureCards.pdf (This smaller version might be easier to use at home)

Detailed aims

Students will be able to read a problem calling for a conversion into particular units, and plan out what steps to take to accomplish that. The general rule is to start by converting whatever units, among what you have, are farthest away in size from what you are looking for. The big change from yesterday is that instead of telling students what to trade in at each step, I am now expecting them to take that initiative on their own. I am also expecting them to be able to work through a whole problem without direction, instead of going one step at a time.

Description of instruction

The do-now proved to be a real challenge because we haven't done exactly that task before. I did the first problem together as a class, using the markup we have developed for math sentences - that is, I put a box around each side of the equation, then circled the unit that was being traded in, and wrote under it what it was exchanged for. Most students found that once they saw the process, it was easy to duplicate.

In going over the do-now, the big point to discover was that whoever did that conversion wasted a lot of effort. They should have converted the gallons into quarts first, so that they could convert the quarts all together; instead, they ended up having to trade in quarts in two separate steps. In general, when planning what to convert first, you want to start with the unit that is furthest away, in size, from the units you want to end up with.

In doing the problems, the challenge for students was not getting the answer (in most cases), but having the discipline to correctly write out each step of the process. We have been using a particular pattern for doing this: you start by writing down what you are told you have at the beginning. Then, in each step, you first circle the units you want to trade in, then write under them what they turn into. This makes it fairly straightforward to then write an equal sign, followed by whatever units you have now, combining the units you just traded for with the other units that were already there.
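Continuing the illustrative sketch from section 2.5 (it reuses the hypothetical trade and show helpers): the planning rule, "start with the unit farthest in size from the target," simply becomes a fixed trading order, and then each unit needs to be traded exactly once.

    # Convert everything down to cups, largest (farthest-from-cup) unit first.
    ORDER = ["gallon", "quart", "pint"]

    def to_cups(measure):
        steps = [dict(measure)]
        for unit in ORDER:
            if measure.get(unit):            # only trade units we actually have
                measure = trade(measure, unit)
                steps.append(dict(measure))
        return steps

    print(" = ".join(show(s) for s in to_cups({"gallon": 1, "quart": 2, "cup": 3})))
    # 1 gallon + 2 quarts + 3 cups = 6 quarts + 3 cups = 3 cups + 12 pints = 27 cups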
We are still using a box around each part of the equation to show that those units all belong together and that the "=" sign is talking about two whole sets of units rather than just what is immediately left and right of it.

2.7. Practicing our liquid measure conversions

Aim: Become very comfortable converting liquid measures.
Do-now: "Writing in Math Language"
Agenda: 1. Go over do-now 2. Conversion card game
Date: 10/29/08 (Wednesday)
Class notes: Practicing Conversion (pdf)
Handout: Conversion Card Game (pdf)
Cards: liquidMeasureCards.pdf (This smaller version might be easier to use at home)

Detailed aims

Most students are able to do any conversion problem at this point, if they are careful and determined. The hope for today is just to practice doing this so many times that converting liquid measures will be one thing students can be absolutely confident that they will get right on the test. For those students who are already really good at this, the activity also provides one section at a higher level of difficulty, one that forces even me to have to think carefully.

Description of instruction

The do-now proved fairly simple for most students, although some are still confused by being faced with a unit (blue tupperwares) for which they don't know how big it is, and some students are still trying to turn it into a math problem - adding up the numbers somehow - instead of just directly translating the English sentence into math. However, once they saw that all they needed to write was "2 blue tupperwares = 1 gallon + 1 quart + 1 pint + 1 cup," most students seemed to see why that was the answer. Maybe the problem was just too simple, so they were trying instinctively to find a way to make it harder.

2.8. Applying our measuring skills to area

Aim: Apply what we know about units and measurement to area.
Do-now: "Measuring area"
Agenda: 1. Go over do-now 2. Area units 3. Mixing units
Date: 10/30/08 (Thursday)
Class notes: Area Measures (pdf)
Handout: areaMeasures.pdf
Manipulatives: smallArea.pdf midArea.pdf largeArea.pdf

Detailed aims

Our goal is to review two "big ideas" from the very beginning of the unit:

• I measure something by counting how many of some unit fit into it.
• If I measure something with two different units, the smaller unit will go in a bigger number of times.

By applying these rules in a completely unfamiliar context (area, rather than liquid volume), I hope to make students' understanding of these rules more connected and flexible.

Description of instruction

2.9. Applying our conversion skills to area

Aim: Apply what we know about conversion to area.
Do-now: "Figuring out the conversion rules"
Agenda: 1. Go over do-now 2. Combining into larger units 3. Breaking down into reds
Date: 10/31/08 (Friday)
Class notes: Area Conversion (pdf)
Handout: areaMeasurables.pdf
Manipulatives: smallArea.pdf midArea.pdf largeArea.pdf

Detailed aims

As in class yesterday, the goal for today is to reinforce basic ideas about measurement by applying them to a new sort of situation. We know already that we can measure area by seeing how many area units fit into something, and that we can expect that when measuring with a smaller unit we will fit in a bigger number of them. Students will recall that in order to figure out the conversion rule between two units, I need to measure the larger unit with the other. Having discovered the conversion rules for our area units, they will be able to apply those rules to convert some mixture of units either into as many larger units as possible, or into all the smallest unit.
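One more aside, not lesson material: converting "into as many larger units as possible" is a greedy, place-value-style computation. The sketch below (my own illustration) reproduces the 23-cups example that appears in the planning scraps later in these notes.

    # From a pile of cups, trade up greedily so the total container count is
    # as small as possible -- the same idea as digits in a place-value system.
    RATES = [("gallon", 16), ("quart", 4), ("pint", 2), ("cup", 1)]  # cups each

    def fewest_units(cups):
        result = {}
        for unit, size in RATES:             # try the biggest unit first
            result[unit], cups = divmod(cups, size)
        return result

    print(fewest_units(23))
    # {'gallon': 1, 'quart': 1, 'pint': 1, 'cup': 1} -- i.e. 23 cups is
    # 1 gallon + 1 quart + 1 pint + 1 cup, with no unit appearing more
    # often than its "digit" rules allow.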
The goal, again, is not to develop specific proficiency with this particular measurement system - after all, these units are ones they will never see again - but to practice the process of working out what the conversion rules are and then applying them. Students will also practice using math sentences to show the process they are going through in doing these conversions.

Description of instruction

2.10. Applying our measuring and conversion skills to length

Aim: Apply what we know about units, measurement, and conversion to length.
Date: 11/3/08 (Monday)
Class notes: Length Measures (pdf)
Manipulatives: smallLength.pdf midLength.pdf largeLength.pdf lengthMeasurables.pdf

Detailed aims

Today we will try to do, all in one day, working independently as much as possible, the whole process of figuring out a measurement system. Given a collection of units, students will be able to measure with those units, either purely with one unit or with a combination of units. They will be able to relate the size of a unit to the number of times that it goes into something, so that they have an intuitive check that says that a larger number is expected when converting or remeasuring in a smaller unit.

Students will also be able to independently discover and apply the conversion rules. This process will require them to know to measure one unit with another, to record what they discovered, and then to be able to use that fact to convert a mixture of units into either all the smallest ones or as many larger ones as possible.

Description of instruction

2.11. Reviewing what we have learned about measurement

Aim: Review what we have learned about measurement.
Date: 11/4/08 (Tuesday)
Class notes: Measurement Review (pdf)
Manipulatives: isometricAreaSmall.pdf isometricAreaMedium.pdf isometricAreaLarge.pdf isometricAreaHuge.pdf isometricAreaExamples.pdf

Detailed aims

Description of instruction

3. Scraps left over from planning

3.1. Sometimes converting requires us to use fractions

- Establish that a cup is a half pint, a pint is a half quart, a quart is a quarter gallon.
- Express amounts such as three quarts as a fraction: "three quarter gallons".
- Problem type: convert generically from any unit to any other, using fractions to express the answer when necessary.
- Work problems using cards and then demo using water; for example, 23 cups is 1 gallon, 1 quart, 1 pint, 1 cup, and is also 1 7/16 gallons; 7/16 is nearly half.
- Problem type: convert a measure with fractions in it to another measure. This is a lot trickier because you need to either know how to multiply fractions, or you need to be able to look at the denominator and recognize what unit the numerator is.
- Work problem with cards and then demo using water: how can I measure out 1 5/8 gallon using the least number of units?

3.2. What we did for volume may also be done with length

The goal of this class will be to quickly walk through the same process we just did for volume - ranking in order of size, determining conversion rates, and converting, possibly with fractions - for a completely different sort of measure: linear size. This is the "Y'all do" part of the release of responsibility model; the goal is to have the next section (area) be done independently.

3.3. Extending to different measures: Area

To really reinforce what we have been doing with volume, we could also repeat the same process with area.
This is interesting as a problem because, depending on the area measures used, you would probably have no choice but to measure some more complicated shape with a mix of measures, and yet you can still, in the end, express it using any one of those measures alone, using fractions.

3.4. Various scraps left over from planning

By the end of this unit, all students will...

1. Understand that when measuring something (length, volume, etc.) there are a number of different units that I can measure it in. A measurement is reported as some number of that unit.
2. Understand that I can measure out the same amount using different units and get different numbers.
3. Discuss which unit is bigger, and what that means about which unit will yield a bigger count.
4. Determine how many of one unit is equivalent to another, and express that relationship in words and as an equation.
5. Given words or an equation describing the relationship between two units, identify which unit is bigger and perform conversions.
6. Check your work in conversions using what you know about which unit is bigger.
7. Given many related units, express some quantity using a minimum total number of units. (In other words, no more than one cup, one pint, three quarts.)
8. Talk about what fraction one unit is of another (only unit fractions).
9. Convert correctly even when a fraction is produced.
10. Refine a measurement system by breaking a unit into halves, fourths, etc.
11. Express verbally and mathematically the relationships among fractions: a quarter is half of a half, two eighths equal one quarter (treating fractions as separate units).
12. Relate the process of generating fractions to the relationships among them.
13. Measure in inches with appropriate fractions.
14. Add power-of-two fractions by converting as you go.

3.5. More scraps: Conversions with Liquid Measures

Aim: Understand that measurements can be expressed in many different units.
Do-now: What do you know about these four measures of liquid? Quart, Gallon, Pint, Cup
Agenda: 1. Go over do-now 2. Find relationships between measures 3. Converting Game
Date: 10/20/08 (Monday)

In doing the do-now, I expect students, at the very least, to try to rank the four measures from least to greatest. Hopefully they can also mention some examples of things that come in a unit that large (gallon or quart of milk, cups of various things in baking). In going over the do-now, I will show them what each of those measures looks like, and we will all agree together on which is the biggest and which the smallest. Then, various lab groups will try to answer the question of how many quarts in a gallon, how many pints in a quart, and how many cups in a pint. By the end of class, we should be able to do a bit of converting using the cards, solving problems (for now) that just convert everything down into cups.

In the next day, we would address a few other problems:

• How do I convert straight from gallons to pints or cups, or from quarts to cups? We can build up these conversion factors based on the conversions we already know.
• How do I express a number of cups as an amount of gallons (or similar problems)? This is introducing the use of fractions in this context.
• How can I resolve a number of cups, or a mix of units, into a collection with the least total number of measures? This is the problem of getting a number with a proper digit in each place. What are the rules for when I'm done? (No more than one of cups and pints, no more than three quarts...)
• Our final goal would be to be able to look at a mixture of different measures and express it as a single fraction of a gallon. This would mean choosing which denominator to use and converting all the others. Good practice for adding fractions later on.

1. Introduce the idea of using several different units together and converting between them, in the context of American liquid measure units (cup, pint, quart, gallon).
2. Introduce the idea of using fractions in measurements. Since there are four quarts in a gallon, three quarts can be written as 3/4 gallon.
3. (Insert a day here to practice conversions and binary fractions?)
4. Introduce length measures. Have students measure lengths using strips cut from a sheet of paper. These don't correspond to a "real" measure. The goal would be to arrive at the idea that the strip may be broken down into halves, quarters, and so on to improve accuracy.
5. Reinforce what we just did by a lab measuring volumes of rice in plastic cups. How do we subdivide the cup?
6. Introduce the typical form of inch ruler, and learn to name all the fractions on it.
7. Practice measuring things in inches.
8. Review all this binary measurement systems stuff.
9. Test or quiz.

4. Fractions in measurement

4.1. Using fractions in measurement

Aim: Be able to use fractions in writing a measurement.
Do-now: "Liquid measure fractions"
Agenda: 1. Go over do-now 2. Fractions of a gallon 3. Converting with fractions 4. Combining fractions
Date: Thursday 11/6/08
Class notes: Fractions in Measurement (pdf)

Detailed aims

Students will be able to figure out what fraction one unit is of another. So, for example, recalling that there are four quarts in a gallon, they will be able to express that same idea by saying that a quart is one fourth of a gallon. They should be able to have a picture in their minds that this means that when a gallon is divided into four equal pieces, each is a quart. Once they know what unit fraction of a gallon is represented by a given unit, students will also be able to recognize that some number of that unit can also be written as a fraction. So, for example, three quarts is three fourths of a gallon, since each quart is one fourth of a gallon.

Description of instruction

The big surprise in teaching this lesson was how difficult it was for students to understand what a fraction means at all. "Three fourths of a gallon," to most students, does not evoke the picture of a gallon divided into fourths, of which three are reserved. We need to do a lot more work, then, on simply understanding fractions, before we try to do anything fancy with them.

4.2. Unit fractions

Aim: Be able to read and understand measurements with fractions.
Do-now: "Figuring out the fractions"
Agenda: 1. Go over do-now 2. Unit fractions of a rod 3. Writing fraction measurements 4. Writing mixed number measurements
Date: Friday 11/7/08
Class notes: Unit Fractions (pdf)

Detailed aims

Students will be able to use and manipulate fractions in simple ways using ordinary language. So, for example, they will be able to generate sentences like "A pint is an eighth of a gallon. So, if I have five pints of water, that is five eighths of a gallon."

Students will be able to use a sequence of relationships among units to express all the units as a fraction of the largest one. The picture I expect them to use to do this is one of breaking that unit down into smaller units, which are broken down into smaller still, keeping track of how many pieces are present at each step.
4.2. Unit fractions
Be able to read and understand measurements with fractions.
Figuring out the fractions
1. Go over do-now
2. Unit fractions of a rod
3. Writing fraction measurements
4. Writing mixed number measurements
Friday 11/7/08
Class notes: Unit Fractions (pdf)

Detailed aims
Students will be able to use and manipulate fractions in simple ways using ordinary language. So, for example, they will be able to generate sentences like "A pint is an eighth of a gallon. So, if I have five pints of water, that is five eighths of a gallon." Students will be able to use a sequence of relationships among units to express all the units as a fraction of the largest one. The picture I expect them to use to do this is one of breaking that unit down into smaller units, which are broken down into smaller still, keeping track of how many pieces are present at each step. So, for example, I get quarts by breaking down a gallon into four pieces, and I get pints by breaking each of those four pieces in half. Since this leaves me with eight pieces, a pint must be an eighth of a gallon.

Students will be able to express a measurement in smaller units as a fraction of a larger unit, by first naming that measurement in words, then substituting for the name of the unit the fraction it is of the larger unit, and then writing that in math symbols. So, a student presented with "five pints" would rewrite that as "five eighths of a gallon" and then translate that into math as "5/8 gallon."

Students will be able to combine a smaller unit with some number of a larger unit to produce a mixed number of the larger unit. So, for example, two gallons and three pints is 2 3/8 gallons. The mixed number has three different "numbers" in it, but it needs to be understood as a single quantity.

Students will also practice reading in English phrases, not only fractional measurements, but also mixed number measurements. This is an essential skill for two reasons: firstly, because we need to be able to use words to communicate the math symbols to each other in class, and secondly, because embedded in the syntax of the English phrase describing a mixed number is all the information you need to know in order to figure out what that mixed number looks like.

4.3. Constructing fractional lengths
Break down units into pieces to represent a particular mixed number or fraction.
Fractions in measurement review
1. Go over do-now
2. The parts of a mixed number
3. Constructing fractions
Thursday 11/14/08
Class notes: Constructing Fractions (pdf)

Detailed Aims
So far, we have focused on just naming with fractions an amount that is laid out in known units. So, for example, since a cup is a sixteenth of a gallon, another way to say three cups is 3/16 gallon. This kind of connection is easy to make because when I am looking at that measurement 3/16 I have in my mind the idea that the 3 is how many cups I have, and the 16 comes because a cup is a sixteenth of a gallon. The first step toward building a more robust understanding of fractions - one that does not depend on being able to name each possible fraction as a particular, smaller unit - is to be able to take a unit and break it up into arbitrary fractions. I have chosen to focus on halves, quarters, and eighths since these fractions can be constructed easily and accurately by eye.

In reading a mixed number measurement (W N/D unit), a student should be able to explain that this means you have W whole units, and then N pieces out of a unit broken into D pieces. They should be able to draw this measurement on a pictorial representation of the units by following a three-step process:
1. Fill in the first W whole units
2. Draw lines splitting the next unit into D pieces
3. Fill in the first N of those pieces

When faced with a string of boxes representing units, of which some full boxes and some fraction of a box are filled in, a student should be able to write that amount as a mixed number by reversing this process; that is:
1. Write down as the "big number" W the number of completely filled-in units
2. Count the total number of pieces the next unit is broken up into, and write that down as D
3. Count how many of those D pieces are filled in, and write that down as N
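The "name it, then substitute the unit fraction" reasoning above is easy to check with Python's exact-arithmetic fractions module. This is an illustrative sketch of my own (the helper name as_gallons is hypothetical, not from the class materials); it turns a mix of gallons and smaller units into a mixed number of gallons:

    from fractions import Fraction

    # Each unit as a fraction of a gallon: a pint is an eighth, a cup a sixteenth, etc.
    OF_A_GALLON = {"gallon": Fraction(1), "quart": Fraction(1, 4),
                   "pint": Fraction(1, 8), "cup": Fraction(1, 16)}

    def as_gallons(amounts):
        """Express e.g. {'gallon': 2, 'pint': 3} as (whole gallons, fractional part)."""
        total = sum(n * OF_A_GALLON[unit] for unit, n in amounts.items())
        whole = total.numerator // total.denominator
        return whole, total - whole

    print(as_gallons({"pint": 5}))               # (0, Fraction(5, 8))  -> 5/8 gallon
    print(as_gallons({"gallon": 2, "pint": 3}))  # (2, Fraction(3, 8))  -> 2 3/8 gallons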
(Originally, I had intended to do this lesson with paper strips instead of coloring in pictures. The paper strips are much easier to break up accurately into fractions, but the first two classes proved that the frustration of trying to get a whole class of 30 to handle paper and scissors in a controlled way while following instructions was just not worth it. In the end, I'm really glad we switched over to a picture way of showing what we are doing, since it makes it a lot easier to record how you did a problem and refer back to it to discuss it in class.)
Class notes: Constructing Fractions (pdf)

4.4. Halves, Quarters, and Eighths
Be able to read and understand measurements with fractions and mixed numbers.
Halves, quarters, and eighths
1. Go over do-now
2. Name that measurement
3. Which measurement is largest?
4. Odd one out
Friday 11/14/08
Class notes: Halves, Quarters, and Eighths (pdf)

Detailed aims
The goal of this class is just to practice what we have learned yesterday about constructing fractions. To make this more fun, I have arranged the classwork into a number of "games" in which there is some further step required after simply identifying or drawing a measurement.

4.5. Crazy Fractions
Stretch your understanding of what a mixed number is.
Bend your mind
Tuesday 11/18/08
Class notes: Crazy Fractions (pdf)

Detailed Aims
The purpose of introducing "mixed mixed numbers" is twofold:
• Often carrying a procedure one step further along the same pattern is a good way to help students finally grasp the underlying concept and structure, rather than just relying on being able to do the procedure by rote.
• This is a use of mixed numbers that looks cool and complicated, but which all students can grasp. Thus, it provides more of a sense of challenge in a class that has been largely review of very basic math concepts.
By the end of class, students will be able to draw a mixed number in which the numerator of the fraction part is another mixed number. Most students will be able to carry this one step farther, figuring out what would happen if the mixed number in the numerator had a mixed-number numerator of its own. The underlying concept here is that since a mixed number is constructed out of three numbers, and is itself regarded as a single number, I can put a mixed number into either the numerator or denominator of a fraction and still have a meaningful number. The goal is to stretch both the concept of "number" and the understanding of the procedure in which we fill in a whole number of units, split the next unit into parts, and fill in some number of those parts.

4.6. Measuring with Fractions
Use fractions to measure a length more accurately.
Review of mixed numbers
1. Go over do-now
2. Measuring dimensions
3. Finding equivalent lengths
Thursday 11/20/08
Class notes: Measuring with Fractions (pdf)

Detailed aims
I inserted this day here for two reasons. First, I wanted to put in more review of the basics - what a mixed number is - before we move on to converting mixed numbers and fractions. Secondly, I was afraid that we might have gotten so focused on drawing and reading fractions in the particular form of a filled-in portion of a bar that we might have forgotten that the goal is to show how long that measurement is. The new skill for today is that of continuing to break down a unit until you reach the point where the edge of one of the fractional pieces lines up with whatever you are trying to measure.
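That "keep halving until an edge lines up" skill is essentially a binary search for the nearest eighth. Here is a small sketch of my own (the function name nearest_eighth is hypothetical) that mimics the procedure in code, under the same assumption as the lesson that we never go finer than eighths:

    from fractions import Fraction

    def nearest_eighth(length):
        """Mimic the halving procedure: whole units plus the nearest eighth."""
        whole = int(length)
        rest = Fraction(length - whole).limit_denominator(10**6)
        lo, hi = Fraction(0), Fraction(1)
        for _ in range(3):                 # halve three times: halves, quarters, eighths
            mid = (lo + hi) / 2
            if rest < mid:
                hi = mid                   # the edge is in the lower half
            else:
                lo = mid                   # the edge is in the upper half
        frac = lo if rest - lo <= hi - rest else hi
        return whole, frac

    print(nearest_eighth(2.72))            # (2, Fraction(3, 4)) -- closest to 2 6/8 = 2 3/4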
Here, we are still restricting ourselves to halves, quarters, and eighths, since it gives us a simple procedure:
• Fill in all the full units, and stop once you get to the unit within which the line to be measured ends. Write how many full units that is.
• Cut that unit in half. If the half line lines up with the edge, stop and write 1/2
• Cut each half into quarters. If either the 1/4 or 3/4 line lines up with the edge, stop and write that.
• Cut each quarter into eighths, and write your best guess of which eighth line the edge lines up with.
Apart from this new skill, students will also hopefully come out of class remembering that our purpose in drawing fractions has been to show what length a particular mixed number measurement has.

4.7. Equivalent Fractions
Understand how fractions and mixed numbers may be converted into equivalent forms.
Finding equivalent measurements
1. Go over do-now
2. Equivalent fractions
3. Mixed numbers and improper fractions
4. Changing the denominator
Thursday 11/20/08
Class notes: Equivalent Fractions (pdf)

Detailed aims
Most of our students are familiar with the idea that a fraction can be converted to another denominator, or that improper fractions can be converted into mixed numbers and vice versa. But many of them, even the top students, make mistakes in doing this that betray a lack of understanding of the fundamentals of what a fraction is. The goal in this unit has been to give students a concrete, consistent picture of what a fraction is, so that they will be ready to use fractions in making real-world measurements. Today, we will be using that picture to explain how converting a fraction to a different form works. This will both strengthen our understanding of this model of fractions, and reinforce what students know about converting fractions.

Students will leave class considering it intuitively obvious that I can produce an equivalent fraction by multiplying the numerator and denominator by the same amount. Visually, this just means taking each of the pieces into which the unit has been broken to show a fraction, and breaking them down into more pieces. If I have a fraction bar showing 5/6, I can split each piece in half, producing 12 pieces of which 10 are colored: 10/12. If instead I were to break each piece in thirds, I would have 18 pieces of which 15 are colored. In both cases, the numerator and denominator are multiplied by the same number, because if I break all the pieces in thirds, that of course means that the colored pieces are broken into thirds.

Students will also find it intuitively obvious now what I need to do to convert a mixed number into an improper fraction or vice versa. Converting a mixed number into an improper fraction means splitting up the whole pieces each into the same number of pieces specified by the denominator. So, if I have 2 2/3, I split the two whole units into thirds (6 pieces) and thus have a total of eight thirds filled in.
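The splitting-pieces picture corresponds exactly to what Python's Fraction type does, so it makes a handy check. A small sketch of my own (nothing here is from the original class notes):

    from fractions import Fraction

    # Splitting each of 6 pieces in half multiplies top and bottom by 2: 5/6 -> 10/12.
    print((5 * 2, 6 * 2))                    # (10, 12) -- the unreduced picture
    print(Fraction(5 * 2, 6 * 2))            # Fraction(5, 6) -- Python reduces it right back

    # Mixed number 2 2/3 as an improper fraction: split 2 wholes into thirds (6 pieces), add 2.
    whole, num, den = 2, 2, 3
    print(Fraction(whole * den + num, den))  # Fraction(8, 3) -- "eight thirds"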
4.8. Fractions of an inch
Convert between equivalent combinations of fractions.
Working out equivalences
1. Go over do-now
2. Fractions of an inch
3. Writing the fraction in a different way
4. Measuring with an inch ruler
Friday 11/21/08
Class notes: Fractions of an Inch (pdf)

Detailed aims
- Recognize that the marks on an inch break it up into halves, quarters, eighths, and sixteenths
- Measure the length of a line in inches
- Measure out and draw a line of a given length

4.9. Adding fractions
Tuesday 11/25/08
Class notes: Adding Fractions (pdf)

4.10. Finding a common denominator
Wednesday 11/26/08
Class notes: Common Denominator (pdf)

4.11. Fractions of a cup
Work with fraction-of-a-cup units.
Wednesday 11/12/08
Class notes: Fractions in Measurement (pdf)
Detailed aims
Description of instructions

5. The metric measurement system

5.1. Metric measurements
Develop an understanding of the size of meters, decimeters, centimeters, and millimeters.
1. The metric system
2. Constructing the metric units
3. Measuring with metric units

5.1.1. The metric system
You are probably used to using a ruler that is marked off in inches and feet. In science, however, we more often use a system of measurement called the metric system. The name comes from the fact that all the various units of measurement are based on a single length called a meter, which is a little longer than three feet. The system of feet and inches that we use is actually, when you think about it, rather difficult to work with. There are three feet in a yard, twelve inches in a foot, and inches are usually broken up into sixteenths; this means that converting between the different sorts of measurement takes a lot of math. In contrast, the metric system was designed so that each new unit is made up of ten of the next smaller unit. A meter is broken up into ten decimeters, and each decimeter is broken up into ten centimeters. A centimeter can also be broken down into ten parts; a tenth of a centimeter is called a millimeter.

This works very well with our place value system, which also has each place worth ten times as much as the next smaller. So, for example, if something is 4.82 meters long, this means 4 meters plus 8 decimeters plus 2 centimeters. Each place refers to a particular unit. This also makes it easy to convert from one type of units to another; if I want to say how many centimeters are in 4.82 meters, I just move the centimeters over into the ones place; 4.82 meters = 482 centimeters.

5.1.2. Constructing the metric units
I've brought in to class a large number of strips of paper that are all exactly one meter long. Your job, working with your lab partner, is to figure out how to break up a one meter strip into decimeters. Remember, you will have to split it up into ten equal parts - some thinking about fractions should help you come up with a strategy for that. Once you've constructed your decimeters, you can try to construct centimeters as well. In order to do that, you will have to break up a decimeter into ten equal parts. (This should be easier to do now that you have a strategy that worked for breaking up the meter) You might even be able to split up one of your centimeters into millimeters. You can label your decimeters, centimeters, and millimeters with the abbreviations dm, cm, and mm. The abbreviation for meters is just m.

5.1.3. Measuring with metric units
Now that you have created your own set of metric units, your ticket to leave will be to use them to successfully measure some things. Remember, you have a choice of which units to use; for smaller things you might want to use centimeters, whereas for larger things you might want to use meters. Measure for me:
• The width and height of an ordinary sheet of paper
• The width and height of your desk
• The length of your pen or pencil
• The width and height of your DEAR book

5.2. Using metric units
Build intuition for the size of different metric measurements.
1. Combining metric units
2. Interpreting metric measurements
3. Switching the type of metric units
5.2.1. Combining metric units
Suppose that I start off measuring something in decimeters, and I find that it is 8 dm long, with a bit left over. Then I decide that I want to switch into centimeters to be more specific. Do I need to start over and measure out the whole thing with just centimeters? As you probably realized, I don't have to remeasure the 8 dm at all; I know that it is equal to 80 cm because each decimeter is ten centimeters. If I find that I have to add three more centimeters to get to the actual length of the object, then I have a total length of 83 cm: 80 centimeters in the form of 8 decimeters, plus 3 more individual centimeters. The tens place of my centimeters measurement should really be called the "decimeters" place, since ten centimeters is a decimeter.

In general, I could make any metric measurement at all using just nine of each type of unit: meters, decimeters, and centimeters. There are other units for the other places as well:
• 10 m = 1 decameter (dam)
• 10 decameters = 1 hectometer (hm)
• 10 hectometers = 1 kilometer (km)
• .1 meter = 1 decimeter (dm)
• .1 decimeter = 1 centimeter (cm)
• .1 centimeter = 1 millimeter (mm)
So, for example, the length of the main hallway is: 2 decameters + 9 meters + 2 decimeters + 6 centimeters. How many centimeters, in total, would that be? One way to solve this would be to think to yourself, "2 decameters is 20 meters, so that is a total of 29 meters. But 29 meters is 290 decimeters..."

5.2.2. Interpreting metric measurements
The cool thing about metric measurements is that every place value in the number can be translated directly into a particular measurement. When you see a number like "2926 cm" or "29.26 m", you can line it up like this:

                      km │ hm │ dam │ m │ dm │ cm │ mm
    2926 cm     =        │    │  2  │ 9 │ 2  │ 6  │
    29.26 m     =        │    │  2  │ 9 │ 2  │ 6  │
    0.038124 km =      0 │ 0  │  3  │ 8 │ 1  │ 2  │ 4

Notice how, in the first example, it is measured in centimeters, and the 6 is in the ones place, so I put the 6 in the centimeters bin; the second example is measured in meters, and the 9 is in the ones place (note the decimal point!), so I put the 9 in the meters bin. It turns out the two numbers are measuring exactly the same length! The third example looks like a complicated decimal number, but when I lay it out in bins, remembering that the ones place is kilometers, it becomes a lot clearer. If I wanted to, I could measure it in meters instead, moving the decimal place to after the meters bin; it would be just 38.124 m.

Even without the table above, you should be able to quickly look at any given metric number and make sense of it. So, for example:
• My pencil is 0.0041 hm long. Is that a rather long pencil, or a rather short one? The 4 here is in the decimeters place, so the pencil is about 41 cm long. That's long for a pencil, but not unreasonably so.
• I can run 12,493 mm in just three seconds. Is that very impressive, or not? The 2 is in the meters place, so this is equal to 12 m. This is comparable to the example length of the hallway that I gave you; I am saying that I can run a little less than half the length of the hallway in just three seconds. Again, this is really not all that impressive; it just sounded fast when I expressed it in millimeters.
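Since this binning is pure place-value bookkeeping, it is easy to mimic in a few lines of code. A sketch of my own (the helper name metric_bins is hypothetical), which expresses everything in millimeters and then peels off one digit per bin; it assumes classroom-scale lengths under 10 km:

    UNITS = ["km", "hm", "dam", "m", "dm", "cm", "mm"]

    def metric_bins(value, unit):
        """Lay a metric measurement out in km..mm bins, e.g. metric_bins(29.26, 'm')."""
        mm = round(value * 10 ** (6 - UNITS.index(unit)))   # everything in whole mm
        digits = f"{mm:07d}"                                # one digit per bin
        return dict(zip(UNITS, digits))

    print(metric_bins(2926, "cm"))
    # {'km': '0', 'hm': '0', 'dam': '2', 'm': '9', 'dm': '2', 'cm': '6', 'mm': '0'}
    print(metric_bins(0.038124, "km"))
    # {'km': '0', 'hm': '0', 'dam': '3', 'm': '8', 'dm': '1', 'cm': '2', 'mm': '4'}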
5.2.3. Switching the type of metric units
As we saw above, the fact that metric units are all labeled by place value also makes it very easy to convert from one type of metric units to another; it is just a matter of moving the decimal place. I can do this very quickly, and I don't even need to put the number in bins first. Suppose that I want to convert 309.2 cm into meters: the 9 is in the ones place, which is centimeters, so the meters place is two places to its left, and moving the decimal point there gives 309.2 cm = 3.092 m. The trick here is that if you first identify what unit the ones place is, then identify the places around it using the relationships between the metric units, then you can easily convert into any other sort of units just by moving the decimal point to after the proper place.

The one possible point of confusion is what to do if the place value you want doesn't show up in the number you are trying to convert. You can fix this by simply adding zeros to the number. For example: The classroom is 5.73 m long. How long is that in millimeters? If the 5 is the meters place, then the smallest place I have is centimeters (where the 3 is). That's no problem; I'll just add an extra zero to get 5.730 m, with the 0 in the mm place, and then move the decimal point to get 5.730 m = 5730 mm.

I can add zeros in the other direction as well: The classroom is 5.73 m long. How long is that in hectometers? If the 5 is the meters place, then I need to add two zeros to the left of it in order to have one in the hectometers place. My number is then 005.73 m. I move the decimal place to after the 0 in the hectometers place, and then I have 0.0573 hm.
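The decimal-point-moving rule amounts to multiplying by a power of ten determined by how many places apart the two units are. A quick sketch of my own (the function name convert is hypothetical); results are exact up to ordinary floating-point rounding:

    UNITS = ["km", "hm", "dam", "m", "dm", "cm", "mm"]   # each is 10x the next

    def convert(value, unit_from, unit_to):
        """Move the decimal point: e.g. convert(309.2, 'cm', 'm') -> 3.092."""
        shift = UNITS.index(unit_to) - UNITS.index(unit_from)
        return value * 10 ** shift

    print(convert(309.2, "cm", "m"))   # 3.092
    print(convert(5.73, "m", "mm"))    # 5730.0
    print(convert(5.73, "m", "hm"))    # 0.0573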
5.3. Metric Units Checkpoint
Check that everyone has a sense for the size of metric measurements and can interpret and convert measurements using place value.
1. What is a "checkpoint?"
2. Metric Checkpoint
In order to move on to the next point in this unit, we need to make sure that everyone:
• Has a good sense of how big centimeters, decimeters, and meters are
• Can measure an edge of something in metric by lining up units of different types, and express that measurement as a single number with the correct label
• Can lay out units to show any length, given as a number
• Can identify which place in a number is which metric unit, based on what the number as a whole is labeled as
• Can convert a number from one set of metric units to another by using their understanding of place values
In order to give everyone a chance to practice this and get immediate feedback on how you're doing, we'll be working on computers. So, as soon as you are done with the do-now, pack up your metric measures and get ready to head out to the computer lab.

6. How accurately can you measure?
Our goal for the last section of this unit was to learn about the metric measurement system, and to understand how place value makes it very easy to work with metric measurements. The measures that you made were useful as a way to understand how measurement works, but of course most of the time when you are measuring, you will use a ruler. The ruler-reading skills that we will develop in this section will, as you will soon discover, be useful for any sort of measurement device you may have cause to use.

6.1. Estimating to a tenth
Estimate accurately to within a tenth of what is marked on a ruler.
1. The difficulties of measuring
2. Estimating to a tenth

6.1.1. The difficulties of measurement
The first thing that we did in class was to make a table of the perimeter and area that different people found for the do-now. We then tried to figure out, based on that table, exactly how many different shapes of do-nows we had in the classroom. The interesting thing here is that often two people would have numbers that were very close, but not exactly the same, and so we had to decide: did those two people really have two different shapes, or did they just measure a little bit differently?

The rulers I gave you were marked out in centimeters, with no more accurate markings in between. One of the dimensions of your rectangle was very close to being exactly on one of the centimeter lines, but the other dimension was somewhere in between two lines. That left you with a difficult choice. If the length is about halfway between 12 and 13, can I write down that it is 12.5, even though there is no 12.5 mark on the ruler? Or should I instead round it to 12 or 13, whichever seems closer? Because different people made different choices about whether to round, or estimated differently when a number was in between, we had to decide that measurements that were fairly close to each other were probably actually the same. Or, perhaps a better way of saying this is that, given the amount of inaccuracy in how we measured, if two numbers were close enough together we could not with any confidence say that they were actually measurements of different things.

We decided that there were three groupings of area measurements that were far enough away from each other to be distinguishable, and three groupings of perimeter measurements that fit the same criteria. Then, by looking at the different combinations of area and perimeter measurements, we decided that there were actually only four distinct shapes. This activity demonstrates some of the complications that arise as soon as you need to measure something in the real world. We've seen that even in a very simple task like finding the dimensions of a rectangle, we have to struggle with the question of how to estimate when a measurement doesn't exactly line up with a mark on your ruler. We had to be aware of the fact that there are always little inaccuracies when someone makes a measurement, and that therefore it is not unreasonable for two people to measure different lengths for the same thing. Because we will be doing a lot of lab activities that require you to measure things, we really need to figure out good solutions to these two problems: how to estimate consistently, and how to deal with inaccuracy in measurement. We'll tackle the first issue today, and come back to the other issue later on.

6.1.2. Estimating to a tenth
With a few tricks of the eye, and a lot of practice, it should be possible to estimate the value of any measurement to within a tenth of what is actually marked on the ruler. So, for example, once you learn these tricks, you will be able to confidently report a length of 3.6 cm or 4.3 cm using the rulers you have today, even though the marks go from 3 to 4 to 5 with no marks in between.

Suppose that you are trying to identify the point marked with an arrow in the picture to the right. The first step in doing this is to picture where the midpoint is in that segment. I have shown this by drawing a small blue line in the middle. Dividing a line exactly in two is something that your eyes are particularly good at doing, since you can compare visually to see if one side is bigger than the other. Once you can picture where the middle of the space is, it turns out to be fairly easy to decide the exact decimal point value of the mark. If the mark is on one of the lines, or at the midpoint, it is clear that it is at .0 or .5. If it is close to the ends or the midpoint, it will be one of .1, .4, .6, or .9. If it is out in the middle of either the first half or the second half, ask yourself: is it slightly closer to the middle, or to the end? That will tell you if it is .2, .3, .7, or .8.
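The eye procedure above is really just rounding to the nearest tenth, made by a short chain of visual comparisons. A caricature of my own in code (the name estimate_tenths is hypothetical); it takes the true fractional position between two marks and reports the tenth the procedure would settle on:

    def estimate_tenths(position):
        """Round a position between two ruler marks (0.0-1.0) to the nearest tenth,
        the way the midpoint trick does: place it relative to the ends and the middle."""
        return round(position * 10) / 10

    # A point 64% of the way from the 3 cm mark to the 4 cm mark reads as 3.6 cm.
    print(3 + estimate_tenths(0.64))   # 3.6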
The practice that we did to learn how to interpret the different place values of metric measurements will come in handy to you here. For example, if I am trying to estimate to within a tenth for a centimeter measurement, then I am estimating millimeters.

6.2. Place value and measurement
Big idea: I can measure one place beyond what is marked, no matter whether that is the 1's or the 1000's or the .01's.
Computer activity for most of class.

6.3. Measurement and accuracy
Big idea: Using +/- notation; error is .1 of mark size

7. Using a ruler with any mark spacing
At this point, we know how to read any sort of ruler, provided that it is marked off in some power of ten. Unfortunately, in the real world we can't always count on that being the case.

7.1. Using a ruler with any mark spacing
This takes quite a lot of care to teach and learn. The idea is to always be saying, "This is 4.3 marks past the 20, and each mark is 4, so..."

7.2. Mark spacing practice
Computer activity takes up most of class

7.3. Reporting accuracy of any ruler
Some sort of pop quiz or other activity to serve as a checkpoint for the rulers unit. The new type of interesting question we can introduce at this point is whether two measurements agree.

7.4. Adding and averaging uncertain numbers

8. Using common lab measurement devices
8.1. Graduated cylinders
8.2. Conservation of volume
8.3. Volume of sand and gravel
8.4. Data tables
8.5. Three-beam balance
8.6. Conservation of mass
8.7. Experimental evidence of mass conservation
8.8. Assessing the accuracy of a measurement device
Lab design activity: give each group a stopwatch, and ask them to design an experiment to decide what kind of +/- ought to be used when measuring times with that stopwatch.

9. Review
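To close the loop on section 7.1 above: the "marks past a label" reading is just label + marks × spacing, which a one-line helper of my own devising (read_ruler, hypothetical) makes concrete:

    def read_ruler(label, marks_past, mark_value):
        """Reading = labeled value + (number of marks past it) x (value per mark)."""
        return label + marks_past * mark_value

    # "This is 4.3 marks past the 20, and each mark is 4, so..."
    print(read_ruler(20, 4.3, 4))   # 37.2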
{"url":"http://www.zahniser.net/~russell/science08/index.php?title=Measurement","timestamp":"2014-04-18T08:31:08Z","content_type":null,"content_length":"84375","record_id":"<urn:uuid:45178d21-9465-4e58-8f34-d0f6fa5b36b7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Notable Properties of Specific Numbers

These are some numbers with notable properties. (Most of the less notable properties are listed here.) Other people have compiled similar lists, but this is my list: it includes the numbers that I think are important (-:

A few rules I used in this list:

Everything can be understood by a typical undergraduate college student.

If multiple numbers have a shared property, that property is described under one "representative" number with that property. I try to choose the smallest representative that is not also cited for another property.

When a given number has more than one type of property, the properties are listed in this order:
1. Purely mathematical properties unrelated to the use of base 10 (example: 137 is prime.)
2. Base-10-specific mathematical properties (example: 137 is prime; remove the "1": 37 is also prime; remove the "3": 7 is also prime)
3. Things related to the physical world but outside human culture (example: 137 is close to the reciprocal of the fine-structure constant, once thought to be exact but later found to be closer to 137.036)
4. All other properties (example: 137 has often been given a somewhat mystical significance due to its proximity to the fine-structure constant, most famously by Eddington)

Due to blatant personal bias, I only give one entry each to complex, imaginary, negative numbers and zero, devoting all the rest (27 pages) to positive real numbers. I also have a bit of an integer bias but that hasn't had such a severe effect. A little more about complex numbers, quaternions and so on, is here.

This page is meant to counteract the forces of Munafo's Law of Mathematical Discourse. If you see room for improvement, let me know!

(1+i)/√2 = 0.707106... + 0.707106...i

One of the square roots of i. When I was about 12 years old, my step-brother gave me a question to pass the time: If i is the square root of -1, what is the square root of i? I had already seen a drawing of the complex plane, so I used it to look for useful patterns and noticed pretty quickly that the powers of i go in a circle. I estimated the square root of i to be about 0.7 + 0.7i. I can't remember why I didn't get the exact answer: either I didn't know trigonometry or the Pythagorean theorem, or how to solve multivariable equations, or perhaps was just tired of doing maths (I had clearly hit on Euler's formula and there's a good chance that contemplating the powers of 1+i would have led me all the way through base-i logarithms and De Moivre's formula to the complex exponential function). But you don't need that to find the square root of i. All you need to do is treat i as some kind of unknown value with the special property that any i^2 can be changed into a -1. You also need the idea of solving equations with coefficients and variables, and the square root of i is something of the form "a+bi". Then you can find the square root of i by solving the equation:

    (a+bi)^2 = i

Expand the (a+bi)^2 in the normal way to get a^2 + 2abi + b^2i^2, and then change the i^2 to -1:

    a^2 + 2abi - b^2 = i

Then just put the real parts together:

    (a^2-b^2) + 2abi = i

Since the real coordinate of the left side has to be equal to the real coordinate of the right, and likewise for the imaginary coordinates, we have two simultaneous equations in two variables:

    a^2-b^2 = 0
    2ab = 1

From the first equation a^2-b^2 = 0, we get a=b; substituting this into the other equation we get 2a^2 = 1, and a=±1/√2, and this is also the value of b. Thus, the original desired square root of i is a+bi = (1+i)/√2 (or the negative of this).

(This is the only complex number with its own entry in this collection, mainly because it's the only one I've had much interest in; see the "blatant personal bias" note above :-)
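A quick numeric sanity check of this result (my own addition, not from the original page), using Python's built-in complex type:

    import cmath

    r = (1 + 1j) / cmath.sqrt(2)
    print(r * r)           # (0+1j) up to rounding -- squaring gives back i
    print(cmath.sqrt(1j))  # (0.7071067811865476+0.7071067811865475j)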
The unit of imaginary numbers, and one of the square roots of -1. (This is the only imaginary number with its own entry in this collection, mainly because it stands out way above the rest in notability. In addition, non-real numbers don't seem to interest me.)

The "first" negative number, unless you define "first" to be "lowest". (This is the only negative number with its own entry in this collection, mainly because negative numbers do not interest me much. I suppose this is because I still think of numbers in terms of counting things like "the 27 sheep on that hill" or "the 40320 permutations of the Loughborough tower bells".)

The word "zero" is the only number name in English that can be traced back to Arabic (صِفر ʂifr "nothing", "cipher"; which became zefiro in Italian, later contracted by removing the fi). The word came with the symbol, at around the same time the western Arabic numerals came to Europe.^44,^105 The practice of using a symbol to hold the place of another digit when there is no value in that place (such as the 0 in 107 indicating there are no 10's) goes back to 5^th-century India, where it was called shunya or Śūnyatā^107. (This is the only zero number with its own entry in this collection, mainly because a field can have only one additive identity.)

This is the Planck time in seconds; it is related to quantum mechanics. According to the Wikipedia article Planck time, "Within the framework of the laws of physics as we understand them today, for times less than one Planck time apart, we can neither measure nor detect any change". One could think of it as "the shortest measurable period of time", and for any purpose within the real world (if one believes in Quantum mechanics), any two events that are separated by less than this amount of time can be considered simultaneous. It takes light (traveling at the speed of light) this long to travel one Planck length unit, which itself is much smaller than a proton, electron or any particle whose size is known. See also 1.416833(85)×10^32.

This is the Planck length in meters; it is related to quantum mechanics. The best interpretation for most people is that the Planck length is the smallest measurable length, or the smallest length that has any relevance to events that we can observe. This uses the CODATA 2010 value^50. See also 5.390×10^-44 and 299792458.

The "reduced" Planck constant in joule-seconds, from CODATA 2010 values^50.

This is the Planck constant in joule-seconds, from CODATA 2010 values^50. This gives the proportion between the energy of a photon and its wavelength.

The mass of an electron in kilograms, from CODATA 2010 values^50.

The mass of a proton in kilograms, from CODATA 2010 values^50.

The mass of a neutron in kilograms, from CODATA 2010 values^50.

The approximate time (in seconds) it takes light to traverse the width of a proton.

The quantum of electric charge in coulombs, from CODATA 2010 values^50. Protons, electrons and quarks all have charges that are a (positive or negative) integer multiple of this value.
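Since the Planck constant entry above mentions the energy/wavelength proportion, here is a tiny illustration of it (mine, not the page's): E = hc/λ for green light, using the CODATA 2010 value of h that the page cites.

    h = 6.62606957e-34      # Planck constant, J*s (CODATA 2010)
    c = 299792458           # speed of light, m/s (exact by definition)

    wavelength = 500e-9     # 500 nm, green light
    E = h * c / wavelength
    print(E)                # ~3.97e-19 J per photon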
The elementary charge or "unit charge", the charge of an electron in coulombs, from CODATA 2010 values^50. This is no longer considered the smallest quantum of charge, now that matter is known to be composed largely of quarks which have charges in multiples of a quantum that is exactly 1/3 this value.

Approximate "size" of a proton^71, in meters (based on its "charge radius" of 0.875 femtometers). "Size" is a pretty vague concept for particles, and different definitions are needed for different problems. See 10^40.

The vacuum permittivity constant in farads per meter. In older times this was called the "permittivity of free space". Due to a combination of standard definitions, notably the exact definition of the speed of light, this constant is exactly equal to 10^7/(4 π 299792458^2).

The gravitational constant in cubic meters per kilogram second squared, from CODATA 2010 values^50. This is one of the most important physical constants in physics, notably cosmology and efforts towards unifying relativity with quantum mechanics. It is also one of the most difficult constants to measure.

The fine-structure constant, as given by CODATA 2010 (see ^50). The "(24)" is the error range. See the 137.035... page for history and details.

There are a few "coincidences" regarding multiples of 1/127:
gamma = hm, no: e/π = 0.865255... ≈ 110/127 = 0.866141...
√3 = 1.732050... ≈ 220/127 = 1.732283...
π = 3.141592... ≈ 399/127 = 3.141732...
√62 = 7.874007... ≈ 1000/127 = 7.874015...
e^π = 23.140692... ≈ 2939/127 = 23.141732...
There are a few more for 1/7. The √62 coincidence is discussed in the √62 entry, and the π and e^π ones go together (see e^π).

1/100, or "one percent".

(eccentricity of Moon's orbit) Mean eccentricity of the Moon's orbit: the average variation in the distance of the Moon at perigee (closest point to the Earth) and apogee. Due to the influence of the Sun's gravity the actual eccentricity varies a large amount, going as low as about 0.047 and as high as about 0.070; also the ellipse precesses a full circle every 9 years (see 27.554549878). The eccentricity is greatest when the perigee and apogee coincide with new and full moon. At such times the Moon's distance varies by a total of 14%, and its apparent size (area in sky) varies by 30% when the size at apogee is compared to the size at perigee. This means that the brightness of the full moon varies by 30% over the course of the year. In 2004 the brightest full moon was the one on July 2^nd; due to the orbit's precession the brightest full moon in 2006 was a couple months later, Oct 6^th. This change in size is a little too small for people to notice from casual observation (except in solar eclipses, when the Moon sometimes covers the whole sun but at other times produces an annular eclipse). But the eccentricity is large enough to cause major differences in the Moon's speed moving through the sky from one day to the next. When the Moon is near perigee it can move as much as 16.5 degrees in a day; when near apogee it moves only 12 degrees; the mean is 13.2. The cumulative effect of this is that the moon can appear as much as 22 degrees to the east or west of where it would be if the orbit were circular, enough to cause the phases to happen as much as 1.6 days ahead of or behind the prediction made from an ideal circular orbit. It also affects the libration (the apparent "wobbling" of the Moon that enables us to see a little bit of the far side of the moon depending on when you look).

This is the lowest value of z for which the infinite power tower z^z^z^... converges to a finite value. (The highest such value is e^(1/e) = 1.444667...; see that entry for more.) See also 0.692200....
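The convergence claim is easy to watch numerically. A sketch of my own: iterate x → z^x, the way the infinite tower is built; at the low endpoint z = e^(-e) ≈ 0.0659880 the iterates (slowly) settle down to 1/e, while for any smaller z they end up oscillating between two values instead.

    import math

    def tower(z, steps=2000):
        """Iterate x -> z**x, the way the infinite power tower z^z^z^... is built."""
        x = z
        for _ in range(steps):
            x = z ** x
        return x

    z = math.exp(-math.e)       # 0.0659880..., the lower convergence endpoint
    print(tower(z))             # slowly approaches 1/e = 0.3678794...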
The value of the Riemann Zeta function with argument of -1 is -1/12. As described by John Baez^100:

The numbers 12 and 24 play a central role in mathematics thanks to a series of "coincidences" that is just beginning to be understood. One of the first hints of this fact was Euler's bizarre "proof" 1 + 2 + 3 + 4 + ... = -1/12 which he obtained before Abel declared that "divergent series are the invention of the devil". Euler's formula can now be understood rigorously in terms of the Riemann zeta function, and in physics it explains why bosonic strings work best in 26=24+2 dimensions.

Baez, at the end of his "24" lecture, indicates that the significance of 24 is connected to the fact that there are two ways to construct a lattice on the plane with rotational symmetry: one with 4-fold rotational symmetry and another with 6-fold rotational symmetry, and 4×6=24. A connection between zeta(-1)=-1/12 and symmetry of the plane makes more sense in light of how the Zeta function is computed for general complex arguments. Also, the least common multiple of 4 and 6 is 12. See also 1.202056... and 1.644934....

The fraction 1/7 is the simplest example of a fraction with a repeating decimal that has an interesting pattern. See the 7 article for some of its interesting properties. Reader C. Lucian points out that many of the well-known constants can be approximated by multiples of 1/7:
gamma = 0.5772156... ≈ 4/7 = 0.571428...
e/π = 0.865255... ≈ 6/7 = 0.857142...
√2 = 1.414213... ≈ 10/7 = 1.428571...
√3 = 1.732050... ≈ 12/7 = 1.714285...
e = 2.7182818... ≈ 19/7 = 2.714285...
π = 3.1415926... ≈ 22/7 = 3.142857...
e^π = 23.140692... ≈ 162/7 = 23.142857...
These are mostly all coincidences without any other explanation, except as noted in the entries for √2 and e^π. See also 1/127.

A reader[196] suggested to me the idea that some people might define "zillion" as "a 1 followed by a zillion zeros". This is kind of like the definition of googolplex but contradicts itself, in that no matter what value you pick for X, 10^X is bigger than X. However, this is actually only true if we limit X to be an integer (or a real number). If X is allowed to be a complex number, then the equation 10^X=X has infinitely many solutions. Using Wolfram Alpha[208], put in "10^x=x" and you will get:

    x ≈ -0.434294481903251827651 W[n](-2.30258509299404568402)

with a note describing W[k] as the "product log function", which is related to the Lambert W function (see 2.50618...). This function is also available in Wolfram Alpha (or in Mathematica) using the name "ProductLog[k, x]" where k is any integer and x is the argument. So if we put in "-0.434294481903251827651 * ProductLog[1, -2.30258509299404568402]", we get:

    0.529480508259063653364... - 3.34271620208278281864... i

Finally, put in "10^(0.529480508259063653364 - 3.34271620208278281864 * i)" and get:

    0.52948050825906365335... - 3.3427162020827828186... i

If we used -2 as the initial argument of ProductLog[], we get 0.5294805+3.342716i, and in general all the solutions occur as complex conjugate pairs. Other solutions include x = -0.119194...±0.750583...i and x = 0.787783...±6.083768...i.

In light of the fact that the -illion numbers are all powers of 1000, another reader suggested[200] that one should do the above starting with 10^(3X+3)=X. This leads to similar results, with one of the first roots being:

    -0.88063650680345718868... - 2.10395020077170002545... i
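The same roots can be checked outside Wolfram Alpha; scipy's lambertw exposes the branches directly. A sketch of mine, assuming scipy is available; the algebra is 10^x = x ⟹ x = -W_k(-ln 10)/ln 10, one solution per branch k:

    import numpy as np
    from scipy.special import lambertw

    ln10 = np.log(10)
    x = -lambertw(-ln10, k=1) / ln10
    print(x)            # (0.5294805082590637-3.342716202082782j)
    print(10 ** x)      # the same value back, confirming 10^x = x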
The first fraction in Conway's FRACTRAN program ([146] page 147) that finds all the prime numbers. The complete program is 17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/14, 15/2, 55/1. To "run" the program: starting with X=2, find the first fraction N/D in the sequence for which XN/D is an integer. Use this value XN/D as the new value of X, then repeat. Every time X is set to a power of 2, you've found a prime number, and they will occur in sequence: 2^2, 2^3, 2^5, 2^7, 2^11 and so on. It's not very efficient though: it takes 19 steps to find the first prime, 69 for the second, then 281, 710, 2375 ... (Sloane's A7547).
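Here is a minimal FRACTRAN interpreter to watch this happen (my own sketch; the function name fractran_primes is hypothetical). It runs the program above and records each prime as X hits a power of 2:

    from fractions import Fraction

    PROGRAM = [Fraction(n, d) for n, d in
               [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
                (77, 19), (1, 17), (11, 13), (13, 11), (15, 14), (15, 2), (55, 1)]]

    def fractran_primes(steps=300000):
        x = 2
        found = []
        for _ in range(steps):
            for f in PROGRAM:
                if (x * f).denominator == 1:     # first fraction giving an integer
                    x = int(x * f)
                    break
            if x & (x - 1) == 0:                 # x is a power of 2: its exponent is prime
                found.append(x.bit_length() - 1)
        return found

    print(fractran_primes())   # [2, 3, 5, 7, 11, ...]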
0.20787957635... = e^(-π/2) = i^i

This is e^(-π/2), which is also equal to i^i. (Because e^(ix) = cos(x) + i sin(x), e^(iπ/2) = i, and therefore i^i = (e^(iπ/2))^i = e^(i^2 π/2) = e^(-π/2).)

0.288788095086602421278899721929... = 1/2 × 3/4 × 7/8 × 15/16 × 31/32 × ... × (1-2^-N) × ...

This is an infinite product of (1-2^-N) for all N. This is also the product of (1-x^N) with x=1/2. Euler showed that in the general case, this infinite product can be reduced to the much easier-to-calculate infinite sum 1 - x - x^2 + x^5 + x^7 - x^12 - x^15 + x^22 + x^26 - x^35 - x^40 + ... where the exponents are the pentagonal numbers N(3N-1)/2 (for both positive and negative N), Sloane's A1318.^30

0.329239474231204... = acosh(sqrt(2+sqrt(2+4))/2) = ln(2+√3)/4

This is Gottfried Helms' Lucas-Lehmer constant "LucLeh"; see 1.38991066352414... for more.

If you take a string of 1's and 0's and follow it by its complement (the same string with 1's switched to 0's and vice versa) you get a string twice as long. If you repeat the process forever (starting with 0 as the initial string) you get the sequence, and if you make this a binary fraction 0.0110100110010110...[2] the equivalent in base 10 is 0.41245403364..., and is called the Thue-Morse constant or the parity constant. Its value is given by a ratio of infinite products:

    4K = 2 - PRODUCT[2^(2^n) - 1] / PRODUCT[2^(2^n)]
       = 2 - (1 × 3 × 15 × 255 × 65535 × ...)/(2 × 4 × 16 × 256 × 65536 × ...)

The odds of losing a game of chance. Flip a coin: if you get heads, your score increases by π, if you get tails, your score diminishes by 1. Repeat as many times as you wish but if your score ever goes negative, you lose. Assuming the player keeps playing indefinitely (motivated by the temptation of getting an ever-higher score), what are the odds of losing? The answer is given by a series sum: 1/2 + 1/2^5 + 4/2^9 + 22/2^13 + 140/2^17 + 969/2^21 + 7084/2^25 + 53820/2^29 + 420732/2^34 + ... (numerators in Sloane's A181784), which adds up to 0.54364331210052407755147385529445... A more sophisticated analysis using rational numbers like 355/113 converges on the answer more quickly, giving 0.54364331210052407755147385529445... (see [186]). More on my page on sequence A181784. See also 368.

This is the Omega constant, which satisfies each of these simple equations (all equivalent):
e^x = 1/x
x = ln(1/x) = -ln(x)
e^-x = x
-x = ln(x)
x e^x = 1
x + ln(x) = 0
x^(1/x) = 1/e
x/ln(x) = -1
x^(-1/x) = (1/x)^(1/x) = e
ln(x)/-x = 1
Thus it is sort of like the golden ratio. In the above equations, if e is replaced with any number bigger than 1 (and "ln" by the corresponding logarithm), you get another "Omega" constant. For example:
if 2^x = 1/x, then x = 0.6411857445...
if π^x = 1/x, then x = 0.5393434988...
if 4^x = 1/x, then x = 1/2
if 10^x = 1/x, then x = 0.3990129782...
if 27^x = 1/x, then x = 1/3
if 10000000000^x = 1/x, then x = 1/10

(the Euler-Mascheroni constant)

This is the Euler-Mascheroni constant, commonly designated by the Greek letter gamma. It is defined in the following way. Consider the sum:

    S[n] = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... + 1/n

The sequence starts 1, 1.5, 1.833333..., 2.083333..., etc. As n approaches infinity, the sum approaches ln(n) + gamma. Here are some not-particularly-significant approximations to gamma:
1/(√π - 1/25) = 0.5772159526...
gamma = 0.5772156649...
1/(1+ 1/√10)^2 = 0.5772153925...

The golden ratio (reciprocal form): see 1.618033....

This is the lowest point in the function y = x^x. See also 1.444667....

The natural logarithm of 2. See 69.3147... and 72.

You can create a long string of 1's and 0's by using "substitution rules" and iterating from a small starting string like 0 or 1. If you use the rule:
0 → 1
1 → 10
and start with 0, you get 1, 10, 101, 10110, 10110101, 1011010110110, ... where each string is the previous one followed by the one before that (Sloane's A36299 or A61107). The limit of this is an infinite string of 1's and 0's; if you make it into a binary fraction, 0.1011010110110...[2], you get this constant (0.709803... in base 10), which is called the Rabbit Constant. It has some special relationships to the Fibonacci sequence:
• In the iteration described above, the number of digits in each string is the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, 21, ...
• Expressed as a continued fraction, the constant is 0 + 1/(2^0 + 1/(2^1 + 1/(2^1 + 1/(2^2 + 1/(2^3 + 1/(2^5 + 1/(2^8 + ...))))))) where the exponents of 2 are the Fibonacci numbers.
• If you take all the multiples of the Golden Ratio 1.618033... and round them down to integers, you get 1, 3, 4, 6, 8, 9, 11, 12, ...: These numbers tell you where the 1's in the binary fraction are.
If you leave off the first two binary digits (10) you get 110101101101011010110110101..., the bit pattern generated by a Turing machine at the end of the Turing machine Google Doodle. As a fraction (0.1101011...) it is 0.8392137714451.

Value of x such that x = cos(x), using radians as the unit of angle. You can find the value with a scientific calculator just by putting in any reasonably close number and hitting the cosine key over and over again. Here are a few more digits: 0.7390851332151606416553120876738734040134117589007574649656...^26

A fiendishly engaging approximation to the answer to the "infinite resistor network" problem in xkcd 356, which introduced the world to the sport of "nerd sniping". See ries and 0.773239....

0.7732395447351... = 4/π - 1/2

The answer to a fiendishly engaging "infinite resistor network" problem in xkcd 356, which introduced the world to the sport of "nerd sniping"^90. See also 0.772453....

This is 0.1101011011010110101101101011011010110101101101011010110110... in binary, and is the slightly different version of the Rabbit constant generated by a Turing machine Google Doodle from June 2012. More digits: 0.8392137714451652585671495977783023880500088230714420678280105786051...

Decimal value of the "regular paperfolding sequence"

    1 1 0 1 100 1 1100100 1 110110001100100 1 1101100111001000110110001100100 ...

converted to a binary fraction. This sequence of 1's and 0's gives the left and right turns as one walks along a dragon curve. It is the sum of 8^(2^k)/(2^(2^(k+2))-1) for all k≥0, a series sum that gives twice as many digits with each additional term.

The minimum value of the Gamma function, the continuous analogue of the factorial function. This is Gamma(1.461632...).

This is 1/2 of the square root of π. It is Gamma(3/2), and is sometimes also called (1/2)!, the factorial of 1/2. See also 0.906402... and 1.329340....
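A quick numeric check of these last two Gamma facts with Python's standard library (my own addition):

    import math

    print(math.gamma(1.5), math.sqrt(math.pi) / 2)   # both 0.8862269254527579...
    print(math.gamma(1.461632))                      # 0.8856031944..., the minimum value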
This is Gamma(5/4), or "the factorial of 1/4". While some Gamma function values, like 0.886226... and 1.329340..., have simple formulas involving just π to a rational power, this one is a lot more complicated. It is π to the power of 3/4, divided by (√2+^4√2), times the sum of an infinite series for an elliptic function.

Catalan's constant, which can be defined by:

    G = ∫₀¹ arctan(x)/x dx
    G = 1 - 1/3^2 + 1/5^2 - 1/7^2 + 1/9^2 - ...

If you have a 2n × 2n checkerboard and a supply of 2n^2 dominoes that are just large enough to cover two squares of the checkerboard, how many ways are there to cover the whole board with the dominoes? For large n, the answer is closely approximated by

    f(n) ≈ e^(4Gn^2/π)

This is the cube root of (^5√27 - ^5√2). Bill Gosper discovered the following identity, which is remarkable because the left side only has powers of 2 and 3, but the right side has a power of 5 in the denominator^108:

    (^5√27 - ^5√2)^(1/3) = (^5√8 ^5√9 + ^5√4 - ^5√2 ^5√27 + ^5√3) / ^3√25

or in his original form:

    (3^(3/5) - 2^(1/5))^(1/3) = (- 2^(1/5) 3^(3/5) + 2^(3/5) 3^(2/5) + 3^(1/5) + 2^(2/5)) / 5^(2/3)

See also 1.554682...

Also, check out my large numbers and integer sequences pages.
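Gosper's identity is the kind of thing best double-checked numerically; a two-line check of mine:

    lhs = (3 ** (3/5) - 2 ** (1/5)) ** (1/3)
    rhs = (-(2 ** (1/5)) * 3 ** (3/5) + 2 ** (3/5) * 3 ** (2/5)
           + 3 ** (1/5) + 2 ** (2/5)) / 5 ** (2/3)
    print(lhs, rhs)   # both print 0.9222..., agreeing to full float precision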
{"url":"http://mrob.com/pub/math/numbers.html","timestamp":"2014-04-18T00:21:35Z","content_type":null,"content_length":"52007","record_id":"<urn:uuid:7ef15128-6e4c-49d9-92f1-143d4e5649cf>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Derwood Prealgebra Tutor
Find a Derwood Prealgebra Tutor

...I have also helped many students prepare for the P/SAT, which is estimated by some to be as high as 60-70% Geometry. I am familiar with many online resources and have posted practice exams to which your child could have access. One of the highlights of my long career as a mathematician was work...
8 Subjects: including prealgebra, statistics, geometry, trigonometry

...And as a research scientist, I am a published author and have conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving. Every student has different needs so my approach is fluid.
9 Subjects: including prealgebra, calculus, physics, geometry

...My strong abilities in reading are evidenced by my score of 780 on the Critical Reading section of the SAT. I have taken numerous courses in Literature (Honors, AP, and college-level). I have also engaged with reading as Managing Editor of my college newspaper, Communications Department Intern f...
16 Subjects: including prealgebra, reading, algebra 1, French

...This approach does not promote conceptual understanding! Students need to be able to think critically and creatively when they face a new problem. They need to be challenged with rich problems, while being provided with the tools to tackle those problems with creativity and confidence.
16 Subjects: including prealgebra, English, writing, calculus

...My goal is to help students with conceptual problems so they can better understand the subject. Mathematics, Accounting, and Economics are my strong suits. I can teach High School Math and college level Accounting and Economics.
19 Subjects: including prealgebra, statistics, economics, accounting
{"url":"http://www.purplemath.com/Derwood_Prealgebra_tutors.php","timestamp":"2014-04-19T07:03:19Z","content_type":null,"content_length":"23948","record_id":"<urn:uuid:422b9c3b-eea7-4bfd-84e5-76b20cc0cba5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
West Lynn, MA Math Tutor
Find a West Lynn, MA Math Tutor

...I have tutored Middle School math, High School Math, secondary H.S. entrance exam test prep, SAT, PSAT, ACT (math and English) and SAT I and SAT II Math. I have taught middle school as well as High School. I have tutored in the above mentioned subjects as well as NCLB and MCAS prep.
19 Subjects: including prealgebra, linear algebra, algebra 1, algebra 2

...I can teach many physical techniques for yoga, including meditation and yoga poses ("asanas"). I'm especially good at guiding students in modifying asanas so they may be able to do them at the beginning, or to accommodate a challenge or disability. I can even help you stand on your head if you want! I have a strong background in SAS.
18 Subjects: including prealgebra, trigonometry, SPSS, English

...I also have an A.S. in Computer Information Systems from Holyoke Community College. I am an experienced, self-motivated, and detail-oriented administrative management and teaching professional with advanced computer proficiency in MS Word, Excel, PowerPoint, and Access. Throughout my employmen...
30 Subjects: including algebra 2, statistics, ESL/ESOL, GRE

...I hold a masters in Early Childhood Education and a PhD in Developmental Psychology. I've worked as a classroom teacher, reading specialist, literacy coach, and educational consultant for over 15 years. I have taught and mentored students at the elementary, undergraduate, and graduate levels as...
19 Subjects: including SPSS, reading, Spanish, writing

...The tutoring is done on a one to one basis. I help students with problem solving in introductory college physics. I have a bachelor's degree in Physics, and I am completing a Master's degree in applied physics. I have taken classes in differential equations mathematics.
15 Subjects: including geometry, physics, SAT math, GRE
{"url":"http://www.purplemath.com/west_lynn_ma_math_tutors.php","timestamp":"2014-04-19T09:50:35Z","content_type":null,"content_length":"24008","record_id":"<urn:uuid:4028c337-f9be-4431-b13f-eccf8fbb9c61>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
Potential energy
From New World Encyclopedia

Potential energy can be thought of as energy stored within a physical system. It is called potential energy because it has the potential to be converted into other forms of energy, particularly kinetic energy, and to do work in the process. The standard (SI) unit of measure for potential energy is the joule, the same as for work or energy in general.

There are various types of potential energy, each associated with a particular type of force. They include elastic potential energy, gravitational potential energy, electric potential energy, nuclear potential energy, intermolecular potential energy, and chemical potential energy.

The term "potential energy" was coined by William Rankine, a nineteenth-century Scottish engineer and physicist.^[1] It corresponds to energy that is stored within a system. It exists when there is a force that tends to pull an object back towards some original position when the object is displaced. This force is often called a restoring force. For example, when a spring is stretched to the left, it exerts a force to the right so as to return to its original, unstretched position. Similarly, when a weight is lifted up, the force of gravity will try to bring it back down to its original position. The initial steps of stretching the spring or lifting the weight both require energy to perform. According to the principle of conservation of energy, energy cannot be created or destroyed; hence this energy cannot disappear. Instead, it is stored as potential energy. If the spring is released or the weight is dropped, this stored energy will be converted into kinetic energy by the restoring force: elasticity in the case of the spring, and gravity in the case of the weight.

The more formal definition is that the potential energy of a system is the energy of position, that is, the energy a system is considered to have due to the positions of its components in space. For given positions of all other objects of the system, the potential energy is a function of the position of a given object.

There are a number of different types of potential energy, each associated with a particular type of force. More specifically, every conservative force gives rise to potential energy. For example, the work of elastic force is called elastic potential energy; work of gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motion of particles and the potential energy of their mutual positions.

As a general rule, the work done by a conservative force F will be

$\,W = -\Delta U$

where ΔU is the change in the potential energy associated with that particular force. The most common notations for potential energy are PE and U. Electric potential (commonly denoted with a V for voltage) is the electric potential energy per unit charge.
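As a concrete illustration of W = -ΔU (my own sketch, not from the encyclopedia article): for gravity near Earth's surface, U = mgh, so lowering a mass makes ΔU negative and the work done by gravity positive.

    g = 9.81                      # m/s^2, gravitational acceleration near Earth's surface

    def gravity_work(m, h_start, h_end):
        """Work done by gravity as mass m moves from h_start to h_end: W = -(U_end - U_start)."""
        delta_U = m * g * h_end - m * g * h_start
        return -delta_U

    print(gravity_work(2.0, 5.0, 0.0))   # 98.1 J -- dropping 2 kg by 5 m releases stored energy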
Chemical potential energy

Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. The chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. For example, when a fuel is burned the chemical energy is converted to heat, as is the case with the digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used by chemists to indicate the potential of a substance to undergo a chemical reaction.

Electrical potential energy

An object can have potential energy by virtue of its electric charge and several forces related to its presence. There are three main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy), and nuclear potential energy.

Electrostatic potential energy

In case the electric charge of an object can be assumed to be at rest, it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, in the absence of any non-electrical forces on the object. This energy is non-zero if there is another electrically charged object nearby.

The simplest example is the case of two point-like objects $A_1$ and $A_2$ with electrical charges $q_1$ and $q_2$. The work W required to move $A_1$ from an infinite distance to a distance d away from $A_2$ is given by:

$W = k\frac{q_1 q_2}{d}$

where k is Coulomb's constant, equal to $\frac{1}{4\pi\epsilon_0}$. This equation is obtained by integrating the Coulomb force between the limits of infinity and d. A related quantity called electric potential is equal to the electric potential energy of a unit charge.
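For a sense of scale (illustrative values, not from the article): two charges of $q_1 = q_2 = 1\,\mu\text{C}$ held $d = 1$ m apart have electrostatic potential energy $W = k\frac{q_1 q_2}{d} = (9 \times 10^9)\,\frac{(10^{-6})^2}{1} \approx 9 \times 10^{-3}$ J.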
Electrodynamic potential energy

In case a charged object or its constituent charged particles are not at rest, it generates a magnetic field, giving rise to yet another form of potential energy, often termed magnetic potential energy. This kind of potential energy is a result of the phenomenon of magnetism, whereby an object that is magnetic has the potential to move other similar objects. Magnetic objects are said to have some magnetic moment. Magnetic fields and their effects are best studied under electrodynamics.

Nuclear potential energy

Nuclear potential energy is the potential energy of the particles inside an atomic nucleus, some of which are indeed electrically charged. This kind of potential energy is different from the previous two kinds of electrical potential energy because in this case the charged particles are extremely close to each other. The nuclear particles are bound together not because of the Coulomb force, but due to the strong nuclear force, which binds nuclear particles more strongly and closely. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay.

Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them have less mass than if they were individually free, and this mass difference is liberated as heat and radiation in nuclear reactions (the heat and radiation carry the missing mass, but they often escape from the system, where they go unmeasured). The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space.

Thermal potential energy

The thermal energy of an object is simply the sum of the kinetic energies of the particles constituting it (which are in random motion) plus the potential energies of their displacements from their equilibrium positions as they oscillate or move around them. In the case of an ideal gas, there is no potential energy due to interactions of particles, but the kinetic energy may include a rotational part too (for multiatomic gases)—if rotational levels are excited at a given temperature T. Solar updraft towers use this kind of power.

Rest mass energy

Albert Einstein was the first to calculate the amount of work needed to accelerate a body from rest to some finite speed using his definition of relativistic momentum. To his surprise, this work contained an extra term that did not vanish as the speed of the accelerated body approached zero:

$E_0 = m c^2$

This term ($E_0$) was therefore called rest mass energy, as m is the rest mass of the body and c is the speed of light, which is constant. (The subscript zero is used here to distinguish this form of energy from the others that follow; in most other contexts, the equation is written with no subscript.) So, the rest mass energy is the amount of energy inherent in the mass when it is at rest. If the mass changes, so must its rest mass energy, which must be released or absorbed due to the law of energy conservation. Thus, this equation quantifies the equivalence of mass and energy. Due to the large numerical value of the squared speed of light, even a small amount of mass is equivalent to a very large amount of energy, namely 90 petajoules per kilogram ≈ 21 megatons of TNT per kilogram.
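The figure quoted above can be checked directly. For $m = 1$ kg, $E_0 = mc^2 = (1\ \text{kg}) \times (3 \times 10^8\ \text{m/s})^2 = 9 \times 10^{16}$ J $= 90$ petajoules, and since one megaton of TNT is about $4.2 \times 10^{15}$ J, this comes to roughly 21 megatons of TNT per kilogram.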
Relation between potential energy and force

Potential energy is closely linked with forces. If the work done moving along a path which starts and ends in the same location is zero, then the force is said to be conservative and it is possible to define a numerical value of potential associated with every point in space. A force field can be re-obtained by taking the vector gradient of the potential field.

For example, gravity is a conservative force. The work done by a unit mass going from point A with U = a to point B with U = b by gravity is (b − a), and the work done going back the other way is (a − b), so that the total work done in going from A to B and back to A is

$U_{A \to B \to A} = (b - a) + (a - b) = 0$

If we redefine the potential at A to be a + c and the potential at B to be b + c [where c can be any number, positive or negative, but it must be the same number for all points], then the work done going from A to B is

$U_{A \to B} = (b + c) - (a + c) = b - a$

as before. In practical terms, this means that you can set the zero of U anywhere you like. You might set it to be zero at the surface of the Earth, or you might find it more convenient to set it to zero at infinity.

A thing to note about conservative forces is that the work done going from A to B does not depend on the route taken. If it did, then it would be pointless to define a potential at each point in space. An example of a non-conservative force is friction. With friction, the route you take does affect the amount of work done, and it makes no sense at all to define a potential associated with friction.

All the examples above are actually force-field-stored energy (sometimes in disguise). For example, in elastic potential energy, stretching an elastic material forces the atoms very slightly further apart. The equilibrium between electromagnetic forces and the Pauli repulsion of electrons (they are fermions obeying Fermi statistics) is slightly violated, resulting in a small returning force. Scientists rarely talk about forces on an atomic scale. Often, interactions are described in terms of energy rather than force. You can think of potential energy as being derived from force, or you can think of force as being derived from potential energy (though the latter approach requires a definition of energy that is independent from force, which does not currently exist).

A conservative force can be expressed in the language of differential geometry as a closed form. Because Euclidean space is contractible, its first de Rham cohomology vanishes, so every closed form is exact, i.e., is the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.

Notes

1. Crosbie Smith, 1998. The Science of Energy: A Cultural History of Energy Physics in Victorian Britain. Chicago: University of Chicago Press. ISBN 0226764206.
{"url":"http://www.newworldencyclopedia.org/entry/Potential_energy","timestamp":"2014-04-17T12:37:00Z","content_type":null,"content_length":"37280","record_id":"<urn:uuid:ef44ff82-0911-4db7-90ac-af2b7e442296>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial: Using the plot() function

April 19, 2012
By alstated

Hello Readers! This is my first post as a member of R-bloggers. In this post I'm going to talk about basic two-dimensional plotting in R. This is a tutorial for beginners in R. All of the commands described below are collected in one example at the end of this post.

To begin with, let's define a vector first. Say we have a vector x, which is a sequence from 1 to 10 with a length of 100 points.

Then, here's the simplest way of plotting a function of y against the vector x, using points.

Now, you can modify your plot. Say you want to add a title to your plot. Then you can just add main="Plot of sin(x) + cos(x)" to the code.

Moreover, you can also change the style of your plot by changing its type. Here are the possible types of plot:

"p" for points,
"l" for lines,
"b" for both,
"c" for the lines part alone of "b",
"o" for both overplotted,
"h" for histogram-like (or high-density) vertical lines,
"s" for stair steps,
"S" for other steps,
"n" for no plotting.

So let's say we choose type "h". Looking at the plot now, it seems that the lines are very thin. To make them thicker we need to add the option lwd, which sets the line width. Thus, to make the lines thicker, we can give a value of, say, 2.5.

In addition, you can also change the plotting 'character' by using the pch option in R.
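The code snippets themselves did not survive in this copy of the post, so here is a minimal R sketch of the commands the text describes (the variable names and the exact seq() call are assumptions):

x <- seq(1, 10, length.out = 100)   # a sequence from 1 to 10 with 100 points
y <- sin(x) + cos(x)                # the function to be plotted

plot(x, y)                                                            # simplest call: points by default
plot(x, y, main = "Plot of sin(x) + cos(x)")                          # add a title
plot(x, y, main = "Plot of sin(x) + cos(x)", type = "h")              # histogram-like vertical lines
plot(x, y, main = "Plot of sin(x) + cos(x)", type = "h", lwd = 2.5)   # thicker lines
plot(x, y, main = "Plot of sin(x) + cos(x)", pch = 19)                # change the plotting character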
{"url":"http://www.r-bloggers.com/tutorial-using-plot-function/","timestamp":"2014-04-17T21:37:49Z","content_type":null,"content_length":"43613","record_id":"<urn:uuid:238e0d0b-3bda-4f50-bc2d-b585162c1fa4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Speedup a code using apply_along_axis
Xavier Gnata xavier.gnata@gmail...
Sun Feb 28 12:51:59 CST 2010

I'm sure I'm reinventing the wheel with the following code:

from numpy import *
from scipy import polyfit, stats

def f(x, y, z):
    return x + y + z

M = fromfunction(f, (10, 10, 100))  # shape reconstructed; the original figures were lost from the archive
t = arange(M.shape[2])              # reconstructed: the abscissa used by polyfit below
threshold = 1000                    # reconstructed: "a given threshold" per the description below

def foo(ramp):
    if len(ramp[ramp < threshold]):
        return polyfit(t, ramp, 1)[0]
    return 0

print apply_along_axis(foo, 2, M)

In real life M is not the result of one fromfunction call, but that does not matter. The basic idea is to compute the slope (and only the slope) along one axis of a 3D array. Only the values below a given threshold should be taken into account.

The current code is ugly and slow. How do I remove the len and the if statement? How do I rewrite the code in a numpy-oriented way?
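One vectorized rewrite along the lines the poster asks for (this sketch is not part of the archived thread; it masks the values at or above the threshold and applies the closed-form least-squares slope over the whole array at once):

import numpy as np

w = (M < threshold).astype(float)    # 1.0 where a sample counts, 0.0 otherwise
n   = w.sum(axis=2)
st  = (w * t).sum(axis=2)            # t broadcasts along the last axis
sy  = (w * M).sum(axis=2)
sty = (w * t * M).sum(axis=2)
stt = (w * t * t).sum(axis=2)

denom = n * stt - st ** 2
slope = np.where(denom != 0,
                 (n * sty - st * sy) / np.where(denom != 0, denom, 1),
                 0.0)                # 0 where too few samples survive the mask, like foo's fallback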
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-February/049057.html","timestamp":"2014-04-16T14:07:54Z","content_type":null,"content_length":"3319","record_id":"<urn:uuid:b1a13403-e501-4318-a3ca-755661672464>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
March 25, 2011

After Cantor recovered from a stint in the insane asylum for mental breakdowns suffered while trying to prove some (we now know) intractable problems in set theory, he had a stroll with his good friend and colleague Dedekind. Dedekind asked Cantor what he pictured in his mind when he thought of sets. Dedekind remarked that when he thought about a set, he pictured a clear bag containing objects inside which are the set's elements. Cantor responded that when he thought about a set, he pictured an abyss.

In a previous post I remarked that some abstract objects occupy space and have relevant properties of concrete material objects, and thus there may not be any philosophical (epistemological or otherwise) problems in dealing with them, such as knowing them or having them cause and be caused by physical phenomena.

In mathematics, we have pure sets built from the null set by iterative or otherwise "mental operations." The old joke that the mathematical universe is an entire infinite universe begotten from nothing (the contents of the null set) is illustrative of the counter-intuitiveness of this idea. Thus, as the story goes, from the null set, we have everything we need for mathematics. But since these objects of math are all purely abstract, without concrete properties (as opposed to abstract objects with concrete elements) which may take part in the causal epistemological chain, how do we know them if, e.g., the causal theory of knowledge is correct? If mathematics is built on such foundations, we cannot appeal to impure sets, such as sets built on singletons of everyday objects, to resolve these epistemological lacunae. Mathematics foundationally built in such a way may thus be metaphysically and epistemologically counter-intuitive to many, precisely because of the knowledge problem, or some other problem with the causally inert properties of pure sets, or the begetting of things from nothing. Metaphysically, it is problematic: as the old joke suggests, it is an infinitely high house of cards built on top of the foundations of an abyss. I want to ask here if there is another alternative system that is built on firmer and more intuitive metaphysical foundations.

Now consider the classical Chinese conception of numbers. Many philosophers of the school of names and the neo-Taoist school inspired by the I-Ching (such as Wang Bi) thought that from some object, we may, by some mental operation, form the class containing that object (they did not have a modern notion of set, obviously, but did have a notion of what we would term 'class'). Now we have two objects, namely the object and its singleton class. One can go on forming more and more objects this way until one has all the objects needed for one's numbers, and thus to furnish one's mathematical universe. This would generate the positive integers instead of the natural numbers including zero, as modern mathematics would have it.

There is something to be said about the Chinese system; it is iterative like the modern conception of numbers, but it starts off from one (the singleton set of some concrete object) instead of zero (the null set). This system is ontologically well-founded on something as opposed to nothing, and is concrete "from the start." Specifically, what the Taoist founds his theory of numbers on, the original object on which all his classes are built, is just the Tao. More formally, we have, according to the Taoist conception:

Tao, {Tao}, {Tao, {Tao}}, {Tao, {Tao}, {Tao, {Tao}}}, ...
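For comparison, the standard von Neumann construction that the modern system uses runs in parallel (the von Neumann clauses below are the standard ones; the Taoist line simply transcribes the sequence above):

0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}, ... with n + 1 = n ∪ {n} (built, in the end, from nothing)

Tao, {Tao}, {Tao, {Tao}}, {Tao, {Tao}, {Tao, {Tao}}}, ... with each new class collecting everything formed so far (built, in the end, from one thing)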
Now, either the Tao or {Tao} may be arbitrarily designated as the number one. Let's say that we choose the former. One is then unlike the other numbers in that (1) it is not a class (set), and (2) every number n other than one, being a class, contains elements (namely, its predecessors), while one, not being a set, contains no elements. Or one can designate the Tao's singleton {Tao} as the number one. But then all subsequent numbers will contain the Tao, which is not itself a number, as the element used to generate all successive numbers. Alternatively, one may have 1 = Tao, 2 = {Tao}, 3 = {Tao, {Tao}}. This last interpretation seems to be the one Wang Bi favored.

Significantly, as Wang Bi makes the point in both his Yijing and Laozi commentaries, in this sense "one" is not a number but that which makes possible all numbers and functions. In the latter (commentary to Laozi 39), Wang defines "one" as "the beginning of numbers and the ultimate of things." In the former (commentary to Appended Remarks, Part I), he writes, "In the amplification of the numbers of heaven and earth [in Yijing divination] … 'one' is not used. Because it is not used, use [of the others] is made possible; because it is not a number, numbers are made complete. This indeed is the great ultimate of change." However, here, each number n will contain n − 1 elements, instead of the von Neumann formulation of modern mathematics, which has each number n contain n elements.

Thus Chinese mathematics can be impure, and would have no problem with the epistemological difficulty of knowing abstract objects, being built on the singleton of some concrete object. Everything that can be proven in the modern system of mathematics can be proven in the Chinese system, per the Löwenheim-Skolem theorem or one of its corollary theorems (since the two number systems are equinumerous and isomorphic). Zero is thus redundant for mathematics. There remain only orthographic problems of how to write numbers and mathematical formulas down, and the technique the classical Chinese used was a non-referential "placeholder" in place of '0' (the cipher).

But this system is just as counter-intuitive as the modern one, for it has problems of its own. If I start off from the singleton set containing my computer and you start off from the singleton set containing your right foot, we would have two different notions of the number one, and thus of all subsequent numbers built on it. But 1 = 1 is true, and it is necessarily true. Modern mathematics does not have this problem, because it is easy to prove that there is one and only one null set, and thus it and all numbers built on it are identical.

We may be able to obstruct this problem by introducing a single "arbitrary object" which can stand in for any object, starting from the singleton of it, and defining it as the number one. However, is this arbitrary object concrete or abstract? It might make sense to say of an arbitrary object that it is concrete, or abstract with causal properties. I don't know. But if concrete, where is this object? It may not have a specific location, but since it could "stand in" for any concrete object, it may make some sense to ascribe to it causal properties even if it is abstract, much as with the singletons of concrete objects. Kit Fine has defended arbitrary objects as having such common properties of concrete objects.
Alternatively, as pointed out above, Wang Bi and some of the other classical Chinese philosophers thought that the original object, the object that the number one is, is just the Tao, which in turn is itself not strictly definable and may be an ineffable "primitive" (I guess maybe like the term 'nothing', or perhaps 'arbitrary object'), or its singleton. The Tao certainly seems to have causal properties and may certainly be said to causally influence the world, on their conception, whatever it may be. Still, the Chinese system may not wholly escape some of the counter-intuitiveness associated with modern math. It is only slightly less counter-intuitive than the modern system, because instead of begetting the whole mathematical universe from nothing, it begets it from one thing. An infinitely high house of cards is built on the flimsiest of foundations instead of on an abyss.
{"url":"http://lapisphilosophorum333.blogspot.com/2011_03_25_archive.html","timestamp":"2014-04-19T04:20:55Z","content_type":null,"content_length":"102152","record_id":"<urn:uuid:67899dc0-d023-48b6-b080-e2422bb9d330>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating Normal (Gaussian) Distributed Random Numbers

Random numbers are an essential part of computer games and simulations. Functions in most software development tools output random numbers with uniform distribution. Simulations often need random numbers in normal distribution. An easy way to approximate normal distribution is to add three random numbers:

G = X + X + X

X = a uniformly distributed random number between -1 and 1 (a fresh draw each time).
G ~ a standard normal random number.

To adjust the result for your needs, just multiply by your desired standard deviation and add your desired mean.

R = Gσ + μ

Did You Get All That?

If not, here is a little more info with definitions, explanations and examples.

What Are Normally Distributed Random Numbers and Why Do I Need Them?

I hope you agree that it'd be nice to allow an occasional outlying figure, but make it more the exception than the rule. That is usually how the real world works, and I have always been a fan of reality. In short, we need a system that has differing probabilities, with the most common outcome in the middle and values becoming less probable as they get farther away from the average. The most common way to represent this statistically is called normal distribution or Gaussian distribution. (It is also sometimes called a bell curve but I don't like to use that term after some racist assholes wrote a book called that.)

The Basics of Normal Distribution

The first important figure is the mean, the average value at the center of the curve. The other important figure is the standard deviation, abbreviated by the Greek letter σ (sigma). This indicates the width of the curve. Smaller standard deviations mean that values close to the mean will be more common, while larger values will make it more likely that results far from the mean can occur. Compare the red, green and blue curves.

The rule of thumb is that about 68% of values will be within plus or minus 1 standard deviation from the mean, about 95% will be within ±2σ, and about 99% will be within ±3σ.

Generating Standard Normal Distribution Programmatically

Most of the information out there about normal distribution involves calculating standard deviation and mean from a collection of data points. What we want to do is exactly the opposite. I read that the simplest way of doing this is to invert the standard normal cumulative distribution function. Yes, of course! It's so obvious! Wait, no. I have no idea what that means. Maybe that is the simplest way for a mathematician to think about it, but it does not convert to a computer algorithm easily or efficiently. Several methods have been developed to solve this, including the Box-Muller transform and the Ziggurat algorithm. I got a Box-Muller transform function working in JavaScript, but I still wanted to explore a few more methods. During my research, I developed my own method that is faster than Box-Muller and is conceptually so simple that I don't need to save a copy of it. I can just re-write it when I need to. (The values it generates are not identical to the Box-Muller transform, but they are really, really close.)

Add Multiple Uniform Randoms

As you can see, I haven't thought of a snappy name for it yet. An often forgotten thing about normal distribution is that its whole point is to simulate the effect of cumulative multiple random values. Adding a few random numbers together is a fairly simple procedure. I'd like to thank the mathematicians for making it so confusing.
function rnd_snd() {
	// sum of three independent uniform(-1, 1) draws:
	// mean 0, variance 3 * (1/3) = 1, so approximately standard normal
	return (Math.random()*2-1)+(Math.random()*2-1)+(Math.random()*2-1);
}

All I have done here is added three random numbers between -1 and 1 together. That will give a normal distribution with mean = 0 and standard deviation = 1.

That function will return a decimal with an average value of 0, so it is not yet the figures we need to flesh out our fantasy characters. Fortunately, it is easy to get there from here. Multiply the result by the standard deviation that you want, then add your desired mean. Round the final result if you like. Below is a function I wrote to make that process a little easier.

function rnd(mean, stdev) {
	// scale a standard normal value to the desired mean and standard deviation
	return Math.round(rnd_snd()*stdev+mean);
}

Back to the fantasy babes. I'd like 95% of them to have chest measurements between 32" and 40", so I'll ask for a mean of 36 and a standard deviation of 2. Now you can generate a fantasy bust size.
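For the example above, that call would be rnd(36, 2). And since the post mentions a working Box-Muller implementation without showing it, here is a minimal sketch of the standard transform for comparison (this is not the author's code):

function rnd_bm() {
	// Box-Muller: turn two uniform(0,1) draws into one standard normal value
	var u = Math.random();
	while (u === 0) u = Math.random(); // avoid log(0)
	var v = Math.random();
	return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

It is a touch slower than rnd_snd() because of the log, sqrt and cos calls, which matches the speed comparison the author describes, though its output follows the normal curve exactly rather than approximately.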
{"url":"http://www.protonfish.com/random.shtml","timestamp":"2014-04-16T07:22:37Z","content_type":null,"content_length":"9115","record_id":"<urn:uuid:4f8fab06-0130-485e-9c97-88cddcd0ed3c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00193-ip-10-147-4-33.ec2.internal.warc.gz"}