Find an Avon, MA Trigonometry Tutor

...When I tutor a student, the first thing I do is evaluate which piece of the foundation is missing. Every science and math concept is built upon foundations laid last week, last month, or last year. Trying to teach a concept or technique without the right supports beneath it merely postpones the problem.
12 Subjects: including trigonometry, chemistry, physics, calculus

...I have a Ph.D. in experimental atomic physics and have been programming for 30 years. I know Java, C++, MATLAB, and some older, less-used languages such as Pascal and Mathematica. I've written over 500 routines for graphing and data analysis in MATLAB.
47 Subjects: including trigonometry, reading, chemistry, geometry

...I am confident you will find that my added experience in all these subjects gives me excellent insight into how best to approach the teaching of Algebra 1. I have taught Algebra 2 for a number of years. I have also served as a private tutor in this subject for more than 25 years.
6 Subjects: including trigonometry, geometry, algebra 2, prealgebra

...I challenge gifted students to keep them motivated. I started my career as an electronics/software engineer, and computers are still my passion. I teach introductory and Advanced Placement Computer Science and have taught computer applications (like Word and Excel) to adults of all backgrounds.
17 Subjects: including trigonometry, geometry, precalculus, statistics

...My previous years as an administrative support person gave me experience proofreading many types of documents: contracts, reports, scientific publications, instruction manuals, and presentations. I would be happy to help you by proofreading your work! Although I am not a certified instructo...
18 Subjects: including trigonometry, English, writing, statistics
{"url":"http://www.purplemath.com/Avon_MA_Trigonometry_tutors.php","timestamp":"2014-04-17T04:24:28Z","content_type":null,"content_length":"24142","record_id":"<urn:uuid:f54975b3-3a78-476f-b692-950123e8aae3>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 74

2000. Cited by 1057 (71 self):
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.

JOURNAL OF LOGIC PROGRAMMING, 1994.

IEEE Transactions on Systems Science and Cybernetics, 1968. Cited by 166 (3 self):
...the case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determined by the prior information, independently of the choice of parameters. In a certain class of problems, therefore, the prior distributions may now be claimed to be fully as "objective" as the sampling distributions. I. Background of the problem: Since the time of Laplace, applications of probability theory have been hampered by difficulties in the treatment of prior information. In realistic problems of decision or inference, we often have prior information which is highly relevant to the question being asked; to fail to take it into account is to commit the most obvious inconsistency of reasoning and may lead to absurd or dangerously misleading results. As an extreme examp...

1993. Cited by 70 (4 self):
Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless, propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence E confirms a hypothesis H. Incremental and absolute confirmation: Let us say that E raises the probability of H if the probability of H given E is higher than the probability of H not given E. According to many confirmation theorists, "E confirms H" means that E raises the probability of H. This conception of confirmation will be called incremental confirmation. Let us say that H is probable given E if the probability of H given E is above some threshold. (This threshold remains to be specified but is assumed to be at least one half.) According to some confirmation theorists, "E confirms H" means that H is probable given E. This conception of confirmation will be called absolute confirmation. Confirmation theorists have sometimes failed to distinguish these two concepts. For example, Carl Hempel in his classic "Studies in the Logic of Confirmation" endorsed the following principles: (1) A generalization of the form "All F are G" is confirmed by the evidence that there is an individual that is both F and G. (2) A generalization of that form is also confirmed by the evidence that there is an individual that is neither F nor G. (3) The hypotheses confirmed by a piece of evidence are consistent with one another. (4) If E confirms H then E confirms every logical consequence of H. Principles (1) and (2) are not true of absolute confirmation. Observation of a single thing that is F and G cannot in general make it probable that all F are G; likewise for an individual that is neither...

In Proc. 7th IEEE Symp. on Logic in Computer Science, 1994. Cited by 49 (12 self):
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, ..., N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...

1993. Cited by 45 (8 self):
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...

1995. Cited by 35 (3 self):
Consider the following problem. You are given an alphabet of k distinct symbols and are told that the i-th symbol occurred exactly n_i times in the past. On the basis of this information alone, you must now estimate the conditional probability that the next symbol will be i. In this report, we present a new solution to this fundamental problem in statistics and demonstrate that our solution outperforms standard approaches, both in theory and in practice.

International Journal of General Systems, 1977. Cited by 34 (23 self):
This paper is concerned with establishing broadly-based system-theoretic foundations and practical techniques for the problem of system identification that are rigorous, intuitively clear and conceptually powerful. A general formulation is first given in which two order relations are postulated on a class of models: a constant one of complexity, and a variable one of approximation induced by an observed behaviour. An admissible model is such that any less complex model is a worse approximation. The general problem of identification is that of finding the admissible subspace of models induced by a given behaviour. It is proved under very general assumptions that, if deterministic models are required, then nearly all behaviours require models of nearly maximum complexity. A general theory of approximation between models and behaviour is then developed, based on subjective probability concepts and semantic information theory. The role of structural constraints such as causality, locality, finite memory, etc., is then discussed as rules of the game. These concepts and results are applied to the specific problem of stochastic automaton, or grammar, inference. Computational results are given to demonstrate that the theory is complete and fully operational. Finally, the formulation of identification proposed in this paper is analysed in terms of Klir's epistemological hierarchy, and both are discussed in terms of the rich philosophical literature on the acquisition of knowledge.

International Journal of Approximate Reasoning, 1994. Cited by 33 (25 self):
Non-Axiomatic Reasoning System is an adaptive system that works with insufficient knowledge and resources. At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision.
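The next-symbol estimation problem posed in the 1995 entry above is classically handled by additive (Laplace) smoothing, which is the standard approach that report claims to improve upon. The report's own estimator is not given here; as a hedged illustration of the baseline only, with a function name of our choosing, the add-one estimate (n_i + 1)/(N + k) can be sketched as:

```ruby
# Baseline add-one (Laplace) estimator for the next-symbol problem:
# given counts for k distinct symbols, estimate P(next symbol == i).
# Illustrative only; the cited report's improved estimator differs.
def laplace_estimate(counts)
  k = counts.length            # alphabet size
  n = counts.reduce(0, :+)     # total number of past observations
  counts.map { |n_i| Rational(n_i + 1, n + k) }
end

probs = laplace_estimate([3, 0, 1])   # k = 3 symbols observed 4 times
# probs == [4/7, 1/7, 2/7]; the estimates always sum to 1,
# and unseen symbols still get non-zero probability.
```

Note that unlike the maximum-likelihood estimate n_i/N, this never assigns probability zero to an unseen symbol.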
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=133513","timestamp":"2014-04-17T17:02:54Z","content_type":null,"content_length":"37674","record_id":"<urn:uuid:88f33809-7885-4b4c-9fdb-adf6499bc4e1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Southlake Math Tutor

...Prealgebra is the key to success in all future math courses. The basics learned in this course largely determine the student's success in Algebra as well as Geometry. I have over 22 years of secondary mathematics teaching experience and over 5 years of one-on-one tutoring experience.
15 Subjects: including statistics, algebra 1, algebra 2, calculus

...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well versed in the current research on learning, memory, and instructional practices. I use this knowledge to identify underlying process...
39 Subjects: including ACT Math, English, statistics, reading

...I have a Bachelor of Science degree in Multidisciplinary Studies (double major in Education and Counseling, minor in Computer Science) and an advanced degree in Biblical Counseling/Life Coaching. Does your student need targeted core-subject support, homework assistance, a boost in confidence, intrins...
27 Subjects: including prealgebra, English, reading, writing

...For the visual learner, I work with various visual illustrations, flash cards and other activities that aid the student in their studies. Study skills also include doing research, planning out projects with definite steps toward completion, and learning how to outline and take...
8 Subjects: including algebra 1, vocabulary, grammar, prealgebra

...I love to teach mathematics, not just to solve problems, but also to make sure concepts and fundamentals are clear. I have experience teaching Pre-Algebra, Algebra 1, Geometry and Algebra 2. I also have expertise in more complex mathematics such as Statistics, Pre-Calculus, and Operations Research.
12 Subjects: including calculus, SAS, linear algebra, algebra 1
{"url":"http://www.purplemath.com/southlake_math_tutors.php","timestamp":"2014-04-21T02:37:33Z","content_type":null,"content_length":"23705","record_id":"<urn:uuid:1e4c3242-69d9-4a65-99a4-b57e91678726>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
8. Independent and Dependent Events

If the occurrence or non-occurrence of E[1] does not affect the probability of occurrence of E[2], then P(E[2] | E[1]) = P(E[2]) and E[1] and E[2] are said to be independent events. Otherwise they are said to be dependent events. [Recall from Conditional Probability that the notation P(E[2] | E[1]) means "the probability of the event E[2] given that E[1] has already occurred".]

Two Events

Let "E[1] and E[2]" denote the event that both E[1] and E[2] occur.
If E[1] and E[2] are dependent events, then: P(E[1] and E[2]) = P(E[1]) × P(E[2] | E[1])
If E[1] and E[2] are independent events, then: P(E[1] and E[2]) = P(E[1]) × P(E[2])

Three Events

For three dependent events E[1], E[2], E[3], we have: P(E[1] and E[2] and E[3]) = P(E[1]) × P(E[2] | E[1]) × P(E[3] | E[1] and E[2])
For three independent events E[1], E[2], E[3], we have: P(E[1] and E[2] and E[3]) = P(E[1]) × P(E[2]) × P(E[3])

Example 1: If the probability that person A will be alive in 20 years is 0.7 and the probability that person B will be alive in 20 years is 0.5, what is the probability that they will both be alive in 20 years?

Example 2: A fair die is tossed twice. Find the probability of getting a 4 or 5 on the first toss and a 1, 2, or 3 on the second toss.

Example 3: Two balls are drawn successively without replacement from a box which contains 4 white balls and 3 red balls. Find the probability that (a) the first ball drawn is white and the second is red; (b) both balls are red.

Example 4: A bag contains 5 white marbles, 3 black marbles and 2 green marbles. In each draw, a marble is drawn from the bag and not replaced. In three draws, find the probability of obtaining white, black and green in that order.
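The four examples above can be checked with a short script; Ruby's Rational keeps the die and marble answers exact. The variable names are ours, not from the lesson:

```ruby
# Worked answers to Examples 1-4, using Rational for exact fractions.

# Example 1: the two lifetimes are independent events.
p_both_alive = 0.7 * 0.5                              # 0.35

# Example 2: the two tosses are independent.
# P(4 or 5 on first toss) = 2/6, P(1, 2 or 3 on second toss) = 3/6.
p_tosses = Rational(2, 6) * Rational(3, 6)            # 1/6

# Example 3: drawing without replacement gives dependent events,
# so the second factor is a conditional probability.
p_white_then_red = Rational(4, 7) * Rational(3, 6)    # (a) 2/7
p_both_red       = Rational(3, 7) * Rational(2, 6)    # (b) 1/7

# Example 4: three dependent draws; multiply the conditional probabilities.
p_white_black_green =
  Rational(5, 10) * Rational(3, 9) * Rational(2, 8)   # 1/24
```

Each product follows the dependent-events formula above: the second and third factors are conditioned on the earlier draws having removed a ball or marble from the container.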
{"url":"http://www.intmath.com/counting-probability/8-independent-dependent-events.php","timestamp":"2014-04-18T10:35:59Z","content_type":null,"content_length":"24923","record_id":"<urn:uuid:43bcb116-8670-4e10-889f-deefb46fdec0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
The Ruby Object Model
Data structure in detail

A semi-formal description of the Ruby object model is presented. The data structure of the Ruby programming language is described in a series of state transition systems, each built atop the previous one(s). As a result, a nomenclature is proposed (including such notions as class, eigenclass, metaclass, module, includer or method resolution order) together with the semantics.

Author: Ondřej Pavlata, Jablonec nad Nisou, Czech Republic
Initial release: March 2, 2011
Last major release: June 21, 2012
Last update: October 10, 2012 (see Document history)

1. This document has been created without any prepublication review except those made by the author himself.
2. The author is not an experienced Ruby programmer. In fact, he did not write any Ruby program except very simple tests.
3. Most of the author's experience with dynamic programming languages comes from JavaScript.

Ruby version: This document refers to Ruby 1.9, more specifically, to Matz's Ruby Interpreter (MRI) 1.9.2.

Some familiarity with elementary algebra and order theory is assumed. This involves such notions as structure, substructure, sort, generating set/structure, (partial) function, relation, domain, range, restriction, extension, injectivity, reflexivity, transitivity, monotonicity, isomorphism, (order) chain, and closure operator.

Monounary algebra

A (total) monounary algebra (aka functional graph []) is an algebra with a single unary operation, i.e. it is a structure (X, .p¯) such that
• X is a set,
• .p¯ is a function X → X.
A partial monounary algebra is a structure (X, .p) where .p is a partial function on X (i.e. .p : X ↷ X). An element x ∈ X is called a fixed point if x.p = x. There is a one-to-one correspondence between monounary algebras and partial monounary algebras without fixed points: the fixed points become undefined and vice versa.
This can be notationally expressed by adding/removing an overline (.p¯ ↔ .p). We denote by x.p(i) the i-th application of .p to x (we put x.p(0) = x). A subset C ⊆ X is called a
• cycle if it is of the form C = {x, x.p(1), …, x.p(n)} such that 1 ≤ n and x = x.p(n+1) (in particular, C is finite);
• ω-chain if it is of the form C = {x, x.p(1), x.p(2), …} where x.p(i) ≠ x.p(j) for i ≠ j, i.e. (C, .p) is isomorphic to the succession structure of natural numbers (in particular, C is infinite);
• component if it is a component of (X, .p) as a relation, i.e. C is maximal such that for every x, y ∈ C there are i, j such that x.p(i) = y.p(j).
A (partial) monounary algebra (X, .p) is connected if X itself is a component. A structure is said to be locally finite if its every finitely generated substructure is finite. For a (partial) monounary algebra (X, .p) this means that for every x ∈ X, the set {x, x.p(1), x.p(2), …} is finite.

Proposition: Let (X, .p¯) be a connected (total) monounary algebra.
1. Either (a) or (b) occurs:
□ (a) X contains exactly one cycle and no ω-chains.
□ (b) X contains no cycle and an ω-chain (consequently, infinitely many ω-chains).
2. Case (a) occurs iff (X, .p¯) is locally finite. (In particular, every finite connected monounary algebra has exactly one cycle.)

Pseudotree

We call a monounary algebra (X, .p¯) a pseudotree if it is connected and locally finite. Equivalently (by the proposition from the previous subsection), X contains exactly one cycle C and no ω-chains. We call C the pseudoroot. The following picture shows a pseudotree with a 4-element pseudoroot C. An arrow x → y means x.p¯ = y. The brown arrow indicates that after choosing an r ∈ C and redefining .p¯ to .p¯[r] by
• r.p¯[r] = r,
• x.p¯[r] = x.p¯ whenever x ≠ r,
we obtain a structure (X, .p¯[r], r) that is an algebraic tree.

Algebraic forest

By an algebraic forest (or just forest) we mean a structure which has one of the following equivalent forms:
(1) A partial order whose every principal up-set is a finite chain: a partial order (X, ≤) such that for every x ∈ X, the set x.ps = { y ∈ X | x ≤ y } is finite and totally ordered by ≤.
(2) A monounary algebra whose every non-empty subalgebra has a fixed point: an algebra (X, .p¯) with just one unary operator, .p¯ : X → X, such that for every x ∈ X, x.p¯(i) = x.p¯(i+1) for some natural i.
(3) A partial monounary algebra without total non-empty subalgebras: a partial algebra (X, .p) with just one partial unary operator, the parent partial function .p : X ↷ X, such that for every x ∈ X, x.p(i) is defined for only finitely many i.

Because the set x.ps from (1) is a finite chain in ≤, we can regard it as a finite list. The correspondence (1)↔(2)↔(3) is then established as follows, where (∗) denotes the condition "x ≤ y iff x.p¯(i) = y (resp. x.p(i) = y) for some i":
• (2)←(1): x.p¯(i) = x.ps[i] if i ≤ x.ps.length, and x.p¯(i) = x.ps.last otherwise; i.e. .p¯ is the smallest function on X (w.r.t. ⊆) satisfying (∗).
• (1)←(2): (X, ≤) is the reflexive transitive closure of (X, .p¯).
• (3)←(1): x.p(i) = x.ps[i] if i ≤ x.ps.length - 1, and x.p(i) is undefined otherwise; i.e. .p is the smallest relation on X satisfying (∗); equivalently, (X, .p) is the reflexive transitive reduction of (X, ≤) (a.k.a. the Hasse diagram).
• (1)←(3): (X, ≤) is the reflexive transitive closure of (X, .p).

The root map .r : X → X is defined by x.r = x.ps.last. Obviously, .r is a closure operator w.r.t. (X, ≤). An element x is a root if it satisfies any of the following equivalent conditions:
1. x is .r-closed,
2. x is a fixed point w.r.t. .p¯,
3. x has undefined parent x.p.

1. We consider the partial order form (1) as the primary one, so we usually prepend the phrase "reflexive transitive closure of" when referring to algebraic forests of form (2) or (3).
2. The form (3) shows that an algebraic forest can be viewed as a special case of a directed acyclic graph (DAG), which in turn is a special case of a digraph (a directed graph).
In this context the term "algebraic forest" is equivalent to that of a rooted forest [].

Algebraic tree

By an algebraic tree (or just tree) we mean an algebraic forest with exactly one root. As an algebra, it is a structure (X, .p¯, r) such that
1. (X, .p¯) is an algebraic forest,
2. r is the only fixed point of .p¯.
A structure (X, .p¯, r) is an algebraic tree iff (X, .p¯) is a pseudotree with the pseudoroot being the singleton set {r}.

Primorder algebra

By a primorder algebra we mean a structure (X, .ec, .pr) where
• X is a set,
• .ec is a map X → X; x.ec is called the eigenclass of x,
• .pr is a map X → X; x.pr is called the primary element of x.
Elements from the range X.pr are primary. Elements from the range X.ec are eigenclasses. The structure is subject to the following axioms:
• (1) .ec is injective. We denote by .ce the inverse of .ec. If defined, x.ce is the (direct) eigenclass predecessor of x.
• (2) The partial algebra (X, .ce) is a forest with the root map equal to .pr.
We write x.ec(i) for the i-th application of .ec to x. The eigenclass index of x, denoted x.eci, is defined as the depth of x in (X, .ce), i.e. it is the unique i such that x.pr.ec(i) = x.

1. Each component of the monounary algebra (X, .ec) is isomorphic (via .eci) to the structure (ℕ, succ) of natural numbers, where succ is the successor operator.
2. X = X.pr ⊎ X.ec, i.e. an element is either primary or an eigenclass.
3. For an element x the following are equivalent:
□ x is primary.
□ x is the primary element of itself.
□ x has no eigenclass predecessor.
□ x has zero eigenclass index.
4. A primorder algebra (X, .ec, .pr) is uniquely given by its reduct (X, .ec).
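In MRI, the eigenclass map .ec is observable as Object#singleton_class, so the chain structure described above can be checked directly; the sample object and variable names below are this sketch's choices, not the document's:

```ruby
# In MRI 1.9+, x.singleton_class realizes the eigenclass map .ec.
# Repeated application walks the chain x.ec(1), x.ec(2), ...;
# by injectivity the chain never revisits an element, so each
# component of (X, .ec) is an N-like chain (remark 1 above).
x   = "a primary object"      # a terminal, eigenclass index 0
ec1 = x.singleton_class       # x.ec,    eigenclass index 1
ec2 = ec1.singleton_class     # x.ec(2), eigenclass index 2

x != ec1 && ec1 != ec2        # true: the chain keeps producing new objects
x.singleton_class.equal?(ec1) # true: .ec is a function; MRI memoizes
                              # the singleton class per object
```

The inverse map .ce has no public accessor in MRI; the interpreter materializes eigenclasses lazily, which is consistent with each .ec component being an unbounded chain.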
State transition system

By a state transition system we mean a structure (C, Act, T, r) where
• C is a state domain, which is a set-theoretic class of (many-sorted) structures with a common signature; elements of C are states,
• Act is an action domain, which can be written as Act = Λ × P where
□ Λ is a label domain,
□ P is a transition parameters domain consisting of finite lists of elements from domains of sorts of C,
• T is a transition relation, which is a subset of C × Act × C,
• r is an element of C called the initial state.
The reachability relation, R*, is defined as the reflexive transitive closure of the natural projection R of T to C × C. The class of reachable states is the subclass D of C that equals the image of the initial state r under R*.

Data structure description pattern

We will describe the Ruby program data structure as a system of systems of structures.
• The word structure refers to an algebraic structure; more precisely, structures form a set-theoretic class C of algebraic structures with equal signature.
• The word system refers to a state transition system with state domain C. This system induces a subdomain D of reachable states which specifies the data structure for a particular level of description.
• The phrase system of systems refers to incremental specification. We start with a most abstract description of the Ruby object model and provide subsequent descriptions by refinement of previous ones.
We might think of this pattern as a two-dimensional evolution of algebraic structures:
1. The outer evolution refers to our incremental description.
2. The inner evolution is relevant to Gurevich's concept of evolving algebras [] [] [].

S0: 2x2 nomenclature

Ruby objects can be categorized according to 2 boolean attributes, called terminality and primordiality. Based on terminality, objects can be either terminative or classive. Based on primordiality, objects can be either primary or secondary.
Most of the nomenclature related to this categorization is as follows:
• primary classive objects are called classes;
• primary terminative objects are called terminals;
• secondary objects are the eigenclasses.

We avoid using the term instances for terminals, because we would otherwise have to accept one of the following statements:
• There are objects that are not instances but are instances of other objects.
• The class Class has no instances. (This is in contradiction with most of present Ruby literature.)
We avoid using the term singleton classes for eigenclasses or for terminative eigenclasses, because otherwise we would have to accept the following:
• Singleton classes are not classes.

We formalize the 2x2 nomenclature as the S0 structure: an S0 structure is a structure (O, .terminative?, .primary?) where
• O is a set of (all) objects,
• .terminative? and .primary? are boolean attributes of objects indicating whether an object is terminative resp. primary.

S1: Inheritance and primorder

An S1 structure is a structure (O, .ec, .pr, .sc, r, c) where
• O is a set of (all) objects,
• .ec is a total function between objects; x.ec is called the eigenclass of x,
• .pr is a total function between objects; x.pr is called the primary object of x,
• .sc is a partial function between objects; x.sc (if defined) is called the superclass of x,
• r is an object, called the inheritance root,
• c is an object, called the instance root or metaclass root (by (S1~7), c is a shorthand for r.ec.sc).

We introduce additional notation and terminology.
• We say that an object is primary if it is the primary object of itself.
• We say that an object x is terminative if x.pr is not r and x.pr.sc is not defined.
• Having specified the sets of primary objects and terminative objects, the 2x2 nomenclature is defined (so that S0 is a reduct of a conservative extension of S1).
In particular, the partition of objects into terminals, classes and eigenclasses is specified (corresponding to the boolean attributes .terminal?, .class? and .eigenclass?, respectively).
• x.sc(i) (resp. x.ec(i)) denotes the i-th application of .sc (resp. .ec) to x (we allow the 0th application whenever the 1st application is defined).
• The reflexive transitive closure of the superclass partial function .sc (with the reflexive closure only applied to non-terminals) is denoted ℍ and called the (.sc-) inheritance.
• The reflexive transitive closure of the eigenclass function .ec is denoted ℙ and called the primorder.

The structure is subject to the following axioms:
(S1~1) The structure (O, .ec, .pr) is a primorder algebra. We will apply the established notation and terminology, in particular the definition of .ce and .eci.
(S1~2) The inheritance ℍ is an algebraic tree on non-terminals; its root is r. More specifically, for every object x,
• x.sc is defined iff x is neither the inheritance root r nor a terminal,
• if x.sc is defined, then x.sc(n) equals r for some, necessarily unique, n.
(S1~3) .sc and .ec are commutative in the following sense: if x.sc is defined, then x.ec.sc is defined and is equal to x.sc.ec. (.sc-links per .ec-chains are "parallel".)
(S1~4) .sc preserves primary objects on non-terminals, i.e. x.sc is a class for every non-root class x.
(S1~5) .ec.sc preserves primary objects on terminals, i.e. x.ec.sc is primary for every terminal x.
(S1~6) .sc maps to classives, i.e. for every object x, if x.sc is defined then x.sc.pr is a class.
(S1~7) c equals r.ec.sc and is different from r (this prevents the degenerate case of r being the only class).
(S1~8) r.ec is the only object x satisfying x.sc == c.

1. c is a class.
2. Classes form an algebraic subtree of the inheritance tree, i.e.
□ r is a class.
□ For every non-root class x, x.sc is a class.
3.
Classes together with first eigenclasses of terminals form an algebraic subtree of the inheritance tree which uniquely determines the inheritance tree. The subclass-of relation If x, y are classes such that x.sc(i) == y for some i > 0 then we say that x is a subclass-of y. By a previously mentioned proposition, the reflexive closure of the subclass-of relation is an algebraic tree on classes. Eigenclass index Recall that in a primorder algebra, for any object x, the eigenclass index of x, denoted x.eci, is the unique i such that x == x.pr.ec(i). Proposition: The superclass operator .sc decrements the eigenclass index on (all) eigenclasses of terminals and on (all) eigenclasses of the inheritance root r. On other objects, the index is preserved. I.e. for every object y for which y.sc is defined, • (a) y.sc.eci == y.eci - 1 if y.pr equals either r or some terminal, • (b) y.sc.eci == y.eci otherwise. Informally, .sc-links are “skew” in case (a) and “upright” in case (b). Inheritance ancestor lists For a non-terminal x we denote by x.hancs the list of inheritance ancestors of x. More specifically, .hancs is a partial function from the set O of objects to the set of lists of objects defined by 1. x.hancs is defined iff x is non-terminal, 2. x.hancs[i] == y iff x.sc(i) == y. We also denote the class inheritance-ancestors of a non-terminal x by x.hancestors, i.e. x.hancestors equals x.hancs without eigenclasses. An object is called a metaclass if it is an inheritance descendant of c, i.e. a metaclass is a non-terminal object x such that x.hancs contains c. A metaclass is said to be • explicit if it is a class, and • implicit otherwise (i.e. if it is an eigenclass). • The metaclass root c is the only explicit metaclass. • An object is an implicit metaclass iff it is an eigenclass of a non-terminal object. • Implicit metaclasses form a subtree in the inheritance, rooted at r.ec. 
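Several of these consequences can be checked directly in Ruby 1.9+, where the eigenclass map .ec is exposed as `singleton_class`; a minimal sketch:

```ruby
class A; end
class B < A; end
b = B.new

# (S1~3): .sc and .ec commute -- the superclass of B's eigenclass
# is the eigenclass of B's superclass.
raise unless B.singleton_class.superclass == A.singleton_class

# Eigenclasses of non-terminal objects are implicit metaclasses,
# i.e. inheritance descendants of c == Class:
raise unless A.singleton_class < Class
# Eigenclasses of terminals are not metaclasses:
raise unless !(b.singleton_class < Class)
```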
Primary inheritance
We denote by .ec_sc_pr the restriction of .ec.sc.pr to the set O.pr ∖ {r} of all primary objects except the inheritance root. Obviously, .ec_sc_pr is an algebraic tree. For every primary x,
• x.ec_sc_pr == x.sc if x is a class,
• x.ec_sc_pr == x.class == x.ec.sc if x is a terminal (*).
We call .ec_sc_pr the primary inheritance.
The front pseudotree
By restricting .ec.sc.pr to just the set O.pr of all primary objects we obtain Ruby's front pseudotree. Its pseudoroot is the set of helix classes introduced in the next section. The primary inheritance can then be called the front tree. The picture from the Pseudotree introductory section provides a visualization of these two structures.
The S1[₀₁] substructure
By an S1[₀₁] structure we mean the substructure (O[₀₁], .sc, .ec) of an S1 structure such that
• O[₀₁] is the set of objects with eigenclass index equal to 0 or 1,
• .sc is the restriction of the superclass partial map to O[₀₁],
• .ec is the restriction of the eigenclass map to primary objects – so that .ec is only partial in S1[₀₁] (and is a bijection between primary objects and first eigenclasses).
Proposition: Up to isomorphism, an S1 structure is uniquely determined by its S1[₀₁] substructure.
The Ruby Helix
According to the algebraic definition, the least substructure of a structure S is generated from an empty set using S's constants and functions. Applied to an S1 structure, the least substructure is generated from the inheritance root r using .sc, .ec and .pr. We call this minimum S1 substructure the Ruby helix. The name comes from the fact that the substructure resembles a helical threading of a right-infinite screw. The inheritance corresponds to the helix curve, the primorder corresponds to imaginary lines parallel to the screw axis. This is stated more precisely as follows.
1. The Ruby helix consists of the classes c.hancs and their .ec chains. In particular, the Ruby helix contains no terminals.
2. When restricted to the helix, .sc is injective.
3. Let n be the number of helix classes, i.e. n = c.hancs.length. Then x.ce == x.sc(n) for every helix eigenclass x.
Helix classes
As of version 1.9, Ruby provides just 4 helix classes, forming the following chain: Class < Module < Object < BasicObject. In particular, BasicObject is the inheritance root r. The following table shows basic notation and terminology for helix classes.

    Document notation   Description                          Ruby name
    r                   the inheritance root                 BasicObject
    ¤                   the conventional inheritance-root    Object
    m                   the metamodule root                  Module
    c                   the instance root / metaclass root   Class

A sample S1 structure
The following table shows a sample restriction of an S1 structure. The .sc function is indicated by arrows ↖ and ↑, the .ec function by the → arrows. Objects A, B and b are assumed to be created by the code
class A; end; class B < A; end; b = B.new
[Diagram not reproducible in this rendering. It shows primary objects (alias classes and terminals) in the left column and their .ec chains (secondary objects, alias eigenclasses, at eigenclass indices 1, 2, …) extending to the right: the helix classes Class, BasicObject, Object, Module (the Class row displayed twice as the "skew seam" of the 4-row cylinder), the classives A and B, and the terminative b; ↑ arrows are the "upright" .sc-links between corresponding rows, ↖ arrows the "skew" .sc-links at the seam and at b's eigenclass chain.]
The .class function alias direct-instance-of relation
The .class function maps objects to classes. For every object x,
• x.class == x.ec.sc if x is terminal,
• x.class == r.ec.sc (== c) otherwise.
Proposition: For every object x, x.class equals the "entry-object" of the class subtree when traversed from the eigenclass of x, i.e. x.class == x.ec.hancestors[0].
For objects x, y, if x.class == y then we say that y is the class of x and that x is a direct instance of y. The reflexive transitive closure of .class is an algebraic tree with a 2-3 level structure shown in the following table (we use the name Class for the instance root c).
    level depth     level members
    0 (top level)   Class
    1               Classes except Class; Eigenclasses
    2               Terminals

Note that such a division is only possible due to (S1~8). This axiom ensures that class Class has no (direct) terminal instances.
The instance-of relation
We define the instance-of relation between objects as the composition .class ◦ ℍ. Equivalently, x is an instance-of y iff x.class.sc(i) == y for some i.
Proposition: x is an instance-of y iff x.ec.hancestors contains y.
State transitions
The S1 structure is a simplification (or coarsement) of Ruby program data state. Ruby program execution can be abstracted to a (finite or infinite) sequence S[0], S[1], …, of abstract states or just states. The sequence itself is called an (abstract) run. S[0] is the initial state. For each state S[i], we denote by S[i].S1 its corresponding S1 structure. In addition, we use the following notational conventions: By saying that S → S' is a state transition we mean that S and S' are abstract states with S appearing before S' in the run. We use state indices or apostrophes to distinguish between structures for particular states, e.g. O[i] denotes the set of objects of S[i].S1. We drop indices whenever no confusion is likely to arise, so that O means O[i].
The following rules apply:
(S1-T~1) State transitions are substructural in the S1 structure, i.e. for all states S[i], S[j], the restriction of S[i].S1 and S[j].S1 to their common object set O[i] ∩ O[j] is equal. More specifically, for every object x from O[i] ∩ O[j],
• x is terminative in S[i] iff x is terminative in S[j].
• r[i] equals r[j].
• x.sc[i], x.ec[i] and x.pr[i] are equal to x.sc[j], x.ec[j] and x.pr[j], respectively.
Consequently, transitions preserve the 2x2 nomenclature, eigenclass index, inheritance ancestor lists and the class map.
(S1-T~2) For every consecutive states S[i], S[i+1], either O[i] ⊆ O[i+1] or O[i] ⊇ O[i+1].
(S1-T~3) O[0] ⊆ O[i] for every i, i.e. the initial objects are never removed.
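The .class map, the instance-of relation and the helix "seam" described above can all be observed in Ruby 1.9+ (eigenclasses via `singleton_class`); a minimal sketch:

```ruby
class A; end
class B < A; end
b = B.new

# .class of a terminal is x.ec.sc -- the first class above b's eigenclass:
raise unless b.class == B
raise unless b.singleton_class.superclass == B
# .class of any non-terminal is c == Class (the top of the .class tree):
raise unless B.class == Class && Class.class == Class

# instance-of: b.class.sc(i) == A for some i,
# equivalently b.ec.hancestors contains A:
raise unless b.is_a?(A)
raise unless b.singleton_class.ancestors.include?(A)

# the helix seam: the superclass of BasicObject's eigenclass is Class:
raise unless BasicObject.singleton_class.superclass == Class
```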
According to (S1-T~2), there are two types of irreducible transitions affecting the S1 structure: increasing (creating new objects) and decreasing. Creating new objects This transition adds new primary objects x and their .ec chains to the S1 structure. The following must be specified, explicitly or implicitly, for each such x: • The requested value of x.ec.sc.pr (i.e. the parent in the primary inheritance .ec_sc_pr – necessarily a class). • The requested terminality of x - i.e. whether x should be (a) a terminal or (b) a class. In most explicit cases, the transition is accomplished by specifying a class X, different from Class, and then creating a new object x such that x.ec.sc.pr == X either by (a) instantiation of X or (b) subclassing from X, as in the following code: • (a) x = X.new. This creates a new terminal x. • (b) x = Class.new(X). This creates a new class x. Removing existing objects This type of transition is only performed by garbage collection. Instantiable classes We say that a class y is instantiable if it is allowed to have instances, i.e. in some reachable state, • there is an object x such that x is instance-of y. Final classes We say that a class y is final if • y has no subclasses, and • this condition is preserved by transitions. Quasifinal classes We say that a class y is quasifinal if • y does not have instantiable subclasses, and • this condition is preserved by transitions. 1. Final classes are quasifinal. 2. Except for Class, quasifinal classes are those that cannot have indirect instances. S2: Module inclusion lists An S2 structure is an S1 structure equipped with (m, .incs) where • m is a class, called the root metamodule, • .incs is a partial function from the set O of objects to the set of (finite) lists of objects. Lists from the range of .incs are called (own) inclusion lists. Descendant classes of m different from Class are called metamodules. Terminals that are instances of m are modules. 
An S2 structure has the following (additional) axioms:
(S2~1) The root metamodule m equals the already presented helix class Module.
(S2~2) x.incs is defined iff x is a module or a non-terminal.
(S2~3) Only modules can appear in inclusion lists.
(S2~4) A module cannot appear in its own inclusion list.
(S2~5) All inclusion lists are non-repetitive (injective), i.e. no object appears more than once in the same inclusion list.
We say that an object is an includer if it is a module or a non-terminal. Thus .incs maps includers to lists of includers. Note that this terminology admits includers with no own includees.
Proposition: An object is an includer iff it is an instance of Module.
The own-includer-of relation
For includers x, y, if y occurs in x.incs then y is called an own includee of x and x is called an own includer of y. We also say that y is directly included in x. The reflexive closure, restricted to includers, of the own-includer-of relation is denoted by Μ.
Pure instances
We introduce an additional term for non-includers: pure instances. This means that an object x is a pure instance if it is an "end user" of the type system: x is neither a class nor an eigenclass nor a module.
The HM descendancy
We define a relation ≤ between includers as (ℍ ◦ Μ) ∪ Μ. Recall that
1. ℍ is the .sc-inheritance, and
2. Μ is the reflexive closure of the own-includer-of relation.
Equivalently, x ≤ y iff x and y are includers such that at least one of the following is satisfied:
1. x == y,
2. x.sc(i) == y for some i,
3. x.incs[i] == y for some i,
4. x.sc(i).incs[j] == y for some i, j.
We call this relation HM descendancy or simply descendancy. We use the symbol < for the strict version of ≤ (and symbols ≥ and > for inverses of ≤ and <, respectively).
1. The restriction of descendancy to non-terminals equals the .sc-inheritance.
2. Contrary to the semantic convention for the ≤ symbol, HM descendancy is not a partial order in general.
Neither transitivity nor acyclicity (antisymmetry of the transitive closure) is guaranteed. (See Inclusion anomalies for examples.)
3. For every includers x, y such that x ≤ y,
□ x not being an eigenclass implies y not being an eigenclass,
□ x being a module implies y being a module.
The includer-of relation
The includer-of relation is defined as the range-restriction of < to modules, i.e.
• x is an includer of y iff x < y and y is a module.
We also say that y is an includee of x or that y is included in x.
Proposition: The includer-of relation is an extension of the own-includer-of relation.
The kind-of relation
The kind-of relation is defined between all objects as the composition .ec ◦ ≤. Equivalently,
• x is kind of y iff x.ec ≤ y.
1. An object cannot be kind of a pure instance.
2. An object is kind of its eigenclass.
3. The instance-of relation is a range-restriction of the kind-of relation to classes.
If X is an includer then Xs means the set of objects that are kind of X. This convention is usually applied to named classes or modules.
• The word "kind" in the phrase "(l) is kind of (r)" construes with the right side of the phrase, so that we cannot express Xs as "kinds" of X.
• If X is a class then Xs means instances of X.
Basic kinds
The following table shows basic kinds of objects using the kind-of relation and helix classes.

    kind of BasicObject    Pure instances
    kind of Object
    kind of Module         Modules
    kind of Class          Classes and Eigenclasses

The Module subtable provides the following nomenclature of includers:

    Classes ⊂ Classives ⊂ Includers ⊃ Modules

where
• Classes == Classives − Eigenclasses,
• Classives == Non-terminals,
• Includers == instances of Module,
• Modules == Includers − Classives.
Note in particular that this explains the relationship between classes and modules:
• A class is not a module and vice versa. (Modules ∩ Classes == ∅.)
• Every class is kind of Module. (Classes ⊂ Includers.)
• No module is kind of Class. (Modules ∩ Classes == ∅.)
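The kind-of relation and the HM descendancy between includers are observable via Ruby's `kind_of?` and `Module#<`; a minimal sketch:

```ruby
module M; end
class A; include M end
a = A.new

# kind-of: a.ec ≤ M holds via the inclusion of M into A:
raise unless a.kind_of?(M)
raise unless a.singleton_class.ancestors.include?(M)
# strict descendancy between includers is exposed as Module#<:
raise unless A < M
# Every class is kind of Module; no module is kind of Class:
raise unless A.kind_of?(Module)
raise unless !M.kind_of?(Class)
# Inclusion into A affects A's instances, not the class A itself:
raise unless !A.kind_of?(M)
```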
Objects versus BasicObjects
Objects that are not kind of Object are considered blank slate objects. Since every object is kind of BasicObject, BasicObjects == Objects ⊎ (blank slate objects).
An S2 structure example
The following picture shows an example of an S2 structure in a 3D perspective. The structure can be created as follows.
class S < String; end; class A; end; class B < A; end; class X < Module; end
module R; end; module M; end; N = X.new
s = S.new; i = 20; j = 30; k = 2**70; b = B.new
class BasicObject; include ::R end
class B; include M end
class << B; include M end
class X; include M end
class << X; include M end
module N; include M, Comparable end
class << s; include N end
The code first builds classes (S, A, B, X), then modules (R, M, N), then non-includers (pure instances s, i, j, k, b). Finally, inclusion lists are created. The diagram shows that inclusion lists refine the sc-inheritance. This refined structure is referred to as MRO in the next section. Ancestor lists in the structure can be reported (without eigenclasses) by the following code.
class Object; def ec; singleton_class rescue self.class end end
%w{s i j k b}.each { |x| puts "%s.ec: %s" % [x, eval(x).ec.ancestors] }
%w{B.ec X.ec N X}.each { |x| puts "%s: %s" % [x, eval(x).ancestors] }
• In the example, inclusion lists of built-in classes and modules (and their eigenclasses) are not modified except for the inclusion of R into BasicObject.
• The red outline indicates that R (more precisely, the pair (BasicObject, R)) is the root in the example's MRO structure.
• Inclusions into BasicObject are untypical. Our inclusion has been made to show that BasicObject is not necessarily the MRO root.
• R can be superseded by a new root after e.g.
module R; include (R = Module.new) end
class BasicObject; include ::R end
p Object.ancestors #--> [Object, Kernel, BasicObject, R, R::R]
The method resolution order (MRO)
MRO domain
We define the method resolution order (MRO) as a relation between elements of Μ.
The domain Μ consists of the following two disjoint sets of elements:
1. Pairs (x,x) with x being an includer ("includer elements").
2. Pairs (x,y) with y being a member of x.incs ("iclass elements", y is necessarily a module).
Informally, we just equipped each member of an inclusion list with the context of its includer, making the includee into a unique "inclusion class".
MRO .super
The MRO super or MRO parent is a partial function .super defined on the MRO domain Μ as follows:
1. For pairs (x,x),
   1. (x,x).super == (x, x.incs[0]) if x.incs is nonempty, else
   2. (x,x).super == (x.sc, x.sc) if x.sc is defined, else (x,x).super is undefined.
2. For pairs (x,y) with x different from y,
   1. (x,y).super == (x, x.incs[i+1]) if y equals x.incs[i] for some i < x.incs.length-1, else
   2. (x,y).super == (x.sc, x.sc) if x.sc is defined, else (x,y).super is undefined.
As usual, we denote by (x,y).super(i) the ith application of .super to (x,y).
The method resolution order (Μ, ≤) is defined by: (x,y) ≤ (a,b) iff
• x < a and a is not a module (equivalently, x.sc(i) equals a for some i > 0), or
• x == a and y appears before b in x.incs, or
• x == a == y.
1. (x,y) ≤ (a,b) iff (x,y).super(i) equals (a,b) for some i, i.e. (Μ, ≤) is the reflexive transitive closure of (Μ, .super).
2. The method resolution order is an algebraic forest.
3. MRO consists of two types of components:
   1. (A) There is a unique component containing (r,r), called the MRO tree. Its root equals either (r,r) or (r, r.incs.last).
   2. (B) All the remaining components are chains corresponding to inclusion lists of a module includer extended by the includer itself.
MRO ancestor lists
We denote by .ancs the function from the set of includers to the set of lists of includers defined by
1. x.ancs[i] == y iff (x,x).super(i) == (w,y) for some w.
If x.ancs[i] is defined then it is said to be the ith (MRO) ancestor of x.
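The .super walk can be sketched as a plain Ruby function over explicitly supplied own inclusion lists and superclasses (both hypothetical inputs here, since Ruby exposes neither "own" inclusion lists nor this walk directly; pairs are taken as (includer, includee)). The result can then be compared against Ruby's real `ancestors`:

```ruby
# Sketch of the MRO .super walk, collecting the second components
# of the visited (includer, includee) pairs, i.e. the .ancs list.
def mro_ancs(x, incs, sc)
  pair = [x, x]
  out  = []
  while pair
    w, y = pair
    out << y
    pair =
      if w == y                                  # includer element (x,x)
        if (l = incs[w] || []).any? then [w, l[0]]
        elsif sc[w]                 then [sc[w], sc[w]]
        end
      else                                       # iclass element (w,y)
        l = incs[w]
        i = l.index(y)
        if i < l.length - 1 then [w, l[i + 1]]
        elsif sc[w]         then [sc[w], sc[w]]
        end
      end
  end
  out
end

module M; end
module N; end
class P; include N end
class Q < P; include M end

# Model of the own inclusion lists and superclasses involved:
incs = { Q => [M], P => [N], Object => [Kernel] }
sc   = { Q => P, P => Object, Object => BasicObject }

raise unless mro_ancs(Q, incs, sc) == Q.ancestors
```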
The Ruby .ancestors built-in method filters out eigenclasses: For each class or module x, x.ancestors equals x.ancs without eigenclasses.
Proposition: Let x, y be includers.
1. x ≤ y iff x.ancs contains y.
2. □ x.ancs == [x] + x.incs + x.sc.ancs if x.sc is defined,
   □ x.ancs == [x] + x.incs otherwise (i.e. if x is a module or equals r).
3. (x,y) is in Μ iff y is in x.ancs and only modules appear before the first occurrence of y in x.ancs.
4. (x,y) is in ℍ iff y is in x.ancs and is not a module.
5. For any non-terminal x, the inheritance ancestor list x.hancs is obtained from x.ancs by removing modules.
6. If x is not an eigenclass then x.ancs does not contain an eigenclass (i.e. x.ancs == x.ancestors).
Note that by (2) and (4), an S2 structure can be uniquely specified by a member set with (.sc, .incs) replaced by (.ancs, .module?) where .module? is a boolean attribute indicating whether an object is a module.
A note about terminology
The method resolution order / MRO denomination used for (Μ, ≤) reflects the fact that the relation is used in Ruby's method resolution. However, only the MRO tree is used in this respect. On the other hand, qualified constant resolution uses all components of (Μ, ≤) so that the term "qualified constant resolution order / QCRO" would be more appropriate. The reason we have chosen MRO is that this term has already been established in the Python programming language.
S2 transitions
The following rules apply:
(S2-T~1) State transitions S → S' are substructural in the S2 structure.
• The metamodule root m is fixed.
• Inclusion lists are modified by insertions, i.e. for every class or module x, x.incs is a (possibly non-contiguous) sublist of x.incs'.
Consequently, transitions preserve being a module.
1. Transitions preserve HM descendancy, MRO, includer-of and kind-of relations in the following weak sense:
□ If x ≤ y in S then x ≤ y in S'.
□ Similarly for the includer-of and kind-of relations.
2.
Transitions do not preserve the above relations in the strong sense in general - the relations in S are not necessarily restrictions of their counterparts in S'. In particular, for an includer x, □ the MRO parent (x,x).super can differ between S and S'. (Note that it is in contrast to the superclass (partial) function .sc.) Module inclusion A single module inclusion is a transition S → S' with two parameters: • A requested includer p. • A requested includee q. Transition request is “accepted” iff the following holds: • (a) q is a module – otherwise an error wrong argument type is raised. • (b) p does not occur in [q] + q.incs (== q.ancs) – otherwise an error cyclic include detected is raised. The structure S' equals S (possibly) except for the inclusion list p.incs' which is defined as follows. Denote A = p.incs & ([q] + q.incs) so that A = [a[1], …, a[n]] is the (possibly empty) list of common elements of p.incs and [q] + q.incs, ordered according to p.incs. Then the p.incs' list is defined as the concatenation of lists l[0], …, l[n] where • l[0] = (s[0] - p.ancs) + t[0] where s[0] and t[0] are maximum prefixes (initial sublists) of [q] + q.incs and p.incs, respectively, that are disjoint from A. • l[i] = [a[i]] + (s[i] - p.ancs) + t[i], i = 1, …, n, where [a[i]] + s[i] and [a[i]] + t[i] are maximum slices (interval sublists) of [q] + q.incs and p.incs, respectively, that are disjoint from A - [a[i]]. This means that includee chunks s[i], without any modules that appear along the p.ancs ancestor chain (note that this can only happen if p is a class or eigenclass, for modules p the “subtraction” is already made by excluding elements from A) are prepended before the start for i == 0 and inserted after a[i] for 1 ≤ i ≤ n. In most cases, A is empty, so that • p.incs' == [q] + (q.incs - p.ancs) + p.incs, i.e. q's inclusion list, without the modules already appearing in p.ancestors, is prepended to p's inclusion list. 
Subsequently, the single element list [q] is prepended. The following example illustrates the above description.
T0, T1, T2, S0, S1, S2, A1, A2 = Array.new(8).map { Module.new }
module P; include T0, A1, T1, A2, T2 end
module Q; include S0, A2, S2, A1, S1 end
module P; include Q end
p P.included_modules #--> [Q, S0, T0, A1, S1, T1, A2, S2, T2]
Inclusion methods
The standard way to perform module inclusion is via the include method of Module, e.g. class X; include M end. The extend method of Kernel provides a shorthand for inclusion into eigenclasses: x.extend(M) is roughly equivalent to class << x; include M end. Each of include and extend might be considered "outer" methods of the form
• <outer-method> == <inner-method> + <hook-method>
so that there are actually 4 different methods of module inclusion together with 2 "hooks":

    eigenclass shift →   0                 1
    Outer method         include           extend
    Inner method         append_features   extend_object
    Hook                 included          extended

The following code demonstrates the differences.
module M; end
class << M
  def append_features(x); puts "inner: #{self} as includee of #{x}"; super end
  def included(x); puts "hook: #{self} as includee of #{x}" end
  def extend_object(x); puts "inner: #{self} as extender of #{x}"; super end
  def extended(x); puts "hook: #{self} as extender of #{x}" end
end
class X; end
x = X.new
class << x; include M end
# inner: M as includee
# hook: M as includee
x.extend(M)
# inner: M as extender
# hook: M as extender
• Outer methods have the includer as a receiver.
• Inner methods have the includee as a receiver.
• Outer methods allow for multiple arguments. As already demonstrated in the previous subsection, the argument order is inverse to the performed inclusion order so that the argument order "coincides" with the inclusion list order.
Inclusion anomalies
According to the definition, module inclusion allows anomalies known as the double inclusion problem or the dynamic inclusion problem. In particular:
1.
Module inclusion is not necessarily commutative.
2. The includer-of relation can be non-transitive, i.e. HM descendancy is not transitive in general.
3. The includer-of relation can have cycles, i.e. HM descendancy is not antisymmetric in general.
4. MRO ancestor lists can be repetitive (in contrast to sc-inheritance ancestor lists).
• An example code of module inclusion non-commutativity. The same two inclusions are performed in two different orders:
module A; end; module B; end; module C; end
# Order 1:
module A; include B end
module B; include C end
p A.included_modules #=> [B]
# Order 2 (with fresh A, B, C):
module B; include C end
module A; include B end
p A.included_modules #=> [B, C]
• An example code of non-transitive inclusion. A non-transitive triple (A,B,C) is created. (The first order of the previous example.)
module A; end; module B; end; module C; end
module A; include B end
module B; include C end
puts "%p, %p, %p" % [A < B, B < C, A < C] #=> true, true, nil
• An example code of an inclusion cycle.
module L; end
module M; include L end
module A; end
module L; include A end # M does not know about this
module A; include M end # inclusion cycle A < M < L < A created
puts "#{A.include? L}, #{L.include? A}" #=> true, true
puts "#{A < L}, #{L < A}" #=> true, true
• An example code of a repetitive ancestor list.
module M; end
class X; end
class Y < X; include M end
class X; include M end
p Y.ancestors # [Y, M, X, M, Object, Kernel, BasicObject]
V1: Value domain
A V1 structure is of the form (O, Φ, Φ[A], FALSE, TRUE, NULL, UNDEF, ℤ, ℱ, ℬ, .φvalue) where
• O is the already introduced set of objects.
• Φ is a value domain, disjoint from the set O of objects.
• Φ[A] is a subset of Φ; elements of Φ[A] are called atomic.
• FALSE and TRUE are distinct elements of Φ[A], having the semantics of boolean values.
• NULL and UNDEF are distinct elements of Φ[A], indicating undefinedness.
• ℤ is a subset of Φ[A] representing the set of integer numbers.
• ℱ is a subset of Φ[A] (disjoint from ℤ) representing the set of floating point numbers.
• ℬ is a subset of Φ[A].
It is a set of bytestrings which are finite sequences of bytes. A byte can be considered a copy of an integer within the 8-bit range 0, …, 255. Bytes corresponding to the 7-bit range of 0, …, 127 are called ascii. We assume the usual free monoid structure (ℬ, +, '') where
□ + is bytestring concatenation,
□ '' is the empty bytestring.
• .φvalue is a partial function from O to Φ assigning objects their semantic value. Objects x for which x.φvalue is defined are called φvalued.
The following condition is required:
(V1~1) Tuples of atomic elements belong to the value domain, i.e. all finite products Φ[A] × Φ[A] × ⋯ × Φ[A] are subsets of Φ.
• (V1~1) allows certain objects to be φvalued:
□ Symbols and Strings are φvalued by pairs from ℬ × ℬ (via estrings).
□ Rationals are φvalued by pairs from ℤ × ℤ.
S3: Immediate values
An S3 structure is an S2 structure equipped with the V1 structure and with the following structures:
• (A) (FalseClass, TrueClass, NilClass, false, true, nil) where
□ The first 3 members are classes, direct descendants of Object.
□ The latter 3 members are their respective direct instances.
• (B) (Fixnum, Integer, Numeric) where
□ Fixnum, Integer and Numeric are classes such that Fixnum < Integer < Numeric < Object is a direct inheritance chain. Instances of Fixnum are called fixnums.
• (C) (Symbol, Υ) where
□ Symbol is a class, a direct descendant of Object.
□ Υ denotes the set of Symbol instances (Symbols). Elements of Υ are called symbols.
The terminals false, true, and nil together with fixnums and symbols are called immediate values. The structure is subject to the following axioms:
(S3~1) Immediate values are φvalued according to the following table:
(S3~2) The restriction of .φvalue to the set of all immediate values is injective.
(S3~3) The terminals false, true, and nil are the only instances of their respective classes.
(S3~4) Classes of immediate values are quasifinal.
• (S3~2) means that identity of immediate values is given by their φvalues.
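These axioms can be observed in current MRI; a minimal sketch (assuming modern Ruby, where Fixnum has been folded into Integer, so Integer stands in for Fixnum below):

```ruby
# (S3~2): identity of immediate values is given by their φvalues,
# so equal literals denote the very same object.
raise unless 42.equal?(42)
raise unless :foo.equal?(:foo)
raise unless nil.equal?(nil)

# (S3~4): classes of immediate values are quasifinal -- a subclass
# can be created, but it cannot be instantiated.
failed = begin
  Class.new(Integer).new
  false
rescue StandardError  # NoMethodError/TypeError depending on version
  true
end
raise unless failed
```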
(S3-T~1) .φvalue is preserved on immediate values.
S4: Actuality
An S4 structure is an S3 structure equipped with (.actual?) where
• .actual? is a boolean attribute of objects. Objects with .actual? set to true are actual(s), otherwise they are non-actual(s).
We also provide a set-alternative for .actual? by denoting by O[a] the set of all actual objects. The structure is subject to the following axioms:
(S4~1) Only finitely many objects can be actual.
(S4~2) Only eigenclasses can be non-actual.
(S4~3) .ce preserves actuals. For every eigenclass x, if x is actual then x.ce is actual.
(S4~4) .sc preserves actuals. For every eigenclass x, if x is actual then x.sc is actual.
(S4~5) If r.ec(i) is actual then so is r.class.ec(i).
(S4~6) r.class.ec, the Class's eigenclass, is actual.
(S4~7) Only actuals can be included or have includees.
(S4~8) Eigenclasses of immediate values are non-actual.
Conventional extent(s)
We say that actuality has
• conventional extent, or simply that actuality is conventional, if O[a] ⊆ O[₀₁], i.e. if only primary objects and (some) first eigenclasses are actual (equivalently, eigenclasses of eigenclasses are not actual),
• (the) Smalltalk extent, if O[a] equals the set (primary objects) ⊎ ((first) eigenclasses of classes). Note that this is a special case of conventional actuality.
We can also speak about minimal extent if O[a] == (primary objects) ⊎ ((first) helix eigenclasses).
Actual lists
For a primary object x, we denote x.actuals the list corresponding to the finite .ec-subchain of actual objects starting at x. Note that under conventional actuality, x.actuals.length ≤ 2 for every primary object x.
Helix actuals
Axioms (S4~5) and (S4~6) can be equivalently stated as follows.
(S4~5) Helix actual lists are equally sized.
(S4~6) Eigenclasses of helix classes are actual.
• The metaclass r.class.actuals.last is the least actual helix object in the inheritance.
• Under conventional actuality, helix actual lists are of length 2 (r.actuals.length == 2).
The actualclass map
The .aclass function maps objects to actual non-terminals (classes or actual eigenclasses). For every object x, it is defined recursively by
1. x.aclass == x.ec if x.ec is actual, else
2. x.aclass == x.ec.sc (== x.class) if x is terminal, else
3. x.aclass == x.sc.aclass.
We call x.aclass the actualclass of x.
1. For every actual object x,
   1. x.aclass equals the first actual member of x.ec.hancs,
   2. (x.class ==) x.ec.hancestors[0] == x.aclass.hancestors[0].
2. The .aclass map forms an algebraic tree – the actualclass tree.
3. The root of the actualclass tree equals r.class.actuals.last.
   □ Under conventional actuality, it equals Class.ec.
4. The depth of the tree equals r.actuals.length + 1 (3 under conventional actuality).
Examples of possible .aclass chains of a terminal a with a.class == A < Object, depending on which eigenclasses are actual:
a → A → Object.ec → Class.ec
a → A → A.ec → Class.ec
a → a.ec → A.ec → Class.ec
a → a.ec → Object.ec → Class.ec
The .ec ≤ .aclass ≤ .class refinement
The proposition from the previous subsection shows that the actualclass map, .aclass, can be considered a refinement of the class map, .class. Further observation shows that .aclass can be considered a coarsement of the eigenclass map, .ec, so that we have the following refinement chain: .ec ≤ .aclass ≤ .class. This can be stated precisely as follows. (Recall that the HM descendancy ≤ restricted to non-terminals equals the inheritance ℍ which is an algebraic tree.)
1. For every object x, x.ec ≤ x.aclass ≤ x.class. (The maps .ec, .aclass and .class form a chain when ordered pointwise by the HM descendancy.)
2. For every object x,
   x.ec is the first member of x.ec.hancs,
   x.aclass is the first member of x.ec.hancs that is actual,
   x.class is the first member of x.ec.hancs that is a class.
3. Each map .f among .ec, .aclass and .class is monotone with respect to the inheritance ℍ, i.e. for every non-terminals x, y, x ≤ y implies x.f ≤ y.f.
4. For every object x and every i > 0, x.ec(i) ≤ x.aclass(i) ≤ x.class(i).
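Since actuality is an implementation-level notion not exposed by Ruby itself, the .aclass recursion can be sketched over a toy model (Node, its fields and the aclass function are all hypothetical names for illustration, not Ruby built-ins):

```ruby
# Toy model of S4 objects: each node records its eigenclass (ec),
# superclass (sc), actuality flag and terminality flag.
Node = Struct.new(:name, :ec, :sc, :actual, :terminal)

# The .aclass recursion from the text, case by case.
def aclass(x)
  return x.ec    if x.ec.actual  # 1. x.ec if x.ec is actual
  return x.ec.sc if x.terminal   # 2. x.ec.sc (== x.class) if x is terminal
  aclass(x.sc)                   # 3. x.sc.aclass otherwise
end

# A tiny configuration: class A with a non-actual eigenclass,
# Class.ec actual, and a terminal a of A with a non-actual eigenclass.
class_ec = Node.new("Class.ec", nil, nil, true, false)
klass    = Node.new("Class", class_ec, nil, true, false)
a_ec     = Node.new("A.ec", nil, nil, false, false)
cls_a    = Node.new("A", a_ec, klass, true, false)
t_ec     = Node.new("a.ec", nil, cls_a, false, false)
term     = Node.new("a", t_ec, nil, true, true)

raise unless aclass(term)  == cls_a     # terminal: .aclass == .class
raise unless aclass(cls_a) == class_ec  # class with non-actual ec: climb .sc
```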
In order to describe how the .aclass map is related to the MRI 1.9 implementation, we introduce a function .saec, meaning semiactual eigenclass, partially defined for primary objects as follows:
1. x.saec is defined iff x is
   1. a class or
   2. a terminal such that x.actuals.length > 2.
2. If x.saec is defined then it equals x.actuals.last.ec.
Eigenclasses from the range of .saec are called semiactual(s). Thus, semiactuals are (right) covers of actual lists except for 1-2 sized actual lists starting with terminals – such lists have no semiactual cover.
We also introduce a 3-valued attribute .actuality which is defined on all objects according to the table below.

    Object set     x.actuality
    non-actuals    0
    semiactuals    1
    actuals        2

An extra condition is imposed by MRI:
(S4X~1) For every terminative eigenclass x, if x is actual or semiactual then x.sc.ec is actual or semiactual.
The .klass map
The .klass map provides a "virtual connection" to an object's eigenclass. We consider two versions:
1. (A) Restricted version: .klass is a partial map between actual objects. For every actual object x,
   1. (1) x.klass == x.ec if x.ec is actual, else
   2. (2) x.klass == x.ec.sc (== x.class) if x is terminal, else
   3. (3) x.klass is undefined.
2. (B) Full version: .klass is a map between actual or semiactual objects defined according to the MRI 1.9 implementation. For every actual or semiactual object x,
   1. (1) x.klass == x.ec if x.ec is actual or semiactual, else
   2. (2) x.klass == x.ec.sc (== x.class) if x is terminal, else
   3. (3) x.klass == x.sc.ec if x.pr is terminal and x is actual or semiactual, else
   4. (4) x.klass == c.ec(x.eci) (if x.pr is a class and x semiactual – the value is probably unimportant).
1. In its restricted version, the .klass map is a restriction of both the .aclass map and the full version of .klass.
2. The restricted version of .klass is a generator of .aclass in the following sense: For every actual x,
□ x.aclass == x.klass if x.ec is actual or x terminal,
□ x.aclass == x.sc.aclass otherwise.
State transitions S → S' are incremental in the S4 structure in the following sense. For every object x from O ∩ O',
1. if x is actual in S then it is so in S'.

Eigenclass actualization

An eigenclass actualization is a transition S → S' with a single parameter
• x – an actual object whose eigenclass x.ec is requested to be made actual.
The structure S' equals S (possibly) except for the set of actual objects. The difference between the sets of actual objects, denoted x.acdelta, is defined as follows (all functions are taken in S).
1. (A) With (S4X~1) imposed:
   1. (1) x.acdelta == [] if x.ec is actual, else
   2. (2) x.acdelta == [x.ec] if x is terminal, else
   3. (3) x.acdelta == [x.ec] + x.sc.acdelta if x.pr is a non-helix class, else
   4. (4) x.acdelta == r.class.hancs.map{|c| c.ec(x.eci+1)} if x.pr is a helix class, else
   5. (5) x.acdelta == [x.ec] + x.sc.ec.acdelta if (x.pr is terminal and) x.eci != 1, else
   6. (6) x.acdelta == [x.ec] + x.sc.acdelta + x.sc.ec.acdelta (if x is the eigenclass of a terminal).
2. (B) Without (S4X~1):
   1. (1)–(4) as in (A).
   2. (5) x.acdelta == [x.ec] + x.sc.acdelta (if x.pr is terminal) – the same prescription as in (3) applies.

Proposition: In case (B), x.acdelta, as a set, equals the union of the following (possibly empty) sets Δ1 and Δ2 (all functions are taken in S):
1. Δ1 equals x.ec.hancs - x.aclass.hancs.
2. Δ2 equals
   1. r.class.actuals.last.ec.hancs - y.hancs if y is the .sc-least helix object of Δ1, or
   2. the empty set, if Δ1 does not contain a helix object.
Informally, we first actualize eigenclasses up to x.aclass and then equalize helix actual lists.

The following examples of Ruby code show several ways of x's eigenclass actualization.
1. (A) Opening or explicitly referencing x.ec.
   1. (1) class << x; end
   2. (2) x.singleton_class
2. (B) Extending x.ec's inclusion list (or attempting an extension).
   1. (1) x.extend(Module.new)
   2. (2) x.extend(Kernel) (including an already included module)
3.
(C) Defining x.ec's own method (aka “singleton method of x”)
   1. (1) def x.dummy; end
   2. (2) x.instance_eval { def dummy; end }
   3. (3) x.define_singleton_method("", lambda{})
   4. (4) x.module_function(:m) (if x is a module having own method m)

• None of (A)–(C) is applicable to immediate values (an error “can't define singleton” is raised).
• In addition, (C) is not applicable to Numerics that are not Fixnums (an error “can't define singleton method” is raised).

The built-in T_CLASS counter

As of MRI 1.9, ObjectSpace.count_objects[:T_CLASS] counts the following objects in total:
1. classes,
2. actual eigenclasses, and
3. semiactual eigenclasses.
It means in particular that defining a class increases the counter by 2: one for the class and one for its semiactual eigenclass:

module Kernel
  def nnt_delta # number of non-terminals delta
    @nnt ||= 0
    nnt = ObjectSpace.count_objects[:T_CLASS]
    delta = nnt - @nnt
    @nnt = nnt
    delta
  end
  def nnt_delta_report; puts "nnt_delta: #{nnt_delta}" end
end
nnt_delta_report  # nnt_delta: 387
class X; end
nnt_delta_report  # nnt_delta: 2

Proposition: Assuming (S4X~1), for every actual x which is not an immediate value, x.acdelta.length equals
1. nnt_delta - 1 if x.eci == 1, x.pr is terminal and x.pr.actuals.length == 2,
2. nnt_delta otherwise.

S5: Includer containment

An S5 structure is an S4 structure equipped with (.cparent, .cname) where
• .cparent is a partial function between includers,
• .cname is a partial function from includers to symbols.
We introduce the following additional notation / terminology.
• The reflexive transitive closure of .cparent is called includer containment or just containment and denoted Ϲ.
• x.cparent, if defined, is called the containment parent of x.
• x.cname, if defined, is called the (containment) name of x.
• Includers with a containment name are named, the remaining are anonymous.
• Includers without containment parents are containment roots.
• x.cparent(i) denotes the i-th application of .cparent to x.
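The named/anonymous distinction is directly reflected by Module#name in current MRI: anonymous classes and modules (containment roots without a name) and eigenclasses report nil:

```ruby
m = Module.new
p m.name                       # nil – an anonymous module is a containment singleton
p Object.singleton_class.name  # nil – eigenclasses are anonymous
p String.name                  # "String" – a named includer
```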
An S5 structure is subject to the following axioms:
(S5~1) The containment Ϲ is an algebraic forest (with .cparent undefined on roots).
(S5~2) There is exactly one containment root x such that x.cname is defined, namely the Object class.
(S5~3) If x.cparent is defined then x.cname is defined. (Non-roots are named.)
(S5~4) For every eigenclass x, x.cname is undefined. (Eigenclasses are anonymous, therefore roots.)
(S5~5) Anonymous classes and modules have no containment descendants. (Thus, being roots, they are containment singletons.)
(S5~6) Non-actuals have no containment descendants. (Thus, being roots, they are containment singletons.)
(S5~7) Containment names are constant names. (This statement comes into effect as soon as the meaning of being a “constant name” is defined.)
(S5~8) If x.cparent is Object then x.cname differs from Object.cname.

Proposition: Components of the containment forest are trees of the following 3 types:
1. (A) The unique “main” tree rooted at Object.
2. (B) Single-element trees of anonymous classes or modules.
3. (C) Trees rooted at eigenclasses (usually also of cardinality 1).

• There is some similarity between the MRO and includer containment. Both forests consist of a single main tree and zero or more “offshoots”.
  □ In the MRO case, the “offshoots” are chains corresponding to module-in-module inclusion lists.
  □ In the containment case, the “offshoots”, if any, are usually of minor importance.

Containment ancestors

Similarly to inheritance ancestor lists, we denote by x.cancs the list of containment ancestors of x, i.e.
1. x.cancs is defined iff x is an includer,
2. x.cancs[i] == y iff x.cparent(i) == y.
We also denote
1. x.croot the containment root of x, x.croot == x.cancs.last,
2. x.cpathname the (proper) containment path-name of x, defined by
   x.cpathname == (x.cancs - [x.croot]).reverse.map{|y| y.cname}.join("::"),
   i.e. non-root ancestor names are reversed and joined by "::".
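In current MRI, x.cpathname corresponds to what Module#name returns for containment descendants of Object, and a containment binding can be observed when an anonymous class is assigned to a constant:

```ruby
module A
  class B; end
end
p A::B.name   # "A::B" – non-root ancestor names joined by "::"

q = Class.new
p q.name      # nil – anonymous so far
A::Q = q      # containment binding: container A, name "Q"
p q.name      # "A::Q"
```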
Note that for every containment root x (in particular, Object), x.cpathname is an empty string.
The following rule applies:
(S5-T~1) State transitions S → S' preserve x.cparent and x.cname whenever x.cparent is defined.

A single containment binding is a transition S → S' with three parameters:
• A requested container p which is an actual includer.
• A requested containment name n.
• A requested containee q which is an anonymous class or module to be named by n.
The following table shows examples of Ruby code with “requests” for containment binding.
• (A) Accepted requests with p being a named class or module.
• (B) Rejected (ignored) requests with p being an anonymous class or module.
• (C) Requests with p being an eigenclass.
  □ (C1) Accepted request: class definition in an opened eigenclass.
  □ (C2) Rejected (ignored) request: qualified constant assignment does not work for eigenclasses (as of Ruby 1.9).

    Group  Container p  Name n  Ruby code                         (after transition)
                                                                  q.croot   q.cpathname
    A      Object       "Q"     class Q; end                      Object    Q
           X            "Q"     class X; class Q; end end         Object    X::Q
           X            "Q"     X::Q = Class.new                  Object    X::Q
    C1     X.ec         "Q"     class << X; class Q; end end      X.ec      Q
                                q = Class.new
    C2     X.ec         "Q"     X.singleton_class::Q = q          unchanged
    B      x            "Q"     x::Q = q                          unchanged
           x            "Q"     x.class_eval { self::Q = q }      unchanged

Object relation summary

    Notation / Expression  Terminology / Description  Domain         Generating map                   Relation characteristics
    ℍ == (Classes, ≤)      inheritance                non-terminals  .sc                              algebraic tree
    ℙ                      primorder                  all objects    .ec                              component-wise isomorphic to the
                                                                                                      linear order of natural numbers
    ℍ ↾ (classes)          self-or-subclass-of        classes        .sc ↾ (classes)                  finite algebraic tree
    Μ                      self-or-own-includer-of    includers      —                                reflexive and antisymmetric
    ≤ == (ℍ ◦ Μ) ∪ Μ       HM descendancy             includers      —                                reflexive
    (Μ, ≤)                 MRO                        Μ              .super                           algebraic forest
                           instance tree              all objects    .class (aka direct-instance-of)  algebraic tree of depth 2
    .class ◦ ℍ             instance-of                all objects    —                                complete (any-to-any) on helix classes,
                                                                                                      irreflexive and antisymmetric otherwise
    .ec ◦ ≤                kind-of                    all objects    —
                           actualclass tree           all objects    .aclass                          algebraic tree (of depth 3 under
                                                                                                      conventional actuality)
    Ϲ                      includer containment       includers      .cparent                         algebraic forest

S6: Frozenness, taintedness and trust

An S6 structure is an S5 structure equipped with (.frozen?, .tainted?, .trusted?) where
• .frozen?, .tainted? and .trusted? are boolean attributes of objects.
According to these attributes, objects can be frozen/non-frozen, tainted/untainted and trusted/untrusted.
The structure is subject to the following axioms:
(S6~1) If x is frozen then x.ec is frozen.
(S6~2) If x is tainted then x.pr is tainted. (.ec-chains are tainted as a whole.)
(S6~3) If x is trusted then x.pr is trusted. (.ec-chains are trusted as a whole.)
(S6~4) Immediate values are trusted.
(S6-T~1) Transitions S → S' preserve being frozen, i.e. if x is frozen in S then it is so in S'.
(S6-T~2) Inclusion lists of frozen includers cannot be extended.
(S6-T~3) Transitions preserve taintedness of frozen objects.
(S6-T~4) Transitions preserve trust of frozen objects.
(S6-T~5) Transitions preserve .φvalue of frozen objects.
Transitions affecting frozenness and taintedness are accomplished via the freeze and taint/untaint methods, respectively.

S7: Object cursors

An S7 structure is an S6 structure equipped with (nestlists, selfs) where
• nestlists is a non-empty finite list of (possibly empty) finite lists of actual includers,
• selfs is a non-empty finite list of actual objects.
Additional notation/terminology is introduced:
• The list nestlists.last.reverse is denoted nesting and called the current nesting.
• The object selfs.last is denoted self and called the current object.
• The object selfs[0] is denoted main and called the main context.
The structure is subject to the following axioms:
(S7~1) Classes and modules that appear in nesting are named.
(S7~2) main is a direct instance of Object.
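Both the S6 frozenness axioms and the S7 main context can be observed in current MRI (using TOPLEVEL_BINDING to reach main regardless of the current self):

```ruby
main = TOPLEVEL_BINDING.eval("self")
p main.to_s   # "main"
p main.class  # Object   ((S7~2): main is a direct instance of Object)

s = "abc".dup
s.freeze
p s.frozen?                  # true
p s.singleton_class.frozen?  # true  ((S6~1): x frozen => x.ec frozen)
begin
  s << "d"                   # (S6-T~1): frozenness is permanent, mutation fails
rescue RuntimeError => e     # FrozenError (a RuntimeError subclass) on recent rubies
  p e.class
end
```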
Containment pseudorule

The following condition often happens to be satisfied:
• If nesting[0] exists and is a class or module, then nesting[0].cancs equals nesting.
We call this condition a “pseudorule” because it only reflects a common pattern of a nested class/module definition, e.g.

class A
  class B
    class C
      # current nesting corresponds to A::B::C
    end
  end
end

Writing class ::C instead of class C would break the pseudorule.

(S7-T~1) The main context main remains unchanged across transitions.

Includer opening/closing

There are two (basic) ways in which nesting gets changed: explicit and implicit. An explicit change is performed via a class/module/eigenclass (re)definition. In this case, nesting changes correspond to prepending/removing one object to/from the front of nesting, i.e. they are equivalent to nesting.unshift(y) and nesting.shift, respectively. The unshift operation is includer opening and can have one of the following forms:
• class X for a class X,
• module M for a module M,
• class << x for an eigenclass x.ec.
The shift operation is includer closing and is accomplished by end.

Evoked nesting

An implicit nesting change is obtained via method invocation as demonstrated by the following code:

class X
  class Y; def self.nest1; Module.nesting end end  # def-nesting is [X::Y, X]
  def Y.nest2; Module.nesting end                  # def-nesting is [X]
end
class A
  p Module.nesting  # [A]
  p X::Y.nest1      # [X::Y, X]
  p X::Y.nest2      # [X]
end

The example shows that each method has its own nesting which becomes the current nesting after method invocation. The method's nesting equals the current nesting at the time of method definition.
• IVN is a finite set of internal value names, a subset of the value domain Φ.
• .invals is a function from non-includers to the set IVN ↷ Φ of partial maps from internal value names to the value domain Φ.
The following conditions are required:
(S8~1) ω(a,b) is defined iff the following are satisfied:
1. Both a and b are actual.
2. If a is an includer then (a,b) belongs to the MRO domain.
3. If a is a non-includer (pure instance) then it equals b.
We call elements ω(a,b) with a different from b iclasses (“inclusion-classes”). Condition (S8~1) says that Ω is a copy of all actual MRO domain pairs extended by all pairs (x,x) with x being a pure instance (non-includer). Equivalently, Ω is a copy of the set of all actual objects extended by the set of iclasses.
(Diagram omitted: it shows the correspondence between the “extended actual Μ” – actual MRO pairs plus actual objects – and the data representatives – actual includer representatives plus non-includer representatives.)

Induced maps

We define additional (partial) functions on Ω:
• ω(x,y).module ≝ ω(x,x) whenever ω(x,y) is an iclass.
• ω(x,y).super ≝ ω((x,y).super) whenever (x,y).super (the MRO parent of (x,y)) is defined.
• ω(x,x).func ≝ ω(x.func, x.func) whenever .func is a (partial) function on O and x.func is defined.
• ω(x,x).func ≝ x.func whenever .func is a (partial) function from O to Φ and x.func is defined.

Data fields

Because ω-objects are identifiable with OIDs we can regard functions defined on Ω as data fields. The following table presents a distinguished data-field subset.

    Field     Domain     Applicability                  Description             Relevant field(s) in
    name      (type)     non-incl  includer  iclass                             Ruby implementation
    oid       OID           ●         ●        ●       ω-object id
    super     OID                     ●        ●       MRO parent              super
    ce        OID                     ●                eigenclass predecessor  __attached__
    klass     OID           ●         ●                actualclass generator   klass
    module    OID                              ●       iclass originator       klass
    cparent   OID                     ●                containment parent
    cname     “string”                ●                containment name        __classid__ and/or __classpath__
    frozen?   boolean       ●         ●                frozenness              part of flags
    tainted?  boolean       ●         ●                taintedness             part of flags
    trusted?  boolean       ●         ●                trust                   part of flags
    type      ENUM_T        ●         ●        ●       basic type
    invals    IVN ↷ Φ       ●                          internal value(s)

1. The underline in oid indicates a “primary key”.
2. There is no equivalent of cparent in Ruby's implementation.
3. The quotation marks in “string” indicate that the domain of cnames should be more precisely described as φvalues of Symbols.

The type field indicates the basic type, with the following enumeration domain (we assume that ENUM_T is a subset of Φ):

    Value      Meaning of ω(x,y)
    T_ICLASS   iclass
    T_CLASS    class or eigenclass
    T_MODULE   module
    T_NONINC   non-includer (pure instance)

The ωobjects data table

Using the fields from the previous subsection we obtain an ωobjects data table, with a one-to-one correspondence between rows and object data-representatives:

    oid  super  ce  klass  module  cparent  cname  frozen?  tainted?  trusted?  type  invals

Proposition: The ωobjects data table uniquely determines the S6 structure, up to isomorphism and except for the .φvalue function.

Class nomenclature summary

    Prefix  Notation  Terminology           Description
                      class                 A primary non-terminal object.
    e                 eigenclass            A secondary object.
                      metaclass             A (non-strict) inheritance descendant of the Class class.
                      explicit metaclass    A metaclass that is a class. In Ruby, the Class class is the only
                                            explicit metaclass. The set of explicit metaclasses equals the set of
                                            classes of Classes.
                      implicit metaclass    A metaclass that is an eigenclass. The set of implicit metaclasses
                                            equals the set of eigenclasses of Classes.
    e       .ec       the eigenclass map    A map from objects to eigenclasses.
    s       .sc       the superclass map    A partial map between non-terminal objects.
            .class    the class map         A map from objects to classes. The second application, .class(2), is
                                            constant.
                      direct-subclass-of    A relation which equals the restriction of .sc to classes.
                      subclass-of           The transitive closure of direct-subclass-of.
                      direct-superclass-of, superclass-of
                                            Inverses of direct-subclass-of and subclass-of, respectively.
                                            Not used in this document to avoid terminological conflicts with the
                                            superclass map.
                      direct-instance-of    Equivalent to .class (i.e. x direct-instance-of y iff x.class == y).
                      instance-of           Composition of direct-instance-of and self-or-subclass-of.
                      class-of              Inverse of direct-instance-of.
            Class     (A) the Class class   The instance tree root and, simultaneously, the metaclass tree root.
                                            Denoted c, equal to r.class.
                      (B) a Class           A class or an eigenclass. An instance of the Class class. An object
                                            that is kind-of Class.
                      Classes               Class instances. Classes and eigenclasses.
                      helix classes         Classes that belong to the Ruby helix. Members of c.hancs, i.e. Class,
                                            Module, Object, and BasicObject.
    a       .aclass   the actualclass map   A map from objects to actual eigenclasses or classes. The n-th
                                            application, .aclass(n), is constant, where n equals
                                            r.actuals.length + 1 (n == 3 under conventional actuality).
    i                 iclass                An “inclusion class”, abstraction of an includer-includee pair.
            .klass    the actualclass generator
                                            • (A) A partial map between actual objects.
                                            • (B) A map between actual or semiactual objects.
                                            • (C) An “extension” of (B) to iclasses.
                      singleton class       Equivalent to eigenclass. Not used in this document to avoid a
                                            conflict with the term class.

D0: Estrings

A D0 structure is an S8 structure equipped with (Encoding, ℇ, ℇ[¢], ℇ[$], .name, e8b, e7b, ⅀, ⅀[⋄], ≘) where
• Encoding is a class, a direct descendant of Object.
• ℇ denotes the set of Encodings, called just encodings.
• ℇ[¢] is a subset of ℇ, containing encodings with proper character handling support.
• ℇ[$] is a subset of ℇ[¢], containing ascii-compatible encodings.
• .name is an injective function from Encodings to ascii bytestrings. (Even .name.upcase is injective.)
• e8b is a distinguished ascii-compatible encoding, called binary and named 'ASCII-8BIT'.
• e7b is a distinguished ascii-compatible encoding, called ascii and named 'US-ASCII'.
• ⅀ denotes the set ℬ × ℇ – the set of pairs (s,e) where s is a bytestring and e an encoding.
Elements of ⅀ are called estrings, meaning “encoded strings”.
• ⅀[⋄] is a subset of ⅀, containing valid estrings.
• ≘ is an equivalence relation on the set ⅀[⋄] of valid estrings.
Note that we have already axiomatized the inclusion chain {e8b, e7b} ⊆ ℇ[$] ⊆ ℇ[¢] ⊆ ℇ.
The structure is subject to the following axioms:
(D0~1) ('',e) is a valid estring for every encoding e.
(D0~2) (s,e8b) is a valid estring for every bytestring s.
(D0~3) (s,e7b) is a valid estring iff s is an ascii bytestring.
(D0~4) (s,e7b) ≘ (s,e8b) whenever (s,e7b) is a valid estring.
(D0~5) (s,e) ≘ (t,e) implies s == t, i.e. valid ≘-equivalent estrings with the same encoding are equal.
(D0~6) For every ascii-compatible encoding e and every valid estring (s,e7b),
• (s,e7b) ≘ (t,e) for some (necessarily unique) estring (t,e).
For every encoding e from ℇ[¢] and every bytestrings s, t such that (s,e) is valid,
• (s + t, e) is valid iff (t,e) is valid.
By a leading character of a valid estring (s,e) with non-empty s and e from ℇ[¢] we mean the estring (u,e) such that
• u is non-empty,
• u + v == s for some bytestring v,
• (u,e) is valid,
• u is the smallest bytestring satisfying the previous conditions.
If a, b are the leading characters of x, y, then
• x ≘ y implies a ≘ b.

The following table shows Ruby built-in boolean-attribute correspondents for encoding or estring set-membership.

    Set membership   Terminology / description                        Ruby boolean-attribute reflection
    x ∈ ℇ[¢]         encoding x supports proper character handling    !x.dummy?
    x ∈ ℇ[$]         encoding x is ascii-compatible                   x.ascii_compatible?
    x ∈ ⅀[⋄]         estring x is valid                               x.valid_encoding?
                     estring x is ascii-only                          x.ascii_only?

We also denote
• ⅀[¢] the set of all valid estrings with encoding from ℇ[¢]. We call such estrings char-decomposable.
• ⅀[e] the set of all valid estrings with encoding e.

The .encode() map

The .encode() function maps valid estrings to their ≘-equivalents in given encodings, i.e. it is a partial function from ⅀[⋄] × ℇ to ⅀[⋄] such that
• (s,e).encode(f) == (t,f) iff there is a bytestring t such that (s,e) ≘ (t,f).
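The set memberships above can be checked directly via the listed reflection methods:

```ruby
p "abc".force_encoding("US-ASCII").valid_encoding?     # true  ((D0~3): ascii bytestring)
p "\xFF".force_encoding("US-ASCII").valid_encoding?    # false (a non-ascii byte)
p "\xFF".force_encoding("ASCII-8BIT").valid_encoding?  # true  ((D0~2): every bytestring is valid binary)
p Encoding::US_ASCII.ascii_compatible?                 # true  (e7b is in ℇ[$])
p Encoding.find("UTF-8").dummy?                        # false (UTF-8 is in ℇ[¢])
```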
• For every valid estring (s,e) and every encoding f,
  □ (s,e).encode(f).encode(e) == (s,e) whenever (s,e).encode(f) is defined.

Ascii and ascii-only estrings

Valid estrings x are called
• ascii-only if x.encode(e7b) is defined,
• ascii if they have the e7b encoding, equivalently x == x.encode(e7b).
For an ascii estring x and an ascii bytestring s, we might write x == s for x == (s,e7b). (This is later applied, for instance, for x == 'method_missing'.)

Strict concatenation and character decomposition

The strict concatenation ∔ is a partial binary operator on the set ⅀[¢] of char-decomposable estrings defined by
• (s,e) ∔ (t,f) == (s + t, e) if e == f,
• (s,e) ∔ (t,f) is undefined otherwise.
The character decomposition (split) .chars is a function from ⅀[¢] to finite lists over ⅀[¢] defined recursively by
• (s,e).chars == [] if s is empty, else
• (s,e).chars == [(u,e)] + (v,e).chars where u, v are the unique bytestrings such that u + v == s and u is the smallest non-empty bytestring such that (u,e) is valid ((u,e) is the leading character of (s,e)).
For an estring s from ⅀[¢],
• the length s.length is the length of s.chars,
• s[i] denotes the i-th member of s.chars.
An estring character is an estring of length 1.

1. The structure (⅀[e], ∔, ('',e)) is a free monoid for every encoding e from ℇ[¢].
2. s == s[0] ∔ s[1] ∔ ⋯ ∔ s[n-1] for every char-decomposable estring s of length n.
3. For every encodings e, f from ℇ[¢],
   □ .encode(f) is an isomorphism between (⅀[e], ∔, ('',e)) and (⅀[f], ∔, ('',f)). In particular,
   □ .encode(f) and .chars commute,
   □ .encode(f) preserves .length.

Loose concatenation

We partially specify loose concatenation as a partial binary operator + on valid estrings satisfying the following:
• x + y == x ∔ y whenever x ∔ y is defined,
• ('',e) + x ≘ x ≘ x + ('',e),
• (s,e) + x ≘ (s,e) ∔ x.encode(e) whenever x is ascii-only, similarly
• x + (s,e) ≘ x.encode(e) ∔ (s,e) whenever x is ascii-only.
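The .encode round trip, character decomposition, and the ascii-only clause of loose concatenation correspond to String#encode, String#chars and String#+ in current MRI:

```ruby
s = "abc"
t = s.encode("UTF-16LE")
p t.encoding.name         # "UTF-16LE"
p t.encode("UTF-8") == s  # true – (s,e).encode(f).encode(e) == (s,e)
p "abc".chars             # ["a", "b", "c"]

# Loose concatenation: an ascii-only operand adapts to the other encoding.
u = "héllo" + "!".encode("US-ASCII")
p u.encoding.name         # "UTF-8"
```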
In particular an estring x can “start” with an ascii estring even if x is not ascii.
(D0-T~1) For transitions S → S', all of the following are fixed for encodings and estrings existing both in S and S': the ℇ[¢]- and ℇ[$]- set memberships, validity of estrings, the ≘-equivalence.
(D0-T~2) Encoding names are preserved across transitions.

D1: Strings

A D1 structure is a D0 structure equipped with (String, .estr) where
• String is a class, a direct descendant of Object.
• .estr is a function from Strings and Symbols to the set ⅀ of estrings.
The structure is subject to the following axioms:
(D1~1) Strings and Symbols are φvalued by pairs of bytestrings as follows:
• x.φvalue == (s, e.name) where (s,e) == x.estr.
(D1~2) For every symbol x, the estring x.estr is valid.
(D1~3) For every symbol x, if x.estr ≘ (s,e7b) for some s then x.estr == (s,e7b). (I.e. ascii-only symbols are ascii.)
• (D1~2) does not apply to Strings.
Proposition: The following are consequences of the already introduced conditions:
1. .estr is preserved on symbols.
2. .estr is preserved on frozen strings.

D2: Arrays

A D2 structure is a D1 structure equipped with (Array, ._list) where
• Array is a class, a direct descendant of Object.
• ._list is a function from the set of all Arrays to finite lists of objects.
Arrays are called arrays.
1. Recall that a finite list is a function with the domain being an index set of the form 0, …, n-1 for some natural n.
2. By the introduced terminology, arrays can be considered as maps to lists, but not lists themselves. For instance, there is only one empty list, but possibly many non-identical empty arrays.
Let a be an array and i an integer. Then
• a.length denotes the cardinality of (the domain of) a._list.
• a[i] is defined as follows:
  a[i] == a._list(i) if 0 ≤ i < a.length,
  a[i] == a._list(a.length + i) if -a.length ≤ i < 0,
  a[i] == nil otherwise.
(D2-T~1) Transitions S → S' preserve ._list on frozen arrays, i.e.
if a is a frozen array, then a[i] equals a[i]' and a.length equals a.length'. D3: Hashes A D3 structure is a D2 structure equipped with (Hash, .hcodes, .keys, .values, .compare_by_identity?, .dflt, .dflt_call?, Proc) where • Hash and Proc are classes, direct descendants of Object. Hashes are called hashes. The remaining members are functions on the set of all hashes. For every hash x, • x.hcodes is a finite list of integers called hash codes. • x.keys is a finite list of objects called keys. • x.values is a finite list of objects called values. • These 3 lists are of equal length. • x.compare_by_identity? is a boolean attribute. • x.dflt is an object determining the default value. • x.dflt_call? is a boolean attribute determining whether x.dflt is interpreted indirectly – as the default's value evaluator, or if it is interpreted directly as the default value itself. Every hash x is subject to the following conditions: (D3~1) The list x.hcodes.zip(x.keys) (x.hcodes paired with x.keys) is non-repetitive. (D3~2) If x.compare_by_identity? is true then even the list x.keys is non-repetitive. (D3~3) If x.dflt_call? is true then x.dflt is an instance of Proc. (D3~4) If s is a direct instance of String contained in x.keys and x.compare_by_identity? is false then s is frozen. For a hash x, the triple (x.hcodes, x.keys, x.values) has the following equivalent forms: 1. (A) The “single-list” form. The hash is a list of triples of the form (hash-code, key, value). The triples have unique (hash-code, key) pairs. If x.compare_by_identity? is true then even keys are unique. 2. (B) The “slot” form. The hash is a map from hash-codes (“slots”) to lists of triples of the form (idx, key, value) such that □ idxs are unique not only within “slot”-lists but within the whole hash, □ “slot”-lists respect the idx order (so that ordering in individual “slots” is “induced”), □ keys are unique within “slot”-lists and if x.compare_by_identity? is true then even within the whole hash. 
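Several of the D3 conditions are directly observable in current MRI – the freezing of non-identity String keys (D3~4), identity-based resolution (D3~2 with compare_by_identity), and the two interpretations of the default (dflt / dflt_call?):

```ruby
h = {}
s = "key".dup
h[s] = 1
k = h.keys.first
p k.frozen?    # true  ((D3~4): String keys are frozen)
p k.equal?(s)  # false – a frozen copy serves as the key

id = {}.compare_by_identity
id["a".dup] = 1
p id["a"]      # nil – a different "a" object; resolution is by identity

d = Hash.new { |hash, key| "missing #{key}" }  # dflt_call? true: dflt is an evaluator
p d[:x]        # "missing x"
p Hash.new(0)[:x]  # 0 – dflt_call? false: dflt is the value itself
```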
Example (a hash with 12 entries and hash codes Ⅰ–Ⅳ; the value column is omitted):

    Single-list form        Slot form
    idx  hcode  key         Ⅰ: (3,c) (8,u) (9,a) (10,b)
    0    Ⅲ      m           Ⅱ: (1,a) (2,r) (7,d)
    1    Ⅱ      a           Ⅲ: (0,m) (4,a)
    2    Ⅱ      r           Ⅳ: (5,b) (6,q) (11,g)
    3    Ⅰ      c
    4    Ⅲ      a
    5    Ⅳ      b
    6    Ⅳ      q
    7    Ⅱ      d
    8    Ⅰ      u
    9    Ⅰ      a
    10   Ⅰ      b
    11   Ⅳ      g

(D3-T~1) Transitions S → S' preserve the 6 member maps .hcodes, …, .dflt_call? on frozen hashes.
(D3-T~2) True values of .compare_by_identity? are preserved. (Once being set to true, this boolean attribute cannot be set to false.)

Hash resolution

By an immutable hash resolution context we mean a structure (.hash, .eql?(), .call()) where
• .hash is a function from objects to integers (x.hash is called the hash code of x).
• .eql?() is a boolean-valued function on O × O encoding equality between objects.
• .call() is an object-valued function on Procs × Hashes × O.
For a hash x and an object k we say that i is a resolution key-index of k in x if either of the following is satisfied:
1. (A) x.compare_by_identity? is false and i is the smallest index such that
   1. k.hash == x.hcodes[i] and either k == x.keys[i] or k.eql?(x.keys[i]),
2. (B) x.compare_by_identity? is true and i is the unique index such that
   1. k == x.keys[i].
The (hash) resolution operator [] is an object-valued function on Hashes × O assigning each hash x and each object k a value x[k] as follows:
1. (1) x[k] == x.values[i] if i is the resolution key-index of k in x, else, if there is no such i,
2. (2) x[k] == x.dflt if x.dflt_call? is false, else
3. (3) x[k] == x.dflt.call(x,k).

D4: Numbers

A D4 structure is a D3 structure equipped with (Bignum, Float, Rational, Complex, .real, .imag) where
• Float, Rational and Complex are classes, direct descendants of Numeric.
• Bignum is a direct descendant of Integer. Bignums are called bignums.
• .real and .imag are functions from Complexes to Numerics that are not Complexes.
The structure is subject to the following axioms:
(D4~1) Integers are either Fixnums or Bignums.
(D4~2) Bignum, Float, Rational and Complex are quasifinal (in addition to Fixnum).
(D4~3) Integers are φvalued by the integers ℤ.
(D4~4) Bignums and Fixnums have disjoint .φvalue images.
(D4~5) Floats are φvalued by the floating point numbers ℱ.
(D4~6) Rationals are φvalued by pairs (n,d) from ℤ × ℤ such that
• d is positive, and
• n and d are relatively prime.
(D4-T~1) The .real and .imag maps are preserved on all Complexes.

D5: Ranges

A D5 structure is a D4 structure equipped with (Range, .start, .end, .exclude_end?) where
• Range is a direct descendant of Object. Ranges are called ranges.
• .start and .end are functions from Ranges to objects.
• .exclude_end? is a boolean attribute of Ranges.
(D5-T~1) The .start, .end and .exclude_end? maps are preserved on all Ranges.

Object internal value data

We specify the invals data (sub)fields in the ωobjects data table as follows.

    Applied to           Field name            Domain (type)  Description
    false                value                 {FALSE}
    true                 value                 {TRUE}
    nil                  value                 {NULL}
    Encodings            name                  ℬ              encoding name
                         dummy?                boolean        indicates “stateful” encoding without proper character handling
                         ascii_compatible?     boolean        indicates ascii-compatible encoding
    Symbols and Strings  bytes                 ℬ              byte-sequence together with encoding
                         encoding              “ENUM”
                         valid_encoding?       boolean        indicates whether bytes is a valid sequence w.r.t. encoding
    Arrays               length                ℤ              array length
    Hashes               length                ℤ              hash length
                         compare_by_identity?  boolean        indicates hash resolution mode
                         dflt                  OID            the hash's default object or evaluator
                         dflt_call?            boolean        indicates dflt interpretation mode
    Fixnums              value                 ℤ              the φvalue of a fixnum
    Bignums              value                 ℤ              the φvalue of a bignum
    Floats               value                 ℱ              the φvalue of a float
    Rationals            numerator             ℤ              the first component of the φvalue
                         denominator           ℤ              the second component of the φvalue, needs to be positive
    Complexes            real                  OID            the real-part object
                         imag                  OID            the imaginary-part object
    Ranges               start                 OID            range start
                         end                   OID            range end
                         exclude_end?          boolean        range end exclusion indicator

1. Gray color of a field name indicates that the field value is derived.
2. Array or hash lists are not considered to be part of invals data.
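The Rational normalization (D4~6) and the D5 range attributes are visible through the standard reflection methods:

```ruby
r = Rational(4, -6)
p r.numerator    # -2  ((D4~6): denominator made positive, components coprime)
p r.denominator  # 3

c = Complex(1, 2)
p [c.real, c.imag]      # [1, 2]

p (1...5).exclude_end?  # true
p (1..5).exclude_end?   # false
```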
A0: Lexical identifiers

An A0 structure is a D5 structure equipped with (.idcat) where
• .idcat is a partial function on estrings.
Elements from the domain of .idcat are called (lexical) identifiers; x.idcat is x's identifier category. The .idcat partial function categorizes estrings according to the following table:

(A0~1)
    Estring s                                    s.idcat                         Partition
    s starts with an uppercase letter            'constant-identifier'
    s starts with "@" but not with "@@"          'instance-variable-identifier'
    s starts with "@@"                           'class-variable-identifier'     β
    s starts with "$"                            'global-variable-identifier'
    s starts with a lowercase letter or with _   'local-variable-identifier'
    s starts with a letter or with _             'method-identifier'             α

(A0~2) Identifiers are subject to additional restrictions which are not specified in this document.

The last column in the definition table indicates two partitions of strings induced by .idcat. We will also use the word “name” for non-method identifiers and apply the categorization directly to symbols, so that e.g. for a symbol s, s.estr.idcat == 'constant-identifier' means “s is a constant name”.

A1: Methods

An A1 structure is an A0 structure equipped with (Π, π, .met(), μ) where
• Π is a set of anonymous methods, disjoint from the set O of objects,
• π is a distinguished element of Π, called the/a whiteout method, or simply whiteout (the term adopted from []),
• .met() is a partial function from O × Υ to Π, called the own method map,
• μ is a symbol such that μ.estr == "method_missing".
Thus, .met() can be viewed as a subset of O × Υ × Π. If x.met(s) is defined then we say that x has own method s or that x is a method-owner of s. If, in addition,
• x.met(s) is a whiteout, then we say that x has own wo-method s or that x is a wo-method-owner of s,
• x.met(s) is not a whiteout, then we say that x has own nwo-method s or that x is an nwo-method-owner of s.
An A1 structure has the following axioms:
(A1~1) Only actual objects can have own methods.
(A1~2) Only includers can have own methods.
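The whiteout concept corresponds to undef_method in current MRI: the undefining class gains an own method entry that masks the inherited one, while the original owner keeps its method:

```ruby
class Base; def m; :m end end
class Sub < Base; end
p Sub.new.respond_to?(:m)   # true – m is inherited from Base

class Sub; undef_method :m end  # Sub now owns a "whiteout" for m
p Sub.new.respond_to?(:m)   # false – the inherited m is masked
p Base.new.respond_to?(:m)  # true  – Base's own method is intact
```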
In the initial state, an A1 structure satisfies the following:
(A1~3) The inheritance root r is an nwo-method owner of μ (the method_missing method).
1. No restriction applies to a symbol s to become a method name. In particular, if x.met(s) is defined then it is not required that s.estr.idcat == 'method-identifier', see the notes to Transitions.
2. As of MRI 1.9, the following condition is also satisfied:
(A1~4*) Eigenclasses x.ec of Numerics x cannot have own methods.

Method owner

We define a partial function .mowner() from O × Υ × {true, false} to O, called the method owner map, by
x.mowner(s, wo) == y iff
• (1) x is an includer,
• (2) y equals the least-indexed member of x.ancs that is a method owner of s,
• (3) if wo is false then y is also an nwo-method owner of s.
We allow false to be the default value of wo, so that x.mowner(s) means x.mowner(s, false).
1. x.mowner(s) == x iff x is an nwo-method-owner of s.
2. x.mowner(s,wo).mowner(s,wo) == x.mowner(s,wo) whenever x.mowner(s,wo) is defined.

Method inheritor

We say that x is a method-inheritor of s if x.mowner(s, true) is defined, i.e. if some member of x.ancs has own method s. We say that x is an nwo-method-inheritor of s if x.mowner(s) is defined.
By an inherited method map we mean the partial function .met_h() from O × Υ × {true, false} to Π defined by
x.met_h(s,wo) == x.mowner(s,wo).met(s) whenever x.mowner(s,wo) is defined.
Again, the default value for wo is false.

Simplified method resolution

By a simplified method resolution map we mean a (partial) function smr() from O × Υ to O × Υ defined as follows:
1. (1) smr(x,s) == (x.ec.mowner(s), s) if x.ec.mowner(s) is defined, else
2. (2) smr(x,s) == (x.ec.mowner(μ), μ) if x.ec.mowner(μ) is defined, else
3. (3) smr(x,s) is undefined.
• The expression x.ec.mowner(s) expresses the Ruby “method call mantra” []: One to the right, then up. This is applied in both “search phases”, (1) and (2).
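A hedged sketch of x.ec.mowner(s) using current MRI reflection: walk x.singleton_class.ancestors ("one to the right, then up") and return the first member with an own method s. The helper name mowner is ours; this approximates, not reimplements, MRI's resolution (it ignores whiteouts and refinements):

```ruby
# Find the first ancestor of x's eigenclass that owns method s.
def mowner(x, s)
  x.singleton_class.ancestors.find do |m|
    m.instance_methods(false).include?(s) ||
      m.private_instance_methods(false).include?(s)
  end
end

p mowner([], :each)            # Array
p mowner([], :no_such_method)  # nil – phase (2) would fall back to method_missing
```

The result agrees with Method#owner where the latter is defined.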
• By convention, each eigenclass is an nwo-method inheritor of μ, so that (3) never occurs.

Interchangeability of .ec and .aclass

Proposition: For every object x and every symbol s,
• x.ec.mowner(s) == x.aclass.mowner(s) (either both sides are undefined, or both are defined and equal).
This means that method resolution for a receiver x can actually start from the actualclass of x.

(A1-T~1) Transitions S → S' preserve .met() on frozen objects, i.e. if x is a frozen includer, then x.met(s) equals x.met'(s).
This means that the own method map .met() can be arbitrarily (re)defined on non-frozen objects. Modifications of .met() are accomplished via transitions S → S' of types (A)–(E) according to the following table.

(A) New method definition
    (1) Parameters: x, the requested owner; s, the requested method name; m, the requested method (body).
        Requested output condition: x.met'(s) == m.
        Ruby's correspondents: (a) def s; m end, (b) define_method(s,m).
    (2) Parameters: x, the object with x.ec the requested owner; s, m as in (1).
        Requested output condition: x.ec.met'(s) == m.
        Ruby's correspondents: (a) def x.s; m end, (b) x.define_singleton_method(s,m).
(B) Aliasing an inherited method
    Parameters: x, the requested owner; a, the requested alias name; s, the name of the method that is requested to be aliased.
    Requested output condition: x.met'(a) == x.met_h(s).
    Ruby's correspondents: (a) alias a s, (b) alias_method(a,s).
(C) "Whiteouting" an inherited method
    Parameters: x, the requested owner; s, the name of the method that is requested to be whiteouted.
    Requested output condition: x.met'(s) == π.
    Ruby's correspondents: (a) undef s, (b) undef_method(s).
(D) Removing an own non-whiteout method
    Parameters: x, the requested owner; s, the name of the method that is requested to be removed.
    Requested output condition: x.met'(s) is undefined.
    Ruby's correspondents: (a) —, (b) remove_method(s).
(E) Changing method visibility – see Method visibility transitions.

1. If x is not specified in the rightmost column, then it is assumed that x equals self, unless self == main – in this case x == Object.
2.
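Transition (A)(2) — a method definition with x.ec as the requested owner — can be observed through both Ruby correspondents listed in the table. The class name K below is illustrative only.

```ruby
class K; end
x = K.new

def x.a; :a end                        # correspondent (a): def x.s; m end
x.define_singleton_method(:b) { :b }   # correspondent (b)

p x.a                        # :a
p x.b                        # :b
p x.singleton_methods.sort   # [:a, :b]  -- both now owned by x.ec
```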
As of Ruby 1.9, direct transitions for removing a whiteout method are not supported.
3. In (a) cases, the restriction s.estr.idcat == 'method-identifier' applies to the method name s (or to the alias a). In (b) cases, no such restriction applies, as shown in the following example.

  class X
    define_method("") { self }
    define_method("1 + 1") { 3 }
    alias_method "@a", ""
  end
  p X.new.send("").send("@a").send("1 + 1")   #-> 3
  p X.instance_methods(false)                 #-> [:"", :"1 + 1", :@a]
  class X
    undef_method "1 + 1"
    remove_method "", "@a"
  end
  p X.instance_methods(false)                 #-> []

A2: Properties

An A2 structure is an A1 structure equipped with (.pty(), κ) where
• .pty() is a partial function from O × Υ to O, called the property map,
• κ is a symbol such that κ.estr == "const_missing".
Thus, .pty() can be viewed as a subset of O × Υ × O. We call elements of the set Υ × O properties. We say that an object x has own property (s,y) if x.pty(s) == y. We might just say that x has own property s. We categorize properties according to .idcat:

  Symbol s                                            Property (s,y) category
  s.estr starts with an uppercase letter              constant
  s.estr starts with "@" but not with "@@"            instance variable
  s.estr starts with "@@"                             class variable
  s.estr starts with "$"                              global variable
  s.estr starts with a lowercase letter or with _     local variable

• Gray color in the last row indicates that local variables are not properties in the above sense. They cannot be explained without introducing additional concepts, in particular blocks, so that they appear to be beyond the scope of a document titled "Ruby object model".

An A2 structure has the following axioms:
(A2~1) Only actual objects can have own properties.
(A2~2) Only includers can have own constants.
(A2~3) Only includers can have own class variables.
(A2~4) Only the Kernel module can have global variables.
In the initial state, an A2 structure satisfies the following:
(A2~5) The metamodule root m is an nwo-method owner of κ (the const_missing method).
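Axiom (A2~5) guarantees a const_missing owner in the initial state. In Ruby terms: when constant resolution fails on an includer, const_missing is called with the constant's symbol as argument. The class name Cfg is illustrative only.

```ruby
class Cfg
  # Intercept failed constant resolution on this includer.
  def self.const_missing(s); [:missing, s] end
end

p Cfg::Nope   # [:missing, :Nope]
```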
Class variables

Roughly speaking, class variables get synchronized across includers that are comparable either in HM descendancy or in primorder (i.e. that belong to the same eigenclass chain). More specifically, some "weak form" of the following condition applies: for all includers x, y and every class variable symbol s such that both x.pty(s) and y.pty(s) are defined, each of the following conditions is in its own right sufficient to imply x.pty(s) == y.pty(s):
1. x ≤ y or y ≤ x.
2. x.pr == y.pr.
What exactly this "weak form" looks like is not specified in this document. Neither are the transition rules or resolution rules specified, which seem to be even more complicated.

Constant owner

Analogously to the method owner map, we introduce the constant owner map as a partial function .cowner() from O × Υ to O defined by x.cowner(s) == y iff
(1) x is an includer,
(2) y equals the least-indexed member of x.ancs that has own constant s.
1. x.cowner(s) == x iff x is a constant-owner of s.
2. x.cowner(s).cowner(s) == x.cowner(s) whenever x.cowner(s) is defined.

Constant inheritor

We say that x is a constant-inheritor of s if x.cowner(s) is defined, i.e. if some member of x.ancs has own constant s. By an inherited constant map we mean the partial function .cst_h() from O × Υ to O defined by
• x.cst_h(s) == x.cowner(s).pty(s) whenever x is a constant-inheritor of s.

Property owner and inheritor

For the sake of uniformity, we also (partially) define .powner(), the property owner map, and .pty_h(), the inherited property map, as extensions of .cowner() and .cst_h(), respectively.
• .powner() is a partial map from O × Υ to O defined by
  (1) x.powner(s) == x.cowner(s) if s is a constant name and x.cowner(s) is defined,
  (2) x.powner(s) == x if s is an instance variable name and x.pty(s) is defined,
  (3) x.powner(s) == ("value not specified") if s is a class variable name and ("condition not specified") holds,
  (4) x.powner(s) == Kernel if s is a global variable name and Kernel.pty(s) is defined,
  (5) x.powner(s) is undefined otherwise.
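Condition 1 of the class-variable synchronization above (comparability in descendancy) can be observed directly in Ruby: a class variable defined in a class is shared with its descendants, and setting it through a descendant updates the common value. The class names P and Q are illustrative only.

```ruby
class P
  @@cv = :from_P
  def self.cv; @@cv end
end
class Q < P; end

# Transition via the descendant: class_variable_set finds @@cv in the
# ancestor P and updates the shared value there.
Q.class_variable_set(:@@cv, :from_Q)

p P.cv   # :from_Q  -- x <= y implies x.pty(s) == y.pty(s)
```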
• .pty_h() is a partial map from O × Υ to O defined by
  □ x.pty_h(s) == x.powner(s).pty(s) whenever x.powner(s) is defined.
1. .pty_h() is only related to inheritance in cases (1) and (3).
2. We assume that .pty_h() "factors through" .powner() even for class variables.

Qualified constant resolution

By a qualified constant resolution map we mean a partial function qcr() from O × Υ to O × Υ defined as follows:
(0) qcr(x,s) is undefined if x is not an includer or s is not a constant name, else
(1) qcr(x,s) == (x.cowner(s), s) if x.cowner(s) is defined, else
(2) qcr(x,s) == (x.ec.mowner(κ), κ) if x.ec.mowner(κ) is defined, else
(3) qcr(x,s) is undefined.
If qcr(x,s) == (y,t) then we say that (y,t) is a qualified constant resolution of (x,s).
• If qcr(x,s) == (y,t) is defined in phase (1), then y.pty(t) equals the evaluation of x::<s.estr> (for example, if s.estr == "A", then y.pty(t) equals x::A).
• If qcr(x,s) == (y,κ) is defined in phase (2), then the method y.met(κ) gets called with x as the receiver and s as the argument.
• By convention, each includer is an nwo-method inheritor of κ, so that phase (3) never occurs.
• Constants owned by the Object class are considered "toplevel constants", not designed for referencing by descendants. If qcr(x,s) == (Object,s) and x != Object, then (in non-nil $VERBOSE mode) a warning is issued.
• For a symbol s, the expression ::<s.estr> is equivalent to x::<s.estr> with x equal to Object.
The qcr map can be expressed as a method of Module as follows:

  class String
    def constant_name?; !!match(/^[A-Z]\w*$/) end
  end
  class Module
    def cowner(s)
      ancs.find { |x| x.const_defined?(s, false) }
    end
    def qcr(s)
      !s.to_s.constant_name? ? nil :
        (o = cowner(s)) ? [o, s.to_sym] :
          respond_to?(t = :const_missing, true) ? [method(t).owner, t] : nil
    end
  end

Unqualified constant resolution

By an unqualified constant resolution map we mean a partial function uqcr() from Υ to O × Υ.
The start point x is obtained by
• x == nesting[0] if the nesting cursor nesting is nonempty,
• x == main.ec otherwise, i.e. if the current nesting is empty.
The lookup sequence q is defined by
• q == nesting + x.ancs + Object.ancs if x is a module,
• q == nesting + x.ancs otherwise, i.e. if the start point is a class or an eigenclass.
The uqcr map is then defined as follows:
(0) uqcr(s) is undefined if s is not a constant name, else
(1) uqcr(s) == (w, s) if w is the least-indexed member of q such that w.pty(s) is defined, else, if no such w exists,
(2) uqcr(s) == (x.ec.mowner(κ), κ) if x.ec.mowner(κ) is defined, else
(3) uqcr(s) is undefined.
If uqcr(s) == (y,t) then we say that (y,t) is an unqualified constant resolution of s.
• uqcr(s) corresponds to the evaluation of <s.estr> – i.e. there is no double colon (::) before <s.estr>.
• We can assume the uniq operation (removing duplicate members) to be applied to the lookup sequence q.
• If nesting == [r] then constants owned by the Object class are not visible via the unqualified constant resolution – they must be referenced using the :: prefix. Example (using another class that is not a descendant of Object):

  class A < BasicObject
    def self.const_missing(s); :missing end
    p [Module, ::Module]   # [:missing, Module]
  end

The uqcr map can be expressed as a method of Array as follows:

  $main = self
  class Array
    def uqcr(s)
      return nil if !s.to_s.constant_name?
      x = first || $main.ec
      q = self + x.ancs + (::Class === x ? [] : ::Object.ancs)
      o = q.find { |y| y.const_defined?(s, false) }
      o ? [o, s.to_sym] :
        x.respond_to?(t = :const_missing, true) ? [x.method(t).owner, t] : nil
    end
  end

Immediate values revisited

Even immediate values can have "state"; in particular, they can
• be frozen/non-frozen (by contrast, they cannot be tainted or untrusted),
• have instance variables.
"Statefulness" of Fixnums is demonstrated in the following code.
  class Fixnum
    def false(i = 1)
      @n_falsified ||= 0
      @n_falsified += i
      self
    end
    def report
      "#{self} falsified #{@n_falsified || 0} times"
    end
  end
  puts 5.report         # 5 falsified 0 times
  puts 6.report         # 6 falsified 0 times
  puts 5.false.report   # 5 falsified 1 times
  puts 5.false.report   # 5 falsified 2 times
  puts 6.false.report   # 6 falsified 1 times

(A2-T~1) Transitions S → S' preserve .pty() on frozen objects, except for global variables, i.e. if x is a frozen object and s is a symbol that is not a global variable name, then x.pty(s) equals x.pty'(s).
This means that the own property map .pty() can be arbitrarily (re)defined on non-frozen objects. Modifications of .pty() are accomplished via transitions S → S' according to the following table:

(A) Property assignment
    Parameters: x, the requested owner; s, the requested property name; y, the requested property value.
    Requested output condition: x.pty'(s) == y.
    Ruby's correspondents:
      1. <s> = y,
      2. x.instance_variable_set(s,y),
      3. x.class_variable_set(s,y),
      4. x.const_set(s,y).
(B) Property removal
    Parameters: x, the requested owner; s, the name of the property that is requested to be removed.
    Requested output condition: x.pty'(s) is undefined.
    Ruby's correspondents:
      1. x.remove_instance_variable(s),
      2. x.remove_class_variable(s),
      3. remove_const(s).

1. If x is not specified in the rightmost column, then
   1. x equals the artificial owner Kernel if s is a global variable name, else
   2. x equals Object if self == main, else
   3. x equals self.
2. Gray color indicates that for class variables, the owner is not exactly specified in this document.

Containment revisited

Includer containment is not a first-class structure in Ruby. The .cparent function is not directly supported and, in general, is not even obtainable. The built-in Module method x.name provides a "global containment path" of x in one of the following forms ^∗:
1. A::B::C, if x.croot equals Object,
2. #<Class:0xaafc40>::A::B::C, otherwise (i.e. if x.croot is an eigenclass).
Getting x.cparent means resolving the path without the last segment. Such a resolution might not yield consistent results, as discussed in the following subsections.

Encoding compatibility

We say that containment naming is encoding compatible if
(CNC~0) the pathname estrings x.cancs.map{ |a| a.cname.estr } are (loosely) concatenable for every includer x.
Unfortunately, Ruby 1.9.2 allows incompatibly encoded names of containment ancestors:

  X = Object.const_set("X\u00e1".encode('utf-8'), Class.new)
  class X
    Y = const_set("Y\u00e1".encode('iso-8859-2'), Class.new)
  end
  p X::Y.name   # raises Encoding::CompatibilityError

Sibling consistency

We say that containment naming is sibling consistent if .cname is injective on each sibling set, i.e.
(CNC~1) for all includers x, y with a containment parent, x.cparent == y.cparent and x.cname == y.cname implies x == y.
Unfortunately, as of Ruby 1.9, sibling consistency is not satisfied in general, as demonstrated below.

  class X
    class Y; end
    Z = Y
    class_eval { remove_const :Y }
    class Y; end
  end
  x = X
  y = X::Y
  z = X::Z
  puts "#{y.object_id}-->#{y}"
  puts "#{z.object_id}-->#{z}"

The code produces three different classes x, y and z such that
1. y.cparent == z.cparent == x, and
2. y.cname == z.cname == "Y".
This is due to Ruby allowing rather unrestricted constant removal. The following code shows a possible modification of Module's remove_const method which would prevent, at least to some extent, sibling inconsistencies.

  class Module
    alias toS to_s
    alias __remove_const remove_const; private :__remove_const
    def remove_const(sym)
      c = const_defined?(sym, false) ? const_get(sym, false) : nil
      if c.kind_of?(Module) && c.toS.match(Regexp.new("((\\:\\:)|(^))#{sym}$"))
        raise NameError.new("Cannot remove module/class :#{sym} from #{toS}", sym)
      end
      __remove_const(sym)
    end
  end

Property consistency

We say that containment naming is (own) property consistent if the following is satisfied:
(CNC~2) x.cparent.pty(x.cname) == x for every includer x with a containment parent.
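For a normally defined inner class, (CNC~2) can be checked directly in Ruby: the containment parent's constant of the class's containment name refers back to the class itself. The class names Outer and Inner are illustrative only.

```ruby
class Outer
  class Inner; end
end

# x.cparent.pty(x.cname) == x, in reflective terms:
p Outer.const_get(:Inner).equal?(Outer::Inner)   # true
p Outer::Inner.name                              # "Outer::Inner"
```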
It is straightforward to see that property consistency implies sibling consistency. There are (at least) two ways to break property consistency:
(a) removing a constant (see Sibling consistency),
(b) reassigning a constant.
While (a) can be prevented by the remove_const modification, (b) seems to be preventable only by convention. (When in $VERBOSE mode, Ruby issues a warning about an already initialized constant.)

Resolution consistency

We say that containment naming is resolution consistent if the following is satisfied:
(CNC~3) x ≤ y and y == b.cparent implies x.cst_h(b.cname) ≤ b.
This condition is important if an inner-class coding pattern is applied, as outlined by the following example:

  class X
    def a; @a end
    def initialize; @a = self.class::A.new end
    class A
      def report; self.to_s end
    end
  end
  class Y < X
    def initialize; super end
    class A < A; end
  end
  puts X.new.a.report   #--> #<X::A:0xab80d0>
  puts Y.new.a.report   #--> #<Y::A:0xab8028>

A3: Method visibility

An A3 structure is an A2 structure equipped with (.mvisibility()) where
• .mvisibility() is a partial function from O × Υ to the set {:private, :protected, :public}.
The following axiom is required to hold:
(A3~1*) x.mvisibility(s) is defined iff x.met(s) is defined and is not a whiteout.
For x.mvisibility(s) == v we say that the own visibility of s in x is v. We define the inherited visibility .mvisibility_h() by
• x.mvisibility_h(s) == x.mowner(s).mvisibility(s) whenever x.mowner(s) is defined.

Unqualified method resolution

By an unqualified method resolution map we mean a (partial) function uqmr() from Υ to O × Υ defined as the partial application smr(self, _), i.e.
• uqmr(s) == smr(self, s) whenever the right side is defined.
If uqmr(s) == (y,t) then we say that (y,t) is an unqualified method resolution of s. The uqmr map can be expressed as a method of Object as follows:

  class Object
    def uqmr(s)
      t = :method_missing
      respond_to?(s, true) ? [method(s).owner, s] :
        respond_to?(t, true) ? [method(t).owner, t] : nil
    end
  end

Qualified method resolution

By a qualified method resolution map we mean a (partial) function qmr() from O × Υ to O × Υ defined as follows.
(1) qmr(x,s) == (x.ec.mowner(s), s) if
    (a) x.ec.mvisibility_h(s) is :public, or
    (b) x.ec.mvisibility_h(s) is :protected and self.ec ≤ x.ec.mowner(s), else
(2) qmr(x,s) == (x.ec.mowner(μ), μ) if x.ec.mowner(μ) is defined, else
(3) qmr(x,s) is undefined.
If qmr(x,s) == (y,t) then we say that (y,t) is a qualified method resolution of (x,s).
• (2) is applied iff
  □ x.ec.mvisibility_h(s) is :private, or
  □ x.ec.mvisibility_h(s) is undefined, i.e. x.ec.mowner(s) is undefined.
• x.ec.mvisibility_h(μ) is irrelevant.
The qmr map can be expressed as a method of Object as follows:

  class Object
    def qmr(x, s)
      if x.respond_to?(s, false)   # public or protected
        o = x.method(s).owner
        if o.protected_instance_methods(false).include?(s) && !kind_of?(o)
          o = nil
        end
        if o then return [o, s] end
      end
      t = :method_missing
      x.respond_to?(t, true) ? [x.method(t).owner, t] : nil
    end
  end

In contrast to the .met() and .pty() partial maps, .mvisibility() is not guaranteed to be preserved on frozen objects:

  class X; def m; end end
  p X.private_instance_methods(false)   # []
  class X; private :m end
  p X.private_instance_methods(false)   # [:m]

Modifications of .mvisibility() are accomplished according to the following table.

Method visibility setting
(1) Parameters: x, the requested owner; s, the requested method name; v, the requested method visibility.
    Requested output condition: x.mvisibility'(s) == v.
    Ruby correspondents (visibility specifiers): private(s), protected(s), public(s).
(2) Parameters: x, the object with x.ec the requested owner; s, v as in (1).
    Requested output condition: x.ec.mvisibility'(s) == v.
    Ruby correspondents: x.private_class_method(s), x.public_class_method(s).
1. If x is not specified in the rightmost column, then x equals self, unless self == main – in this case x == Object.
2. v is implied by the method (visibility specifier) used.
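Visibility setting (transition type (1)) and its effect on qualified method resolution can be observed directly: once a method is made private, phase (1) of qmr fails and a plain call raises NoMethodError, while send bypasses the visibility check. The class name C is illustrative only.

```ruby
class C
  def m; :ok end
end
p C.new.m               # :ok  -- :public, resolved in phase (1)

class C; private :m end  # transition: x.mvisibility'(:m) == :private

err = begin
  C.new.m                # phase (1) fails; no method_missing handler
  nil
rescue NoMethodError => e
  e
end
p err.class              # NoMethodError
p C.new.send(:m)         # :ok  -- send ignores visibility
```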
Disowned visibility

The asterisk in (A3~1*) indicates that the condition is not guaranteed in MRI 1.9. It can be broken using the private_class_method or public_class_method visibility specifiers:

  class X; end
  class << X
    private
    def m; end
  end
  ec = (x = X.new).singleton_class
  p ec.respond_to?(:m)   # false
  ec.public_class_method(:m)
  p ec.respond_to?(:m)   # true
  p ec.method(:m).owner  # X.ec (#<Class:X>)
  p ec.singleton_methods(false)                          # [:m]
  p ec.singleton_class.public_instance_methods(false)    # []
  p ec.singleton_methods(false)                          # []

The visibility of :m in x.ec.ec has been changed from private to public, but the owner of :m remains X.ec (according to ec.method(:m).owner). In addition, the last three lines show an inconsistency in :m's ownership.

A4: Arrows

An A4 structure is an A3 structure equipped with (Α, ϙ(), ϻ(), ι(), ϰ(), α(), β(), αɦ(), βɦ()) where
• Α is a set of arrow data-representatives, or just arrows, disjoint from the hitherto introduced sets,
• ϙ(), …, βɦ() are injective maps to arrows with mutually disjoint ranges, denoted Α[ϙ], …, Α[βɦ]:
  □ ϙ() is a partial map from O × ℬ to Α; ϙ(x,s) is defined according to the table in Internal property arrows.
  □ ϻ() is a partial map from O × ℤ to Α; ϻ(x,i) is defined iff x.incs[i] is defined.
  □ ι() is a partial map from Arrays × ℤ to Α; ι(x,i) is defined iff x._list[i] is defined.
  □ ϰ() is a partial map from Hashes × ℤ to Α; ϰ(x,i) is defined iff x.keys[i] is defined.
  □ α() is a partial map from O × Υ to Α; α(x,s) is defined iff x.met(s) is defined.
  □ β() is a partial map from O × Υ to Α; β(x,s) is defined iff x.pty(s) is defined.
  □ αɦ() is a partial map from O × Υ to Α; αɦ(x,s) is defined iff x is actual and x.ec.mowner(s) is defined.
  □ βɦ() is a partial map from O × Υ to Α; βɦ(x,s) is defined iff x is actual and x.powner(s) is defined.
The following table provides a summary of the arrow nomenclature together with the applicability of arrow constituents.
  Symbol   Terminology for the set Α[s]   Group             Applicable constituents
  ϙ        internal-property-arrows       internal arrows   source, name, target
  ϻ        inclusion-list-arrows          internal arrows   source, idx, target
  ι        array-arrows                   (external)        source, idx, target
  ϰ        hash-arrows                    (external)        source, idx, hcode, key, target
  α        own-method-arrows              own-arrows        source, name, mvisibility, target
  β        own-property-arrows            own-arrows        source, name, target
  αɦ       preview-method-arrows          preview-arrows    source, name, owner, mvisibility, target
  βɦ       preview-property-arrows        preview-arrows    source, name, owner, target

Internal arrows

We will not use ω-objects for our arrow formalization. Instead, we use the set O of objects directly, so that the ω-objects table can be considered split again into (A) objects and (B) a table of module inclusion lists. This yields
• (A) internal property arrows,
• (B) inclusion-list arrows,
together considered as internal arrows.

Internal property arrows

We consider internal property arrows to be abstractions of triples (source, name, target) according to the following table.

  Terminology            Source domain   Name           Target domain   Description
  self-arrows            objects         self (*)       objects         identity arrow
  type-system-arrows     objects         terminative?   boolean         basic type
                         objects         primary?       boolean
                         non-terminals   sc             non-terminals   superclass
                         eigenclasses    ce             objects         eigenclass predecessor
                         objects         ec             eigenclasses    eigenclass successor
                         includers       cparent        includers       containment parent
                         includers       cname          symbols (Υ)     containment name
  common-value-arrows    objects         frozen?        boolean         frozenness
                         objects         tainted?       boolean         taintedness
                         objects         trusted?       boolean         trust
  special-value-arrows   non-includers   value, bytes,  some value      Object value fields, see
                                         dflt, …                        Object internal value data.

Induced maps

• ϙ-arrows and ϻ-arrows: Object internal properties and module inclusion lists are represented by the following functions on ϙ-arrows and ϻ-arrows:
  □ ϙ(x,s).source ≝ x, ϙ(x,s).name ≝ s, ϙ(x,s).target ≝ x.s
  □ ϻ(x,i).source ≝ x, ϻ(x,i).idx ≝ i, ϻ(x,i).target ≝ x.incs[i]
• ι-arrows and ϰ-arrows: Array and hash members are represented by the following functions on ι-arrows and ϰ-arrows:
  □ ι(x,i).source ≝ x, ι(x,i).idx ≝ i, ι(x,i).target ≝ x._list[i]
  □ ϰ(x,i).source ≝ x, ϰ(x,i).idx ≝ i, ϰ(x,i).hcode ≝ x.hcodes[i], ϰ(x,i).key ≝ x.keys[i], ϰ(x,i).target ≝ x.values[i]
• α-arrows and β-arrows: Method and property maps are represented by the following functions on α-arrows and β-arrows:
  □ α(x,s).source ≝ x, α(x,s).name ≝ s, α(x,s).mvisibility ≝ x.mvisibility(s), α(x,s).target ≝ x.met(s)
  □ β(x,s).source ≝ x, β(x,s).name ≝ s, β(x,s).target ≝ x.pty(s)

Own data fields

Maps introduced in the previous subsection correspond to the following data fields:

  Applies to   Field            Domain    Description                  Relevant field(s) in MRI
  ϙ-arrows     source           OID       the object
               name             ℬ         internal property name
               target           "mixed"   internal property value
  ϻ-arrows     source           OID       the includer
               idx              ℤ         member index
               target           OID       the ith includee
  ι-arrows     source (owner)   OID       the Array instance (owner)
               idx              ℤ         member index
               target (value)   OID       the target object (value)
  ϰ-arrows     source (owner)   OID       the Hash instance (owner)
               idx              ℤ         member index
               hcode            ℤ         hash code
               key              OID       key
               target (value)   OID       the target object (value)
  α-arrows     source (owner)   OID       the source object (owner)
               name             string    method name
               mvisibility      ENUM_V    method visibility            part of flag (in rb_method_entry_t)
               target (value)   Π         the target method (value)    def (in rb_method_entry_t)
  β-arrows     source (owner)   OID       the source object (owner)
               name             string    property name
               target (value)   OID       the target object (value)

Preview arrows

The following maps are induced on preview arrows
(like with own arrows):
• αɦ(x,s).owner ≝ x.ec.mowner(s), αɦ(x,s).target ≝ x.ec.met_h(s), αɦ(x,s).mvisibility ≝ x.ec.mvisibility_h(s)
• βɦ(x,s).owner ≝ x.powner(s), βɦ(x,s).target ≝ x.pty_h(s)
This gives rise to the αɦ-arrows and βɦ-arrows data tables, which are a sort of built-in database view. Gray color indicates that, except for the owner column, α-arrows (resp. β-arrows) and αɦ-arrows (resp. βɦ-arrows) have the same field set:

  αɦ-arrows / βɦ-arrows:  source (alias receiver), name, owner, mvisibility (αɦ only), target

Object data stratification

Grouping the arrows a emanating from a given object x (i.e. such that a.source == x), we obtain a picture of object data stratification.

  Arrow set               Subset           Description
  ϙ-arrows                identity arrow   Object identity
                          type-system      Internal properties
                          common values
                          special values
  ϻ-arrows                                 Inclusion list
  ι-arrows and ϰ-arrows                    Array/hash members
  α-arrows                                 Own methods
  β-arrows                                 Own properties
  αɦ-arrows                                Respondent methods
  βɦ-arrows                                Inherited properties

A5: Object copy

An A5 structure is an A4 structure equipped with (.dyn_class?) where
• .dyn_class? is a boolean attribute of classes indicating whether the class is dynamic.
The following is required:
(A5~1) All subclasses of a dynamic class are dynamic.
(A5~2) Of the already introduced classes, only Proc is dynamic.
Using the .dyn_class? attribute, we define the following attributes of objects:
• .clontype is an attribute of objects with possible values 'clone' or 'none'.
• .dumptype is an attribute of objects with possible values 'clone', 'ref' or 'none'.
The attributes are defined according to the following table.

  Objects x                                                           x.clontype   x.dumptype
  classes and modules such that x.croot != Object
    (in particular, anonymous classes and modules)                    'clone'      'none'
  immediate values (false, true, nil, Fixnums, Symbols)               'none'       'ref'
  Encodings                                                           'none'       'ref'
  the inheritance root r (i.e. the BasicObject class)                 'none'       'ref'
  Numerics except Fixnums                                             'none'       'clone'
  includers such that x.croot == Object, except r
    (i.e. "full-named" classes and modules except BasicObject)        'clone'      'ref'
  instances of dynamic classes (Procs, Methods, UnboundMethods, …)    'clone'      'none'
  other objects (objects that are not immediate values and not
    instances of Module, Class, Encoding, Numeric or of a
    dynamic class)                                                    'clone'      'clone'

Object cloning / duplication

The clone and dup methods of Kernel create siblings in the primary inheritance. Given a primary object x (a class or, more typically, a terminal), y = x.clone (or y = x.dup) creates a primary object y such that .terminative? and .ec_sc_pr coincide on x and y. The following table shows a correspondence between clone/dup and new.

  Application       The new method    The clone method   The dup method
  x is a class      Class.new(x.sc)   x.clone            x.dup
  x is a terminal   x.class.new       x.clone            x.dup
  Hooks             initialize        initialize_clone   initialize_dup

Procedure without hooks:
1. Create an eigenclass chain represented by 1 actual object.
2. Establish .sc-links (via .class-link if x is terminal).
3. Make a shallow copy: copy the arrow set x.arrows specified below. In the case of clone, this causes "arrowed" eigenclasses to become actual.

The copied arrow set x.arrows is defined as follows:
• x.arrows is undefined if x.clontype == 'none', else
• x.arrows is the set of all own arrows a such that
  □ a.source.pr == x (in the case of clone), resp. a.source == x (in the case of dup),
  except self-arrows and except the following:
  □ .ec / .ce arrows,
  □ .cparent / .cname arrows (i.e. clones of includers are anonymous),
  □ .frozen? arrows (in the case of dup).
1. Cloning / duplicating objects x with x.clontype == 'none' is not supported.
2. The inheritance root cannot have siblings, thus r.clontype == 'none'.
3. There seems to be a difference between clone and dup support for instances of Method and UnboundMethod. This difference is not expressed by the above description.
4. Hash cloning/duplication is based on sequential insertion of hash members. This might result in a modified set of ϰ-arrows.
5.
(∗) In addition, for each a from x.arrows that is an own method arrow (an α-arrow), the method a.target is copied too. This allows for a proper copy of method de-aliasing. Describing this would require introducing arrows for .orig_owner and .orig_name.

Object dumping

We describe the Marshal.dump logic by specifying sets x.objects and x.arrows of stored objects and arrows, respectively, for each "dumpable" object x. We proceed inductively by defining sets x.objects(0), …, x.objects(n) and x.arrows(0), …, x.arrows(n), so that x.objects(n) (resp. x.arrows(n)), if defined, is some subset of the objects (resp. arrows) reachable from x by at most n arrows. The sets x.objects and x.arrows are then either undefined (in the case of a "non-dumpable" object x) or they are defined by
• x.objects == x.objects(n) and x.arrows == x.arrows(n), where n is such that x.objects(n) == x.objects(n+1).

• x.objects(0) is undefined if x.dumptype == 'none', else
• x.objects(0) is the empty set if x.dumptype == 'ref', else
• x.objects(0) is undefined if x.ec has an own method or property (i.e. there is an α- or β-arrow a such that a.source == x.ec), else
• x.objects(0) is the single-element set {x}.

• x.arrows(0) is undefined if x.objects(0) is undefined, else
• x.arrows(0) is the single-element set {ϙ(x,'self')} if x.objects(0) is the empty set, else
• x.arrows(0) is the set of all arrows a such that either (A) or (B) is satisfied:
  □ (A) a.source == x and a is of one of the following types:
    ☆ a special-value ϙ-arrow,
    ☆ an ι-arrow (array member),
    ☆ a ϰ-arrow (hash member),
    ☆ a β-arrow (own property, necessarily an instance variable, because x is a pure instance (non-includer)),
  □ (B) a.source.ce == x and a is of one of the following types:
    ☆ a ϙ-arrow named sc (so that a corresponds to a .class-link – recall that x.ec.sc equals x.class for terminal x),
    ☆ a ϻ-arrow (so that a corresponds to an .ec.incs[i]-link between x and a module included in the eigenclass x.ec).

Let n > 0.
• x.objects(n) is undefined if
  □ x.objects(n-1) is undefined, else if
  □ y.objects(0) is undefined for some object y that is the target of an arrow from x.arrows(n-1), else
• x.objects(n) is the set of all objects y such that
  □ y ∈ x.objects(n-1), or
  □ y.dumptype == 'clone' and y is the target of an arrow from x.arrows(n-1).
• x.arrows(n) is undefined if x.objects(n) is undefined, else
• x.arrows(n) is the set of all arrows a such that
  □ a ∈ x.arrows(n-1), or
  □ a ∈ y.arrows(0) for some y ∈ x.objects(n-1).
1. Some arrow paths starting with an .ec-arrow are considered as a single "short-cut" arrow (case (B)).
2. If x.dumptype == 'ref' then x.objects is defined as the empty set, but x.arrows is defined as a singleton set containing just the identity arrow of x. This means that just a "reference" to x is dumped.
3. The set x.objects can only contain pure instances (non-includers). Includers are dumped "by reference".
4. Dumping of x is unsupported whenever there is an arrow path ending in an object y with y.objects(0) undefined, in particular, if y.dumptype == 'none'.

A6: Method de-aliasing

An A6 structure is an A5 structure equipped with (.orig_owner, .orig_name) where
• .orig_owner is a function from the set Π ∖ {π} to includers, assigning to each non-whiteout method its original owner,
• .orig_name is a function from the set Π ∖ {π} to Υ, assigning to each non-whiteout method its original name.
The following is required for every non-whiteout method α:
• x.met(s) == α implies x ≤ α.orig_owner.
• If (x,s) == (α.orig_owner, α.orig_name), then any of the following cases may occur:
  1. x.met(s) == α (the most typical case),
  2. x.met(s) is undefined or equals π,
  3. x.met(s) is an nwo-method different from α.
(A6-T~1*) The .orig_owner and .orig_name functions are preserved.
As of MRI 1.9.2, this condition is not 100% guaranteed, due to implementation bugs []. An example of "overwriting" the original method owner is shown in the following code.
def def_report class_eval { def report; super end } class A; def report; '-A-' end end class B; def report; '-B-' end end class X < A; def_report; end class Y < B; def_report; end p Y.new.report #-> -B- p X.new.report #-> NotImplementedError Parent method call (super) By the parent method resolution we mean a partial function pmr() from O × O × Υ to O × O × Υ. The assignment (x,o,s) ↦ (x,p,t) has the following semantics: • x is the receiver, • x is the preserved receiver, • o is the current method owner, • p is the parent method owner, • s is the current method name, • t is the parent method name. Given a triple (x,o,s), the triple (x,p,t) == pmr(x,o,s) is defined as follows: • (A) pmr(x,o,s) is undefined if either o.met(s) is undefined or a whiteout or o does not occur in x.ec.ancs. • (B) Denote α = o.met(s). □ (1) Let i be the smallest such that x.ec.ancs[i] == α.orig_owner. If no such i exists then apply (C). □ (2) Let j be the smallest such that i < j and x.ec.ancs[j] is a method owner of α.orig_name. If no such j exists then apply (C). □ (3) If x.ec.ancs[j].met(α.orig_name) is a whiteout then apply (C). □ (4) pmr(x,o,s) = (x, x.ec.ancs[j], α.orig_name). • (C) pmr(x,o,s) == (x, x.ec.mowner(μ), μ) if x.ec.mowner(μ) is defined, else • (D) pmr(x,o,s) is undefined. 1. (B4) shows that the parent method call involves de-aliasing. 2. Repetitive inclusion lists can induce cycles in the pmr map. • De-aliasing of C#report results in “skipping” of B#report. class O; def report; "O" end end class A < O; def report; "A" + super end end class B < A; end class C < B; alias report report end class B; def report; "B" end end class A; remove_method :report end puts C.new.report #--> AO • Multiple occurence of M in B's ancestor list causes a cycle of supers. module M def report(i=0); "-M(#{i})" + (i < 4 ? 
super(i+1) : " ...") end
end
class A; end
class B < A; include M end
class A; include M end
p B.ancestors   #--> [B, M, A, M, Object, Kernel, BasicObject]
p B.new.report  #--> "-M(0)-M(1)-M(2)-M(3)-M(4) ..."
• In this example, A inherits the report method from N. The original method owner of N#report is M. This module does NOT appear among A's ancestors until the re-inclusion of N into A.
module M; def report; super end end
module N; end
class O; def report; '-O-' end end
class A < O; include N end
module N; include M end
module N; alias report report end
A.new.report rescue puts $!.inspect  #--> super: no superclass method
p A.ancestors.take(4)  #--> [A, N, O, Object]
class A; include N end
p A.ancestors.take(5)  #--> [A, N, M, O, Object]
puts A.new.report      #--> -O-
Actuality revisited
As of MRI 1.9, Ruby allows hacks to create semiactual objects which do not satisfy “blankness” conditions imposed by our axioms. In particular, it is possible to have semiactual objects x such that
• x has own includees,
• x has own methods,
• x has own properties (constants or instance variables).
The .ec_ref hack
The code below defines a method ec_ref which allows one to reference an eigenclass x.ec of a class x without x.ec's actualization. The method makes a “guess” of x.ec's object identifier and uses ObjectSpace._id2ref, see The inverse to .object_id.
class Class
  EC_ID_DELTA = (x = Class.new).singleton_class.object_id - x.object_id
  def ec_ref
    ObjectSpace._id2ref(object_id + EC_ID_DELTA)
  end
end
Though being platform/environment dependent by nature, the code probably works in most cases.
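For comparison, the supported way to reference an eigenclass — at the cost of actualizing it — is the built-in singleton_class method. A minimal sketch (the class name Foo is illustrative):

```ruby
class Foo; end

# Object#singleton_class returns the eigenclass, actualizing it as a side effect.
ec = Foo.singleton_class
puts ec.inspect                       # => #<Class:Foo>

# The eigenclass is unique: repeated calls return the very same object.
puts Foo.singleton_class.equal?(ec)   # => true
```

Unlike the ec_ref hack above, this relies on no assumptions about object identifiers, but it cannot observe whether the eigenclass was actual beforehand.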
The _class_method hack
Having semiactual method-owners can also be accomplished via the private_class_method and public_class_method visibility specifiers:
class X; end; class Y < X; end
def X.m; end
nnt_delta_report # 0
ec = Y.singleton_class
nnt_delta_report # 1
p ec.private_instance_methods(false) # [:m]
Built-in Ruby reflection
Blank slate objects
Except for very few methods like !, ==, != or equal?, Ruby does not provide reflection for blank slate objects. Most methods that are stated to be provided for objects are actually provided for Object instances only. A test whether an object x is not a blank slate object is performed by Object === x (it evaluates to true if x is an Object, i.e. x is not a blank slate object).
The object-to-integer map .object_id
The built-in method object_id of Object provides an injective map from actual objects to integers represented by Fixnum or Bignum instances. Immediate values except Symbols have a fixed value-to-id mapping:
Object x          | Class of x | x.object_id | Class of x.object_id
false             | FalseClass | 0           | Fixnum
true              | TrueClass  | 2           | Fixnum
nil               | NilClass   | 4           | Fixnum
x                 | Fixnum     | 2*x + 1     | Fixnum if 2*x + 1 is in Fixnum's range, Bignum otherwise
All other objects |            | No fixed prescription | Fixnum
The inverse to .object_id
The built-in method ObjectSpace._id2ref(x) provides an inverse to y.object_id:
• ObjectSpace._id2ref(x) == y if y.object_id == x for some, necessarily unique, object y.
• If there is no such y, then an exception is raised.
The .toX map
The .toX function is an injective map from objects to hexadecimally encoded integers. It is of the form
• x.toX ≝ "%#08x" % [x.oid2]
where .oid2 is a variant of .object_id. The string "0x2fe00e" is an example of a .toX value. The conversion between .object_id and .oid2 is partially described in the following table.
x.oid2          | Applicability condition
x.object_id     | x is false, true, nil or a non-negative Fixnum
?               | x is a negative Fixnum
?               | x is a symbol
2 * x.object_id | x is NOT an immediate value
The object-to-string map .toS
The built-in methods to_s of Object and Module provide a map from objects to strings represented by String instances. If (CNC~1) is satisfied then the map is injective. In contrast to object_id, the to_s method is – by convention – subject to override. We therefore refer to an alias method .toS which can be established by
class Object; alias toS to_s end
class Module; alias toS to_s end
The .toS function is defined according to the following rules:
x.toS                                          | Applicability condition
"Object"                                       | x == Object
x.cpathname                                    | x.croot == Object and x != Object
"#<Class:%s>:%s" % [x.croot.toX, x.cpathname]  | x is a named class or module and x.croot an eigenclass
"#<%s:%s>" % [x.class.toS, x.toX]              | x is a non-includer (pure instance) or x is an anonymous module
"#<Class:%s>" % [x.toX]                        | x is an anonymous class
"#<Class:%s>" % [x.ce.toS]                     | x is an eigenclass
The table below contains representative examples of .toS. Each row contains a representative x of a partition of primary objects according to the following criteria:
1. Is x a named module?
2. The includer containment component type (A, B, or C) of x or of x.class if x is a non-includer.
3. Is x terminal?
We assume that X::Y and X.ec::A are classes and X::M and X.ec::N are modules.
x.ec(i).toS (string representation of the i-th eigenclass of a primary object x):
Containment type | x ↓, i →    | 0                                | 1
A                | X::Y        | X::Y                             | #<Class:X::Y>
A                | X::Y.new    | #<X::Y:0xab4590>                 | #<Class:#<X::Y:0xab4590>>
B                | Class.new   | #<Class:0xab43f8>                | #<Class:#<Class:0xab43f8>>
B                | ↳ .new      | #<#<Class:0xab43f8>:0xab4200>    | #<Class:#<#<Class:0xab43f8>:0xab4200>>
C                | X.ec::A     | #<Class:0xad3d00>::A             | #<Class:#<Class:0xad3d00>::A>
C                | X.ec::A.new | #<#<Class:0xad3d00>::A:0xaaf9a0> | #<Class:#<#<Class:0xad3d00>::A:0xaaf9a0>>
named modules, A | X::M        | X::M                             | #<Class:X::M>
named modules, C | X.ec::N     | #<Class:0xaae968>::N             | #<Class:#<Class:0xaae968>::N>
Observations: Denote y = x.ec(i).
1. Denote n the number of trailing '>' characters of y.toS.
Then for the eigenclass index i the following holds: □ i == n - 1 if y.toS contains the string ":0x" not followed by a string containing "::", □ i == n otherwise. 2. For i > 0, the substring of y.toS delimited by the last occurrence of a single ':' and the first occurrence of trailing '>' equals □ x.toS if x is a named class or module with x.croot equal to Object, □ x.croot.toX + ">::" + x.cpathname if x is a named class or module with x.croot an eigenclass, □ x.toX otherwise. 3. Whether y is terminative cannot be detected from y.toS alone (X.ec::A.toS and X.ec::N.toS have same format). These observations allow us, for arbitrary object y, to obtain the eigenclass index y.eci and the primary object y.pr from the string representation y.toS. This can be realized by the following code: class Object def eci (s = toS).length - s.index(/[>]*$/) - (s.match(/:0x(?!.*\:\:)/)? 1:0) def pr_chunk (m = toS.match(/[^:]\:(?!\:)(?!.*[^:]\:[^:])(.*[^>])[>]+$/)) ? m[1] : toS def pr if eci == 0 then return self end c = pr_chunk if c.include?(">") # the complicated case: pr.croot is an eigenclass r = (m = c.match(/^(0x.*)[>]::(.*)$/)) ? m[1] : raise("???") r = ObjectSpace._id2ref(eval(r)/2) return r.class_eval("self::" + m[2]) (p = ::Object.class_eval(c)).instance_of?(Fixnum) ? ObjectSpace._id2ref(p/2) : p Unfortunately, the .pr implementation requires (CNC~2). Containment reflection Relying on (CNC~2), we can implement includer containment reflection as follows. class Module def cname (m = toS.match(/(^|(::))([^:>]+)$/)) ? m[3] : nil def cpathname (m = toS.match(/(^Object|^|(::))([^<>]*)$/)) ? m[3] : "" def croot_toS ((s = toS).match(/^[^>]*$/)) ? "Object" : (m = s.match(/(.*?)::.*[^>]$/)) ? m[1] : toS def croot if eci > 0 then return self end n = croot_toS m = n.match(/\:(0x[^>]*)/) m ? 
ObjectSpace._id2ref(eval(m[1])/2) : eval("::" + n) def cancs a = [croot] cpathname.split("::").each { |x| a << a.last.const_get(x, false) } def cparent(i = 1); cancs[i] end Enumerating objects Ruby's facilities for enumerating the object set O are described in the following table. Subset of O Way of enumeration Immediate false, true, and nil “Literal” enumeration values Fixnums Not supported (?)^∗ Symbols Using Symbol.all_symbols Classes and terminals that are not immediate values Using ObjectSpace.each_object Actual eigenclasses Not supported (?) Reference to built-in Ruby reflection Relations, (partial) functions and constants defined in this document Ruby 1.9 built-in or semi-built-in correspondents Notation Description x.sc the superclass of a non-terminal x x.superclass x.singleton_class with the following limitations: the eigenclass of x x.ec (the successor of x in the eigenclass chain) 1. Not defined or not equal to x.ec if x is an immediate value. 2. With a possible side-effect of making an eigenclass actual. x.ec(i) i-th eigenclass successor of x i > 0 ? x.singleton_class.ec(i-1) : x x.ce the predecessor of x in the eigenclass chain Obtainable from x.toS, see x.ce(i). x.ce(i) i-th eigenclass predecessor of x (j = x.eci - i) > 0 ? x.pr.ec(j) : nil x.pr the primary object of x Obtainable from x.toS using ObjectSpace._id2ref. x.eci the eigenclass index of x Obtainable from x.toS. r the inheritance root BasicObject c the instance root, Class (equal to r.class) the metaclass root m the metamodule root Module • the conventional inheritance root ¤ • the (named) containment root Object (equal to r.croot) • the default creation-superclass ¤.incs[0] the conventional MRO root Kernel if having its initial value r.ec the implicit-metaclass root BasicObject.singleton_class c.ec the usual actualclass root Class.singleton_class x.class the class of x x.class x direct-instance-of y x.class equal to y x.instance_of? 
y x instance-of y x.class is a subclass of or equal to y x.class ≤ y && y.class == Class x kind-of y x.ec ≤ y x.kind_of? y Obtainable via included_modules. Equals x.incs own inclusion list of an includer x 1. x.included_modules if x is a module or r, 2. x.included_modules.reverse.drop( x.superclass.included_modules.length).reverse otherwise. 1. x.include? y if x is a module or r, x own-includer-of y includer x is an own includer of a module y 2. x.incs.include? y in general. Note: x.include?(y) && !x.superclass.include?(y) is not “reliable” with respect to repetitive ancestor lists. x includer-of y x.include? y (method of Module) x ≤ y HM descendancy x <= y (method of Module) Similarly, methods <, >=, > are defined with obvious meaning. Obtainable using x.superclass. Equals inheritance ancestors of a non-terminal x, x.hancs starting with x itself • [x] if x is r, • [x] + x.superclass.hancs otherwise. x.hancestors x.hancs without eigenclasses x.ancestors - x.included_modules Obtainable using x.superclass and x.incs. Equals MRO ancestors of an includer x, starting with x.ancs x itself 1. x.ancestors if x is a module or r, 2. [x] + x.incs + x.superclass.ancs otherwise. x.ancestors x.ancs without eigenclasses x.ancestors (method of Module) x.terminal? is an object x terminal? x.class != Class x.class == Class && x == x.ancestors[0] x.class? is an object x a class? • using .toS: x.class == Class && x.eci == 0 x.class == Class && x != x.ancestors[0] x.eigenclass? is an object x an eigenclass? • using .toS: x.eci > 0 x.terminative? is an object x terminative? • using .toS: x.pr.terminal? x.metaclass? is an object x a metaclass? Class == x.class && !!(Class >= x) x.module? is an object x a module? x.kind_of?(Module) && !x.kind_of?(Class) x.metamodule? is an object x a metamodule? x <= Module && x != Class && x.class? x.blank_slate? is an object x a blank slate object? !(Object === x) x.frozen? is an object x frozen? x.frozen? x.tainted? is an object x tainted? x.tainted? 
x.trusted? is an object x trusted? !x.untrusted? x.klass the virtual connection to x's eigenclass Not supported (?) x.aclass the actualclass of x Not supported (?) x.actuals actual eigenclasses-or-self of a primary Not supported (?) object x x.cparent the containment parent of an includer x x.cname the containment name of an includer x x.cpathname the containment path-name of an includer x See Containment reflection x.croot the containment root of an includer x x.cancs containment ancestors of an includer x nesting the current nesting Module.nesting self the current object self main the main context Not directly supported (?) but “recordable” by $main = self at the start of a Ruby program x nwo-method-owner-of s includer x has own method s which is not a methods.include?(s) where methods equals x.private_instance_methods(false) + x.protected_instance_methods(false) + x whiteout .public_instance_methods(false) (methods of Module). x nwo-method-inheritor-of s includer x owns or inherits a non-whiteout Same as with nwo-method-owner-of but with (false) omitted (or replaced by (true)) method s x wo-method-owner-of s (x.met(s includer x has own whiteout method s Not supported (?) ) == π) x wo-method-inheritor-of s (x includer x inherits a whiteout method s Not supported (?) 
.met_h(s) == π) x.mowner(s) owner of a non-whiteout method s inherited by x.instance_method(s).owner an includer x x.ec.mowner(s) owner of a non-whiteout method s inherited by x.method(s).owner (methods of Object and Method, respectively) an eigenclass x.ec x constant-owner-of s includer x has own constant s x.const_defined?(s, false) x constant-inheritor-of s includer x owns or inherits constant s x.const_defined?(s, true) x.cowner(s) owner of a constant s inherited by an includer x.ancs.find{ |y| y.const_defined?(s, false)} [:private,:protected,:public].find{|v| x.send( x.mvisibility(s) own method visibility of s in an includer x "#{v}_instance_methods",false).include?(s) x.mvisibility_h(s) inherited method visibility of s in an Same as with .mvisibility but with false replaced by true. includer x uqcr(s) unqualified constant resolution of s See Unqualified constant resolution qcr(x,s) qualified constant resolution of (x,s) See Qualified constant resolution uqmr(s) unqualified method resolution of s See Unqualified method resolution qmr(x,s) qualified method resolution of (x,s) See Qualified method resolution pmr(x,o,s) parent method resolution of (x,o,s) Not supported (?) x.hcodes list of hash-codes of a hash x Not supported (?) x.keys list of keys of a hash x x.keys x.values list of values of a hash x x.values x.dflt the default value/evaluator of a hash x x.default_proc || x.default x.dflt_call? the .dflt interpretation switch of a hash x !!x.default_proc Ruby reflection semantics The following table describes built-in methods for which the semantics has not been (directly) described in the previous subsection. Method name Owner Semantics • (A) x.singleton_methods(false).include?(s) iff x.ec.mvisibility(s) is defined and is not :private (in particular, x.ec is an nwo-method-owner of s). 
singleton_methods Kernel • (B) x.singleton_methods(true).include?(s) iff □ x.ec.mowner(s) is defined and appears before x.class in x.ec.ancs (in particular, x.ec.mowner(s) is not a class), and □ x.ec.mvisibility_h(s) (is defined and) is not :private. instance_methods Module x.instance_methods(inherit) equals x.protected_instance_methods(inherit) + x.public_instance_methods(inherit) up to member order Referred Ruby methods Owner Methods BasicObject ! == != equal? initialize instance_eval Object class clone define_singleton_method dup eql? extend freeze frozen? hash initialize_dup initialize_clone initialize_copy instance_of? kind_of? method object_id send singleton_class (Kernel) singleton_methods taint tainted? to_s untaint untrusted? ancestors class_eval const_defined? const_get const_missing const_set constants define_method extended include include? included_modules instance_method instance_methods Module method_defined? module_eval module_function name private private_class_method private_instance_methods private_method_defined? protected protected_instance_methods protected_method_defined? public public_class_method public_instance_method public_instance_methods public_method_defined? remove_const remove_method to_s undef_method Module.ec nesting [S:new:S] (∗[b]) Class new superclass Class.ec [S:new:S] (∗[a]) Kernel global_variables lambda p puts Array [] + << drop each empty? first include? index join last length map map! pop push replace reverse select shift slice uniq unshift zip String [] % + bytes chars encode encoding length match to_sym Encoding ascii_compatible? dummy? name Hash [] compare_by_identity? default default_proc keys values Method name owner receiver unbind UnboundMethod bind name owner receiver Symbol to_s Symbol.ec all_symbols ObjectSpace.ec _id2ref count_objects each_object Rational denominator numerator Complex imag real Range begin end exclude_end? 
Marshal.ec dump
• Gray color in Object indicates that this class is stated to be a method owner only by convention.
• (∗[a]) As of version 1.9 and contrary to [], there is no singleton method new of Class. In the expression Class.new, the instance method of the Class class is invoked. This can be introspected by Class.method(:new).owner == Class.
• (∗[b]) Similarly to (∗[a]) and contrary to [], there is no singleton method new of Module.
Comparison with Smalltalk-80
A comparison of the Ruby object model with the Smalltalk-80 object model is provided in a separate document [].
S1 superstructure representation
Set-theoretic representation of the S1 structure is provided in a separate document []. Objects are embedded into a superstructure of sets so that the S1 structure is modelled by set membership. In particular:
• The eigenclass map corresponds to the powerset operator.
• Inheritance ℍ corresponds to set inclusion ⊆.
• The kind-of (aka is-a) relation .ec ◦ ℍ (*) corresponds to set membership ∈.
The document also provides an alternative axiomatization of S1 structures.
The S1 structure
Since the S1 structure is of fundamental importance not only for Ruby but to object-oriented programming in general, a PDF document [] has been elaborated that is specially dedicated to this structure. In order to describe the implementation of the S1 structure, the document also introduces object actuality. This allows for a brief comparison with the Smalltalk-80 object model.
Object membership
As a related standalone document rather than an appendix, the document [] provides a generalization of S1 and S2 structures that applies to many programming languages. As a result, the fundamental part of the object model is described in a general setting so that [] is a special case. The S1 structure appears as the canonical reduct of Ruby object membership. The kind-of relation is the extended membership which arises by composing the canonical membership with the Μ relation (self-or-own-includer-of).
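The composition just described — kind-of as membership extended along module inclusion — can be observed directly in Ruby (the module and class names are illustrative):

```ruby
module M; end
class A; include M; end

a = A.new
puts a.instance_of?(A)        # true:  direct membership, a.class == A
puts a.instance_of?(M)        # false: M is not a.class
puts a.kind_of?(M)            # true:  extended membership via inclusion
puts A.ancestors.include?(M)  # true:  M appears in A's ancestor list
```

Here instance_of? tests the canonical membership only, while kind_of? also traverses the ancestor list, i.e. the Μ relation.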
The document [] uses the ≤ symbol for .sc-inheritance (which is called simply inheritance). In addition, both ≤ and Μ are reflexive over the whole set of objects, including all terminals. Bibliographic references , Ruby Hacking Guide, 2004, , Hawthorne Press 2008, http://www.hawthorne-press.com/WebPage_RHG.html) , Union mounts/writable overlays design, 2009, LWN.net http://lwn.net/Articles/355351/ , The Well-Grounded Rubyist, Manning Publications 2009 , Abstract State Machines: A Method for High-Level System Design and Analysis, Springer 2003 , Introduction to lattices and order, Cambridge University Press 2002 , The double inclusion problem, 2005, http://eigenclass.org/hiki/The+double+inclusion+problem , Ruby and otherwise, http://www.klankboomklang.com/category/ruby-internals/ , The Ruby Programming Language, O'Reilly 2008 , Putting Metaclasses to Work, Addison Wesley 1998 , Evolving algebras: An attempt to discover semantics, in Current trends in theoretical computer science: essays and tutorials, Grzegorz Rozenberg, Arto Salomaa (eds), World Scientific 1993, http:// , Evolving Algebras 1993: Lipari Guide, in Specification and Validation Methods, E. 
Börger (ed.), Oxford University Press 1995 , The Ruby Object Model - Structure and Semantics, 2009, http://www.hokstad.com/ruby-object-model.html , Ruby Draft Specification, 2009, http://ruby-std.netlab.jp/ , Ruby's Implementation Does Not Define its Semantics, 2010, http://yehudakatz.com/2010/02/25/rubys-implementation-does-not-define-its-semantics/ , The Secret Life Of Singletons, 2008, http://banisterfiend.wordpress.com/2008/10/25/the-secret-life-of-singletons/ , Ruby objects, classes and eigenclasses, 2008, http://mccraigmccraig.wordpress.com/2008/10/29/ruby-objects-classes-and-eigenclasses/ , nLab tree, 2011, http://ncatlab.org/nlab/show/tree#as_digraphs_6 , The Linux VFS Model: Naming structure, 2011, http://www.atalon.cz/vfs-m/linux-vfs-model/ , The Ruby Object Model: Comparison with Smalltalk-80, 2012, http://www.atalon.cz/rb-om/ruby-object-model/co-smalltalk/ , The Ruby Object Model: S1 superstructure representation, 2012, http://www.atalon.cz/rb-om/ruby-object-model/s1-rep/ , Ruby Object Model – The S1 structure, 2012, http://www.atalon.cz/rb-om/ruby-object-model/rb-om-s1.pdf , Object Membership: The core structure of object-oriented programming, 2012, http://www.atalon.cz/om/object-membership/ , Metaprogramming Ruby, Pragmatic Bookshelf 2010 , Read Ruby, http://ruby.runpaint.org Ruby Doc, http://ruby-doc.org Ruby Forum, http://www.ruby-forum.com Bug #2502 include X, Y , The Python 2.3 Method Resolution Order, 2003, http://www.python.org/2.3/mro.html , The Ruby Object Model and Metaprogramming, 2008, http://pragprog.com/screencasts/v-dtrubyom/the-ruby-object-model-and-metaprogramming Wikipedia: The Free Encyclopedia, http://wikipedia.org Functional graph Browser compatibility To be viewed correctly, this document requires advanced browser features to be supported, including • HTML entities (ϰ, Ϲ, ϻ, |, ¯, ř, ɦ, ₀, ₁, ℇ, ℍ, ℕ, ℙ, ℤ, ℬ, ℱ, ⅀, Ⅰ, Ⅱ, Ⅲ, Ⅳ, ↑, →, ↔, ↖, ↘, ↙, ↦, ↳, ↷, ↾, ∔, ∖, ∗, ∥, ∩, ≘, ≝, ≠, ⊆, ⊇, ⊎, ⋄, ⋯, □, ●, ◦, ϙ, Α, Δ, Λ, Μ, Ω, Φ, Π, Υ, 
α, &, ä, β, •, ∩, ¢, ∪, ¤, ↓, ∅, ≥, >, ↔, …, ι, ∈, κ, ←, ≤, <, —, −, μ, , –, ω, ö, φ, π, ", →, ⊂, ⊆, ⊃, ×),
• JavaScript,
• Scalable Vector Graphics (SVG).
Document history
March 2 2011: The initial release.
Major update. Main changes:
• Subtitle changed from [S:Basic:S] structure in detail to Data structure in detail.
• Built-in data types (Strings, Arrays, Hashes, Numerics, Ranges) introduced.
September 2 2011:
• Value domain introduced.
• The [S:.culturality:S] attribute abandoned.
• The concept of arrows further developed.
• The trusted? attribute added.
• Object copy introduced.
October 18 2011: Improved description of module inclusion.
January 2 2012:
• New section: Method de-aliasing, including the super logic.
• Extended description of includer containment transitions.
January 4 2012:
• An example of parent method resolution added.
• An elaborated example of an S2 structure added, including SVG pictures.
January 11 2012:
• The orientation of inclusion list order changed to coincide with that of ancestor lists.
• The term “pure instance” introduced as a synonym for “non-includer”.
• Notes to inclusion methods added.
February 10 2012:
• Added: monounary algebra, pseudotree, the S1[₀₁] structure, .ec-.aclass interchangeability, conventional actuality.
• New appendix: Comparison with Smalltalk-80.
March 2 2012:
• The .terminative? attribute removed from the S1 signature, (S1~1) and (S1~2) interchanged.
• New appendix: S1 superstructure representation.
April 3 2012: Renumbering of S4~ conditions so that conditions that only depend on S1 are listed first.
April 23 2012: New appendix: The S1 structure (a PDF article).
April 27 2012: Note about Class.ec not being an owner of new.
June 21 2012:
• Enhanced terminology for metaclasses (distinguishing between explicit and implicit).
• Introducing the c symbol for the Class class.
June 28 2012: Corrected and improved description of constant resolution (qcr/uqcr).
June 29 2012 The definition of a primorder algebra introduced explicitly (moved from []). October 10 2012 A reference to object membership [] added. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
MATLAB:Inline Function
From PrattWiki
MATLAB has a command that lets you develop an analytical expression of one or more inputs and assign that expression to a variable. The inline command lets you create a function of any number of variables by giving a string containing the function followed by a series of strings denoting the order of the input variables. This method is good for relatively simple functions that will not be used that often and that can be written in a single expression. It is similar to creating a MATLAB:Anonymous Function, with some significant differences.
The syntax is to put the expression to be evaluated in single quotes, followed in order by the variables of the function, with each of these also surrounded by single quotes. For example, if you want $c(a,b,\theta)$ to return $\sqrt{a^2+b^2-2ab\cos(\theta)}$, you could create an inline function as follows:
c = inline('sqrt(a.^2+b.^2-2*a.*b.*cos(theta))', 'a', 'b', 'theta')
MATLAB will respond with:
c =
     Inline function:
     c(a,b,theta) = sqrt(a.^2+b.^2-2*a.*b.*cos(theta))
indicating that the variable c is now actually an inline function object that takes three arguments. You can now use the function by putting numbers in for the arguments - for example:
SideThree = c(2, 3, pi/6)
will return
You can also use that function to return entire matrices. For example, the commands:
[x,y] = meshgrid(0:.1:2, 0:.1:2);
mesh(x, y, c(x, y, pi/4));
xlabel('Side 1');
ylabel('Side 2');
zlabel('Side 3');
title('Triangle Third Side vs. Sides Surrounding a 45^o Angle (mrg)')
print -depsc InlineExamplePlot
will produce the graph:
MATLAB Help File
The MATLAB help file for inline is[1]
INLINE Construct INLINE object.
INLINE(EXPR) constructs an inline function object from the MATLAB expression contained in the string EXPR. The input arguments are automatically determined by searching EXPR for variable names (see SYMVAR). If no variable exists, 'x' is used.
INLINE(EXPR, ARG1, ARG2, ...)
constructs an inline function whose input arguments are specified by the strings ARG1, ARG2, ... Multicharacter symbol names may be used.
INLINE(EXPR, N), where N is a scalar, constructs an inline function whose input arguments are 'x', 'P1', 'P2', ..., 'PN'.
Examples:
g = inline('t^2')
g = inline('sin(2*pi*f + theta)')
g = inline('sin(2*pi*f + theta)', 'f', 'theta')
g = inline('x^P1', 1)
1. Inline functions cannot access variables in the workspace at any time, even if those variables are global. Assume that the space between the quotes in the first argument exists in its own special MATLAB universe. This is different from anonymous functions, in that anonymous functions can see the workspace at the time they are created.
2. Inline functions can only have one expression and can only return a single variable (though that variable can be a matrix).
Post your questions by editing the discussion page of this article. Edit the page, then scroll to the bottom and add a question by putting in the characters *{{Q}}, followed by your question and finally your signature (with four tildes, i.e. ~~~~). Using the {{Q}} will automatically put the page in the category of pages with questions - other editors hoping to help out can then go to that category page to see where the questions are. See the page for Template:Q for details and examples.
External Links
1. ↑ Quoted from MATLAB help file for inline
Sky Diver Word Problem
January 26th 2009, 03:13 PM #1
Jan 2009
San Fran!
Sky Diver Word Problem
I need help on how to solve this one. A sky diver jumped from an airplane and fell freely for several seconds before releasing her parachute. Her height, h, in meters above the ground at any time t is given by:
h = -4.9t² + 5000 before she released her parachute, and
h = -4t + 4000 after she released the parachute.
How long after jumping did she release her parachute? How high was she above the ground at that time?
Last edited by megs_world; January 26th 2009 at 03:48 PM.
January 26th 2009, 05:59 PM #2
h = -4.9t² + 5000
h = -4t + 4000
Set the two equations equal to each other and solve for t ... then go back and calculate h using either equation ... using both to calculate h is a good check of your solution for t.
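Following the hint above, a worked solution (rounding to one decimal place):

```latex
\begin{align*}
-4.9t^2 + 5000 &= -4t + 4000 \\
4.9t^2 - 4t - 1000 &= 0 \\
t &= \frac{4 + \sqrt{(-4)^2 + 4(4.9)(1000)}}{2(4.9)}
   = \frac{4 + \sqrt{19616}}{9.8} \approx 14.7\ \text{s} \\
h &= -4(14.7) + 4000 \approx 3941\ \text{m}
\end{align*}
```

So she released the parachute about 14.7 seconds after jumping, roughly 3941 m above the ground (the negative root of the quadratic is discarded since t must be positive).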
Redondo Beach, Los Angeles, CA Palos Verdes Peninsula, CA 90274 Always applying a holistic approach when tutoring! ...Throughout the years, I have had the most rewarding experience when my students, besides turning their grades around, feel empowered to the point they become a math resource to their classmates. I take my classes very personally and work together with parents to... Offering 10+ subjects including algebra 2
Waltham, MA Math Tutor Find a Waltham, MA Math Tutor ...I'd love to see students' faces lighten up after my explanations clear all the confusion and questions they may have. I'd love to see students gaining confidence in solving problems. I had chances to reinforce my math skills while I have done research all these years. 5 Subjects: including algebra 1, algebra 2, precalculus, physics ...You will also get the benefit of my experience as a former Kaplan and Princeton Review GMAT and LSAT instructor, as well as Summit one-on-one SAT private tutor. I graduated magna cum laude from Brandeis University with a Bachelor in Philosophy and minors in Math, Russian Literature, Fine Arts Hi... 67 Subjects: including precalculus, marketing, logic, geography ...You have been fabulous, and he has really enjoyed working with you. "I would like to thank you for your diligent tutoring sessions, which paid off with much better results. I went up by 270 points to a 1960! M. also saw a major improvement as he went up to a score of 1800. 38 Subjects: including geometry, grammar, prealgebra, algebra 1 ...I pride myself on having a high sense of empathy which allows me relate to and understand the student’s perspective, identify any barriers to learning, and find ways to work around them. I hope to continue to teach and inspire others, as well as play a greater part in my local community along th... 22 Subjects: including calculus, prealgebra, geometry, algebra 2 ...I have a Master of Arts in Teaching in English as a Second Language from Salem State University. I work as an ESL teacher in the public schools at Horace Mann Elementary School. I have also been involved in English and writing tutoring individually and with groups in schools. 
9 Subjects: including SAT math, ACT Math, writing, French
Our Assumptions

Basic Assumptions: For every kilowatt-hour (kWh) of energy consumed, 1.6 pounds of carbon dioxide are released into the atmosphere. For every therm, 11.7 pounds of carbon dioxide are released into the atmosphere. The average price for electricity is $0.106 per kWh and the average price for natural gas is $1.21 per therm. For every gallon of gasoline consumed, 20 pounds of carbon dioxide are released into the atmosphere. All Energy Star appliance calculations assume average Energy Star models, average conventional models, and average appliance usage.

Change Your Light Bulbs

I have replaced __ 60-watt standard light bulbs with 13-watt CFL bulbs
I have replaced __ 75-watt standard light bulbs with 20-watt CFL bulbs
I have replaced __ 100-watt standard light bulbs with 23-watt CFL bulbs
Over the course of its lifetime, a 13-watt compact fluorescent light bulb (CFL) will emit 752 pounds of CO2 less than a standard 60-watt incandescent light bulb, a 20-watt CFL will emit 880 pounds of CO2 less than a standard 75-watt light bulb, and a 23-watt CFL will emit 1,232 pounds of CO2 less than a standard 100-watt light bulb. These calculations assume the replacement of a 1,000-hour standard light bulb with a 10,000-hour CFL. Also assumed is that for every kWh of energy consumed, 1.6 pounds of carbon dioxide are released into the atmosphere. Source: Energy Star "Compact Fluorescent Light Bulbs Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

Plant Trees

I have planted __ trees
Over the course of its 40-year lifetime, the average tree can absorb around one ton of CO2. Source: www.climatecrisis.org

I have saved __ acres of tropical rainforest
The slashing and burning of one acre of rainforest releases 80 tons of CO2 into the atmosphere.
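As a cross-check, the light-bulb figures above follow directly from the wattage difference, the stated 10,000-hour CFL lifetime, and the 1.6 lb CO2 per kWh assumption. A minimal Python sketch (the function name is ours, not the source's):

```python
# Reproduces the CFL savings figures above from the stated assumptions:
# a 10,000-hour CFL lifetime and 1.6 lb of CO2 per kWh of electricity.
CO2_PER_KWH = 1.6        # lb CO2 per kWh (stated assumption)
LIFETIME_HOURS = 10_000  # one CFL outlasts ten 1,000-hour incandescents

def cfl_savings_lb(incandescent_w, cfl_w):
    """Lifetime CO2 savings, in pounds, from one bulb swap."""
    kwh_saved = (incandescent_w - cfl_w) * LIFETIME_HOURS / 1000
    return kwh_saved * CO2_PER_KWH

print(round(cfl_savings_lb(60, 13)))   # 752
print(round(cfl_savings_lb(75, 20)))   # 880
print(round(cfl_savings_lb(100, 23)))  # 1232
```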
Buy ENERGY STAR Appliances

I have purchased an ENERGY STAR refrigerator
Over the course of the average Energy Star rated refrigerator's 13-year lifespan, it will emit 1,661 pounds of CO2 less than an average conventional model. This calculation assumes that you are choosing to install an Energy Star refrigerator with a top-mount freezer into your home instead of a conventional model, and that for every kWh of energy consumed, 1.6 pounds of carbon dioxide are released into the atmosphere. Source: Energy Star "Residential Refrigerator Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased an ENERGY STAR dishwasher and have an electric water heater
I have purchased an ENERGY STAR dishwasher and have a natural gas water heater
Assuming that the average lifespan of a dishwasher is 10 years, that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed, and that the average household runs four loads of dishes each week, an Energy Star rated dishwasher that utilizes an electric water heater saves an average of 1,155 lbs of CO2 over the course of its lifetime. The average Energy Star rated dishwasher that utilizes a natural gas powered water heater saves an average of 508 lbs of CO2 over the course of its lifetime. Source: Energy Star "Residential Dishwasher Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased an ENERGY STAR washing machine and have an electric water heater
I have purchased an ENERGY STAR washing machine and have a natural gas water heater
Assuming that the lifespan of an average washing machine is 11 years, that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed, and that the average household runs four loads of wash each week, an Energy Star rated washing machine that utilizes an electric water heater saves an average of 2,672 lbs of CO2 over the course of its lifetime.
An Energy Star rated washing machine that utilizes a natural gas powered water heater saves an average of 504 lbs of CO2 over the course of its lifetime. Source: Energy Star "Residential Dishwasher Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased an ENERGY STAR oil-fired furnace
I have purchased an ENERGY STAR natural gas-fired furnace
During its 17-year lifespan, an average Energy Star rated oil furnace will emit approximately 65,513 pounds of CO2 less than an average conventional unit. During the 18-year lifespan of an average Energy Star gas-fired furnace, it will emit 55,698 pounds of CO2 less than an average conventional unit. This calculation assumes that you are replacing the furnace of a 2,500-square-foot home located in one of the Mid-Atlantic States with a new Energy Star furnace that uses a programmable thermostat. Also assumed is that 159.5 pounds of carbon dioxide are emitted for every million British Thermal Units (MMBtu) of oil consumed and that 116.3 pounds of carbon dioxide are emitted for every MMBtu of gas consumed. Source: Energy Star "Residential Furnace Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased an ENERGY STAR oil-fired boiler
I have purchased an ENERGY STAR natural gas-fired boiler
During its 20-year lifespan, an average Energy Star rated oil-fired boiler will emit around 150,910 pounds of CO2 less than an average conventional unit. During the 20-year lifespan of an average Energy Star rated gas-fired boiler, it will emit 110,036 pounds of CO2 less than an average conventional unit. These calculations assume that the house is located in a place with a climate similar to that of Washington DC and that the new boiler uses a programmable thermostat.
Also assumed is that 159.5 pounds of carbon dioxide are emitted for every MMBtu of oil consumed and that 116.3 pounds of carbon dioxide are emitted for every MMBtu of gas consumed. Source: Energy Star "Residential Boiler Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased __ ENERGY STAR air conditioner(s)
The average Energy Star rated air conditioner saves 2,195 lbs of CO2 from being emitted over its 11-year lifecycle. This calculation assumes that an Energy Star model is used instead of a conventional air conditioner, in average US climatic conditions, and that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Residential Room Air Conditioner Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased __ ENERGY STAR desktop computer(s) and monitor(s)
Using an Energy Star rated desktop computer and monitor can reduce CO2 emissions by 1,274 lbs over their 4-year lifespan. Most of the savings—1,154 pounds of CO2—come from the monitor. These calculations assume the replacement of an average conventional computer with a 17" monitor by an average Energy Star rated computer with a 17" LCD monitor. Also assumed is that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Computer and Monitor Savings Calculators" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I have purchased __ ENERGY STAR television(s)
Over the 9-year lifetime of an average Energy Star rated television, it produces 619 pounds of CO2 less than an average conventional unit. This calculation assumes that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed.
Source: Energy Star "Television Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

Conserve Energy

I will adjust my thermostat down __ °F in the winter
I will adjust my thermostat up __ °F in the summer
Adjusting your thermostat down in the winter or up in the summer will reduce CO2 emissions by 500 pounds a year for each degree. Source: www.climatecrisis.org

I will keep my computer off for __ hours per day
This calculation assumes that each hour pledged will be adhered to every day of the year. The calculation is for a desktop computer with a 17" monitor that is used an average of 250 days a year. Also assumed is that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Computer and Monitor Savings Calculators" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I do not own a computer
If you do not own or use your own computer, you can save approximately 5,107 pounds of CO2 from being produced. This calculation assumes average computer usage and a 4-year lifespan for a desktop computer with a 17" regular monitor. Also assumed is that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Computer and Monitor Savings Calculators" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I do not own a TV
If you do not have a TV in your home, you can save approximately 2,700 pounds of CO2 from being produced. This calculation assumes a 9-year lifespan for a TV and that the average TV in the United States is on for 5.5 hours a day. Also assumed is that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Television Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

I do not own an air conditioner
If you do not use an air conditioner in your home, you can save approximately 23,707 pounds of CO2 from being produced over its 11-year lifespan.
This calculation assumes a location with a climate similar to that of Washington DC. Also assumed is that 1.6 pounds of carbon dioxide are emitted for every kWh of energy consumed. Source: Energy Star "Residential Room Air Conditioner Savings Calculator" at www.energystar.gov/index.cfm?c=bulk_purchasing.bus_purchasing

Use Hot Water Efficiently

I have turned down my hot water heater to 120 degrees
If you turn your hot water heater down to 120 degrees, you can save 8,250 pounds of CO2 over its 15-year lifespan. Source: www.climatecrisis.org

I have wrapped my water heater in insulation
If you wrap your hot water heater in an insulating blanket, you can save 15,000 pounds of CO2 from being produced over its 15-year lifespan. Source: www.climatecrisis.org

I have installed a low-flow showerhead
By installing a low-flow showerhead and reducing the amount of hot water being produced, you can save 2,450 pounds of CO2 from being produced over the showerhead's 7-year lifespan. Source: www.climatecrisis.org

I have installed low-flow sink heads or faucet aerators
By installing low-flow sink heads or faucet aerators and reducing the amount of hot water being produced, you can save 3,150 pounds of CO2 from being produced over their 7-year lifespan.

I will wash my clothes with cold or warm water instead of hot
By washing clothes in cold or warm water instead of hot, 500 pounds of CO2 can be saved each year. This assumes 2 loads of wash completed each week. Source: www.enviroliteracy.org/article.php/1345.html

Buy Clean Energy

I have purchased renewable energy for __% of my electric bill
This assumes the purchase of wind or solar generated power and no direct CO2 emissions associated with the purchase of this renewable energy. The renewable energy purchased for this action can be from a utility offering energy from renewable power sources or through the purchase of renewable energy credits.
I have installed a solar or wind power system that provides __% of my electricity
Assumes no direct CO2 emissions associated with the use of wind and solar power.

Drive Less—Drive Smart

I will take public transportation __ days per week
Assumes that taking public transportation cuts CO2 emissions by 90%. Also assumes the average person commutes 50 weeks per year, that 20 pounds of CO2 are emitted for every gallon of gasoline consumed, and that the mileage for the car not driven is 22 miles per gallon.

I will walk or bike __ days per week
Assumes no direct CO2 emissions associated with walking or biking. Also assumes the average person commutes 50 weeks per year, that 20 pounds of CO2 are emitted for every gallon of gasoline consumed, and that the mileage for the car not driven is 22 miles per gallon.

I will carpool or carshare __ days per week
Assumes sharing a ride with one other person, that the average person commutes 50 weeks per year, that 20 pounds of CO2 are emitted for every gallon of gasoline consumed, and that the mileage for the car not driven is 22 miles per gallon. Also assumes that the weight of one extra person in the car does not increase the amount of gasoline consumed.

I will telecommute from a home office __ days per week
Assumes no additional CO2 emissions associated with telecommuting from home. Also assumes the average person commutes 50 weeks per year, that 20 pounds of CO2 are emitted for every gallon of gasoline consumed, and that the mileage for the car not driven is 22 miles per gallon.

I will keep my car properly tuned
Keeping your car properly tuned reduces CO2 emissions by an average of 545 pounds a year. This calculation assumes that the average car is driven 12,000 miles a year and has a fuel efficiency of 22 mpg; that an average of 20 pounds of CO2 are produced per gallon of gasoline consumed; and that there is a 5% reduction in gas use as a result of keeping the car tuned.
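The 545-pound tuning figure above falls out of the stated assumptions (12,000 miles per year, 22 mpg, 20 lb CO2 per gallon, 5% fuel reduction), and the commuting items use the same arithmetic once a commute distance is supplied. A hedged Python sketch; the 30-mile round trip and the function name are hypothetical, not from the source:

```python
MPG = 22              # stated mileage of the car not driven
LB_CO2_PER_GAL = 20   # stated lb CO2 per gallon of gasoline
WEEKS_PER_YEAR = 50   # stated commuting weeks per year

# Tune-up savings: 5% of the CO2 from 12,000 miles of annual driving.
annual_driving_lb = 12_000 / MPG * LB_CO2_PER_GAL
print(round(annual_driving_lb * 0.05))  # 545

def commute_savings_lb(round_trip_miles, days_per_week, reduction=1.0):
    """Annual CO2 avoided; reduction=0.9 for transit, 1.0 for walking/biking."""
    weekly = round_trip_miles / MPG * LB_CO2_PER_GAL * days_per_week
    return weekly * WEEKS_PER_YEAR * reduction

# Hypothetical 30-mile round trip, taking transit 5 days a week:
print(round(commute_savings_lb(30, 5, reduction=0.9)))
```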
Sources: www.enviroliteracy.org/article.php/1345.html; www.epa.gov/otaq/climate/420f05004.htm

I will ensure there is sufficient tire pressure in cars I drive
Ensuring that there is sufficient pressure in car tires reduces CO2 emissions by an average of 327 pounds a year. This calculation assumes that the average car is driven 12,000 miles a year and has a fuel efficiency of 22 mpg; that an average of 20 pounds of CO2 are produced per gallon of gasoline consumed; and that CO2 emissions from driving are reduced by 3% as a result of keeping tires properly inflated. Sources: www.climatecrisis.org; www.epa.gov/otaq/climate/420f05004.htm

I have replaced my gasoline powered car with a hybrid
This calculation assumes that the average car is driven 12,000 miles a year, that an average of 20 pounds of CO2 are produced per gallon of gasoline consumed, and that the average new car is owned for 4 years before it is traded in for a new one. Also assumed is that the average small car has a fuel efficiency of 30 mpg, the average mid-sized car has a fuel efficiency of 23 mpg, and the average large car or SUV has a fuel efficiency of 18 mpg. Hybrid mileages come from www.greenhybrid.com. Sources: www.epa.gov/otaq/climate/420f05004.htm; www.greenhybrid.com

Eat Green

I will eat a vegetarian diet __ days a week
According to Gidon Eschel and Pamela A. Martin of the University of Chicago, a person with a red meat diet emits the global warming equivalent of 2.52 tons of CO2 a year more than a person with a vegetarian diet. For each day a week you eat a vegetarian diet, you save 718 lbs of CO2 a year. It is assumed that the portion of animal-based calories in the diet is 26%. Source: geosci.uchicago.edu/~gidon/papers/nutri/nutriEI.pdf

I will eat only locally-produced food __ days a week
According to www.climatecrisis.org, the average meal in the United States travels 1,200 miles from the farm to your plate. The average local meal travels 200 miles.
If a truck travels just 10 miles per gallon of gasoline, it requires 100 gallons of fuel to transport food the excess distance. Source: www.climatecrisis.org

I will eat organic food __ days per week
For every day of the week that you eat only organically produced food, you save 572 pounds of CO2 from being emitted each year. Source: www.stopglobalwarming.org/carboncalculator.asp?c=5

Reduce Waste—Support Recycling

I now buy recycled paper with __% post-consumer waste content
Switching to using only 100% post-consumer recycled paper reduces CO2 emissions by 288 pounds per year. This calculation assumes that the average per capita paper consumption in the United States is 273 pounds per year and that one ton of paper with a post-consumer recycled content of 100% emits 2,108 fewer pounds of CO2 equivalent than non-recycled paper. Source: Environmental impact estimates were made using the Environmental Defense Paper Calculator. For more information visit www.papercalculator.org.

I will recycle all my white and mixed paper
For every pound of paper that is recycled, CO2 emissions are reduced by 4 pounds. This calculation assumes a yearly per capita paper consumption of 273 pounds and a 100% recycling rate. Sources: www.enviroliteracy.org/article.php/1345.html

I will recycle all my newspaper
Recycling 100% of your newspaper waste can reduce your annual CO2 emissions by 184 pounds. This calculation assumes a 100% newspaper recycling rate and average waste generation; it also takes into account the CO2 produced throughout the paper's lifecycle. Source: epa.gov/climatechange/wycd/calculator/ind_calculator.html

I will recycle all my aluminum and steel cans
Recycling 100% of your aluminum and steel cans can reduce your annual CO2 emissions by 166 pounds. This calculation assumes a 100% recycling rate and average waste generation; it also takes into account the CO2 produced throughout the metal's lifecycle.
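Two of the figures above can be reproduced directly: the 100 gallons of excess food-transport fuel, and the 288 lb/yr from switching to 100% post-consumer paper. A quick Python check (variable names are ours):

```python
# Food miles: (1,200 - 200) excess miles at 10 mpg for the truck.
excess_gallons = (1200 - 200) / 10
print(excess_gallons)  # 100.0

# Recycled paper: 273 lb/yr per capita, 2,108 lb CO2-equivalent saved
# per ton (2,000 lb) of 100% post-consumer paper.
paper_savings_lb = 273 / 2000 * 2108
print(round(paper_savings_lb))  # 288
```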
Source: epa.gov/climatechange/wycd/calculator/ind_calculator.html

I have reduced my garbage by 10% by buying products with less packaging and carrying my own shopping bags
Reducing your garbage by 10% can save 1,200 lbs of CO2 from being released into the atmosphere. An additional 17 pounds of CO2 can be saved each year by bringing your own bags to the grocery store. Sources: www.climatecrisis.org; www.seattleu.edu/sustainability/gi_be_waste.asp

I have reduced the amount of junk mail that I receive by taking myself off of mailing lists
The resources and energy it takes to produce the 5.2 million tons of bulk mail sent out in the United States each year result in the production of 92 pounds of CO2 per person annually. Source: www.newdream.org/cnad/user/ttt_detail.php?config[r55][instance_uid]=4

Offset Your Emissions

I have offset __ miles of my car travel
Assumes average vehicle mileage of 22 miles per gallon and the release of 20 pounds of carbon dioxide per gallon of gasoline consumed.

I have offset __ miles of my air travel
Assumes CO2 emissions of 0.417 pounds per mile flown. Source: www.carbonfund.org
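The offset arithmetic above is simple enough to state as code; the factors are the stated 22 mpg, 20 lb CO2 per gallon, and 0.417 lb CO2 per mile flown (function names are ours):

```python
def car_offset_lb(miles):
    """Pounds of CO2 attributed to the given car mileage (22 mpg, 20 lb/gal)."""
    return miles / 22 * 20

def air_offset_lb(miles):
    """Pounds of CO2 attributed to the given air mileage (0.417 lb/mile)."""
    return miles * 0.417

print(round(car_offset_lb(1000)))     # 909
print(round(air_offset_lb(1000), 1))  # 417.0
```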
Flaws in a Young-earth Cooling Mechanism

George Murphy and Glenn Morton

The main body of this paper is in another page. This page contains a supplementary APPENDIX (it wasn't in the paper published by NCSE) that is described in the published paper:

One of us has developed a simple classical model for a harmonic oscillator (like a particle oscillating in a crystal), and in this model the particle does not lose energy to the cosmic expansion. While other force terms could be used in the equation of motion to give different results, the one used here seems to be the simplest and most natural generalization of the ordinary linear restoring force. The fact that energy is not lost here suggests that Humphreys's qualitative argument is incorrect. {emphasis added}

In a space-time with uniform flat expanding space sections (as ours now seems to be) the line element in co-moving coordinates is

ds^2 = - dt^2 + R^2(t)[dx^2 + dy^2 + dz^2]

with R the scale factor and c = 1. The equation of motion for a particle is

d^2x^i/ds^2 + 2 Γ^i_{j0} (dx^j/ds)(dt/ds) = F^i/m

where Γ^i_{j0} are the relevant Christoffel symbols and F^i is the force due to non-gravitational interactions. For sufficiently slow motions and weak fields we can replace the proper time s by coordinate time. The equation for motion in the x direction (with F the x-component of force) is

d^2x/dt^2 + 2H dx/dt = F/m ,

where H = (1/R)(dR/dt) is the Hubble parameter. For an inflationary de Sitter state, which is of most interest in several situations, R ~ exp(Ht) with H a constant. For a free particle (F = 0) we have

dv/dt + 2(dR/dt)v/R = 0, with v = dx/dt.

This gives v = C/R^2 with C a constant. x, however, is only a coordinate, and actual spatial distances are given by X where dX = Rdx. We can write V = dX/dt = Rv so V = C/R. For both v and V we see the "stopping dead" phenomenon noted by Tolman and Schroedinger.
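The free-particle result above (v = C/R^2 when R ~ exp(Ht)) can be checked numerically by a simple Euler integration of dv/dt = -2Hv; this is our own sketch in plain Python, with arbitrary values for H and C:

```python
import math

H, C = 0.7, 1.0          # arbitrary Hubble parameter and initial velocity
dt, t, v = 1e-5, 0.0, C  # v(0) = C since R(0) = 1
while t < 2.0:           # integrate dv/dt = -2 H v forward in time
    v += -2.0 * H * v * dt
    t += dt
R = math.exp(H * t)
print(abs(v - C / R**2) < 1e-3)  # True: v tracks C / R^2, damped toward zero
```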
Since our coordinates are co-moving, this means that all free particles are eventually swept along with the cosmic expansion (as long as R is increasing). What about a particle that is not free – i.e., for which F is not zero? How might we represent a harmonic oscillator, which provides a useful model of many systems? There are a couple of possibilities which are easy to deal with mathematically, though neither of them is perfect. But they are, of course, only mathematical models.

First, we can write F = -mw^2x so that the restoring force is proportional to the coordinate x. Then we have

d^2x/dt^2 + 2H dx/dt + w^2x = 0

which, when H is constant, is the standard equation for a damped oscillator. The general solution for the underdamped case (w > H) is

x = A exp(-Ht) cos(Wt + b)

where the damped frequency is W = (w^2 – H^2)^1/2 and A and b are constants. But again we must remember that displacements are actually given by X, not x. We have

dX/dt = exp(Ht) dx/dt = -HA cos(Wt + b) – WA sin(Wt + b)

which can be integrated to give (after some trigonometric manipulations)

X = A(1 + H^2/W^2)^1/2 cos(Wt + b + tan^-1(H/W)) .

Cosmic expansion thus results in an undamped oscillation with a lower frequency and larger amplitude than would be the case with no expansion. The energy is given by

E = (1/2)m(dX/dt)_max^2 ,

which works out to

(1/2)mW^2(1 + H^2/W^2)A^2 = (1/2)mw^2A^2 .

Thus the energy is not only constant but has the same value as in a static universe.

The restoring force in this model is proportional to the coordinate x, and one might argue that it would be better to write it as F = -mw^2X rather than F = -mw^2x. In this case we would have

d^2x/dt^2 + 2H dx/dt + w^2X = 0

as the equation of motion. We can put this entirely in terms of X by writing

dX/dt = R dx/dt , d^2X/dt^2 = R d^2x/dt^2 + (dR/dt)(dx/dt)

to get

dx/dt = (dX/dt)/R and d^2x/dt^2 = (d^2X/dt^2)/R – H(dX/dt)/R .

Substitution then gives

d^2X/dt^2 + H dX/dt + w^2RX = 0.
We now have damped oscillations (though with half the damping constant of the previous model) and an effective frequency R^1/2 w that increases with time. If this increase is slow (as it will be if the vibrational period is much less than the Hubble time 1/H) we can write approximately

X = D exp(-Ht/2) cos(R^1/2 wt + h)

with D and h constants. This is no longer a simple sinusoidal oscillation, but to a first approximation the energy will be proportional to the square of the amplitude, D^2 exp(-Ht), multiplied by the square of the effective frequency, Rw^2. Since R ~ exp(Ht) the time dependence cancels out and the energy will be constant over many oscillation periods.

We have assumed here that H is a constant, a situation which our present universe seems to be approaching and which would hold in any inflationary phase. In this case both our models seem to indicate that there is no dissipation of the energy of an oscillator due to the expansion. The situation will be different if R(t) has a different form. If R ~ t^1/2, as for a radiation-filled universe with no cosmological constant, then the oscillator equation for our second model can be solved in terms of Bessel functions, and use of the asymptotic form of those functions indicates that there would be a decrease in energy.
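The energy identity for the first oscillator model, (1/2)mW^2(1 + H^2/W^2)A^2 = (1/2)mw^2A^2 with W^2 = w^2 - H^2, reduces to W^2 + H^2 = w^2; a quick numeric check with arbitrary values:

```python
m, A, w, H = 1.3, 0.8, 2.0, 0.5  # arbitrary values with w > H (underdamped)
W2 = w**2 - H**2                 # square of the damped frequency W
E_expanding = 0.5 * m * W2 * (1 + H**2 / W2) * A**2
E_static = 0.5 * m * w**2 * A**2
print(abs(E_expanding - E_static) < 1e-12)  # True: same energy as static case
```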
Adrian Albert Lectures in Algebra

The Albert Lectures are the oldest of the four lecture series. They are named after Abraham Adrian Albert (1905-1972), who received his Ph.D. from Chicago in 1928 under the supervision of L.E. Dickson. Albert later returned to Chicago as a member of the faculty, and served for a time as chair of the department and as President of the AMS.

Past Albert Lecturers include: Nathan Jacobson, Michael Atiyah, John Milnor, Jürgen Moser, Enrico Bombieri, Shiing-Shen Chern, Dennis Sullivan, H. Jerome Keisler, Barry Mazur, John Griggs Thompson, William Fulton, Armand Borel, Joe Harris, Benedict Gross, J.P. Serre, Andrei Suslin, Efim Zelmanov, Karl Rubin, Phillip Griffiths, Jacques Tits, Richard Swan, Michael Artin, Jeremy Rickard, Carlos Simpson, Maxim Kontsevich, Richard Taylor, Michel Broué, Don Zagier, Alexander Merkurjev, Andrei Okounkov, Claire Voisin, Raphaël Rouquier, Jacob Lurie, and Peter Sarnak.

2014 Speaker: Spencer Bloch (University of Chicago)

Lecture 1: Algebraic cycles
Friday February 28, 2014, 4:00PM--5:20PM, Ryerson 251
Abstract: A rapid overflight of the great peaks in the algebraic cycle range, including Abel's theorem, the Riemann-Roch theorem, enumerative geometry, higher K-theory, motivic cohomology, and the Hodge conjecture.

Lecture 2: Periods associated to cycles
Monday March 3, 2014, 4:00PM--5:20PM, Ryerson 251
Abstract: Extensions associated to cycles, Beilinson's conjectures, Feynman amplitudes, Nahm's conjecture.

Lecture 3: Recent work related to the Hodge conjecture
Tuesday March 4, 2014, 4:30PM--5:20PM, Ryerson 251
Abstract: I will discuss joint work with H. Esnault and M. Kerz. We use Thomason's descent theory for K-theory of singular schemes to show, in both mixed characteristic and characteristic zero, that the Hodge conjecture is true infinitesimally; algebraic cycles lift infinitesimally if and only if the crystalline or horizontal lifts of their cycle classes in cohomology are Hodge.
2012 Speaker: Peter Sarnak (Princeton University, IAS) on Randomness in Number Theory

Lecture 1: Thin matrix groups and Diophantine problems
Wednesday November 7, 4-5 PM, Eckhart 206
Abstract: The general Ramanujan Conjectures for congruence subgroups of arithmetic groups, and the approximations that have been proven towards them, are central to many diophantine applications. Recently analogous results have been established for quite general subgroups of GL(n,Z). We will describe these and review some of their applications and the ubiquity of thin groups.

Lecture 2: Mobius randomness and horocycle flows at prime times
Thursday November 8, 3-4 PM, Eckhart 206
Abstract: The Mobius function mu(n) is (-1) raised to the number of prime factors of n if n has no square factors, and is zero otherwise. Understanding the randomness (often referred to as the Mobius randomness principle) in this function is a fundamental and very difficult problem. We will explain a precise dynamical formulation of this randomness principle and report on recent advances in establishing it and its applications, especially in connection with a prime number type theorem for horocycle flows at prime times.

Lecture 3: Symmetry types for families of automorphic L-functions
Friday November 9, 4-5 PM, Eckhart 206
Abstract: A theory for the local distribution of the zeros for families of automorphic L-functions has been developed over the last 15 years. We will describe some of the basics of this theory, specifically the apparent 4-fold symmetry types from random matrix ensembles that arise, as well as some recent advances.

2011 Speaker: Jacob Lurie (Harvard)

Tamagawa Numbers via Nonabelian Poincaré Duality
May 13 at 4pm in Ry 251
May 16 at 4pm in E 206
May 17 at 4:30pm in E 206

Abstract: Let L be a positive definite lattice. There are only finitely many positive definite lattices L' which are isomorphic to L modulo N for every N > 0.
In fact, there is a formula for the number of such lattices (counted with appropriate multiplicities), called the Siegel mass formula. In the first lecture, I will review the Siegel mass formula and how it can be deduced from a conjecture of Weil on volumes of adelic points of algebraic groups. This conjecture was proven over number fields by Kottwitz, building on earlier work by Langlands and Lai. In the third lecture, I will describe some joint work with Dennis Gaitsgory which provides an approach to proving the analogue of Weil's conjecture over function fields. This approach is inspired by a "nonabelian" version of Poincare duality, which I will describe in the second lecture.
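The Mobius function defined in the abstract of Sarnak's second lecture is easy to compute by trial division; a small illustrative Python implementation (ours, for concreteness):

```python
def mobius(n):
    """(-1)**k if n is a product of k distinct primes; 0 if n has a square factor."""
    k, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:  # repeated prime factor => square factor
                return 0
            k += 1
        else:
            d += 1
    if n > 1:               # leftover prime factor
        k += 1
    return (-1) ** k

print([mobius(n) for n in range(1, 11)])  # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
```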
de la Maza, Nonlinear programming and nonsmooth optimization by successive linear programming
Results 1 - 10 of 25

, 2003
Cited by 41 (12 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.

- Large Scale Nonlinear Optimization, 35–59, 2006
Cited by 38 (3 self)
This paper describes Knitro 5.0, a C-package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming.
Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.

, 2007
Cited by 11 (0 self)
This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential quadratic programming (SQP) and sequential linear-quadratic programming (SLQP) methods that use trust regions to promote convergence. The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.

- THE STATE OF THE ART IN NUMERICAL ANALYSIS, 1996

, 2008
Cited by 7 (2 self)
Abstract. We consider minimization of functions that are compositions of prox-regular functions with smooth vector functions. A wide variety of important optimization problems can be formulated in this way.
We describe a subproblem constructed from a linearized approximation to the objective and a regularization term, investigating the properties of local solutions of this subproblem and showing that they eventually identify a manifold containing the solution of the original problem. We propose an algorithmic framework based on this subproblem and prove a global convergence result. - SIAM Journal on Optimization , 2006 "... Abstract. Techniques that identify the active constraints at a solution of a nonlinear programming problem from a point near the solution can be a useful adjunct to nonlinear programming algorithms. They have the potential to improve the local convergence behavior of these algorithms, and in the bes ..." Cited by 6 (1 self) Add to MetaCart Abstract. Techniques that identify the active constraints at a solution of a nonlinear programming problem from a point near the solution can be a useful adjunct to nonlinear programming algorithms. They have the potential to improve the local convergence behavior of these algorithms, and in the best case can reduce an inequality constrained problem to an equality constrained problem with the same solution. This paper describes several techniques that do not require good Lagrange multiplier estimates for the constraints to be available a priori, but depend only on function and first derivative information. Computational tests comparing the effectiveness of these techniques on a variety of test problems are described. Many tests involve degenerate cases, in which the constraint gradients are not linearly independent and/or strict complementarity does not hold. , 2002 "... This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [9]. The step computation is performed in two stages. In the rst stage a linear program is solved to estimate the active set ..." 
Cited by 5 (1 self) Add to MetaCart This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [9]. The step computation is performed in two stages. In the rst stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the `1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active atthesolution of the linear program. The EQP incorporates a trust-region constraint and is solved (inexactly) by means of a projected conjugate gradient method. Numerical experiments are presented illustrating the performance of the algorithm on the CUTEr [1] test set. , 2008 "... A sequential quadratic programming (SQP) method is presented that aims to overcome some of the drawbacks of contemporary SQP methods. It avoids the difficulties associated with indefinite quadratic programming subproblems by defining this subproblem to be always convex. The novel feature of the appr ..." Cited by 5 (1 self) Add to MetaCart A sequential quadratic programming (SQP) method is presented that aims to overcome some of the drawbacks of contemporary SQP methods. It avoids the difficulties associated with indefinite quadratic programming subproblems by defining this subproblem to be always convex. The novel feature of the approach is the addition of an equality constrained phase that promotes fast convergence and improves performance in the presence of ill conditioning. This equality constrained phase uses exact second order information and can be implemented using either a direct solve or an iterative method. The paper studies the global and local convergence properties of the new algorithm and presents a set of numerical experiments to illustrate its practical performance. , 2003 "... 
We analyze the global convergence properties of a class of penalty methods for nonlinear programming. These methods include successive linear programming approaches, and more speci cally the SLP-EQP approach presented in [1]. Every iteration requires the solution of two trust region subproblems inv ..." Cited by 4 (1 self) Add to MetaCart We analyze the global convergence properties of a class of penalty methods for nonlinear programming. These methods include successive linear programming approaches, and more speci cally the SLP-EQP approach presented in [1]. Every iteration requires the solution of two trust region subproblems involving linear and quadratic models, respectively. The interaction between the trust regions of these subproblems requires careful consideration. It is shown under mild assumptions that there exist an accumulation point which is a critical point for the penalty function. , 2004 "... This paper reviews the development of exact penalty methods for nonlinear optimization and discusses their increasingly important role in optimization algorithms and software. In their most recent stage of development, penalty methods adjust the penalty parameter dynamically. By controlling the deg ..." Cited by 4 (2 self) Add to MetaCart This paper reviews the development of exact penalty methods for nonlinear optimization and discusses their increasingly important role in optimization algorithms and software. In their most recent stage of development, penalty methods adjust the penalty parameter dynamically. By controlling the degree of linear feasibility achieved at every iteration, these methods balance progress toward optimality and feasibility. The choice of the penalty parameter thus ceases to be a heuristic and is determined, instead, by a subproblem with clearly defined objectives. The new penalty update strategy is presented in the context of sequential linear-quadratic penalty methods, and is then extended to sequential quadratic programming. 
The paper concludes with a discussion of penalty parameters for merit functions used in line search methods.
Southlake Math Tutor Find a Southlake Math Tutor ...Prealgebra is the key to success in all future math courses. The basics learned in this course help in many ways determine the student's success in Algebra as well as Geometry. I have over 22 years of secondary mathematics teaching experience and over 5 years of one-on-one tutoring experience. 15 Subjects: including statistics, algebra 1, algebra 2, calculus ...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding learning, memory, and instructional practices. I utilize this knowledge to identify underlying process... 39 Subjects: including ACT Math, English, statistics, reading ...I have a Bachelor of Science degree in Multidisciplinary Studies (Dbl Major in Education, Counseling, Minor in Computer Science), and an advanced degree in Biblical Counseling/Life Coaching. Does your student need targeted core subject support, homework assistance, a boost in confidence, intrins... 27 Subjects: including prealgebra, English, reading, writing ...For the visual learner, I would work with various visual illustrations, flash cards and other activities that would aid the student in their studies. Study skills would also include doing research, planning out projects with definite steps towards completion, and learning how to outline and take... 8 Subjects: including algebra 1, vocabulary, grammar, prealgebra ...I love to teach mathematics, not to just solve problems, but also to make sure concepts and fundamentals are clear. I have experience in teaching Pre-Algebra, Algebra 1, Geometry and Algebra 2. I also have an expertise in more complex mathematics such as Statistics, Pre-Calculus, Operations Research. 12 Subjects: including calculus, SAS, linear algebra, algebra 1
The loop space of a topological group $G$ inherits the structure of a group under pointwise group multiplication of loops. This is called a loop group of $G$. (Notice that this is a group structure in addition to the infinity-group structure of any loop space under composition of loops.) If $G$ is a Lie group, then there is a smooth version of the loop group consisting of smooth functions $S^1 \to G$. By the discussion at manifold structure of mapping spaces, the collection of such smooth maps is itself an infinite-dimensional smooth manifold, and so the smooth loop group of a Lie group is an infinite-dimensional Lie group. Among all infinite-dimensional Lie groups, loop groups are a most well behaved class. In particular their representation theory is similar to that of compact Lie groups. Some of these nice properties are due solely to the circle $S^1$ being a compact manifold. For $X$ any other compact manifold there is similarly an infinite-dimensional Lie group $[X,G]$ of smooth functions $X \to G$ under pointwise multiplication in $G$. Such mapping groups appear in physics notably as groups of gauge transformations over a spacetime/worldvolume $X$. Accordingly, loop groups play a prominent role in 1- and 2-dimensional quantum field theory, notably in the WZW model describing the propagation of a string on $G$. The current algebras (affine algebras) which arise as Lie algebras of (centrally extended) loop groups derive their name from this relation to physics. Accordingly, as for compact Lie groups, the representation theory of loop groups is naturally understood in terms of their geometric quantization (by a loop variant of the orbit method). On the other hand, for $X$ of dimension greater than 1 there are very few known results about the properties of the mapping group $[X,G]$.

Lie algebra

Let $G$ be a compact Lie group. Write $\mathfrak{g}$ for its Lie algebra. The Lie algebra of $L G$ is the loop Lie algebra $Lie(L G) \simeq L Lie(G) = L \mathfrak{g} \,.$

Let $G$ be a compact Lie group. The complexification of $L G$ is the loop group of the complexification of $G$: $(L G)_{\mathbb{C}} \simeq L (G_\mathbb{C}) \,.$

Central extensions

Loop groups of compact Lie groups have canonical central extensions, often called Kac-Moody central extensions. A detailed discussion is in (PressleySegal). A review is in (BCSS).

Positive energy

Write $t_\theta \colon L G \to L G$ for the automorphism which rotates loops by an angle $\theta$. The corresponding semidirect product group we write $S^1 \rtimes L G$.

Let $V$ be a topological vector space. A linear representation $S^1 \to Aut(V)$ of the circle group is called positive if $\exp(i \theta)$ acts by $\exp(i A \theta)$ where $A \in End(V)$ is a linear operator with positive spectrum. A linear representation $\rho : L G \to Aut(V)$ is said to have positive energy, or to be a positive energy representation, if it extends to a representation of the semidirect product group $S^1 \rtimes L G$ such that the restriction to $S^1$ is positive.

By geometric quantization (looped orbit method)

We discuss the quantization of loop groups in the sense of geometric quantization of their canonical prequantum bundle. Let $G$ be a compact Lie group. Let $T \hookrightarrow G$ be the inclusion of a maximal torus. There is a fiber sequence $\array{ G/T &\to& L G / T \\ && \downarrow \\ && L G / G & \simeq \Omega G } \,.$ The irreducible projective positive energy representations of $L G$ correspond precisely to the possible geometric quantizations of $L G / T$ (as in the orbit method).

More in detail: The degree-2 integral cohomology of $L G / T$ is $H^2(L G / T) \simeq \mathbb{Z} \oplus H^2(G / T, \mathbb{Z}) \simeq \mathbb{Z} \oplus \hat T \,.$ Writing $L_{n,\lambda}$ for the corresponding complex line bundle with level $n \in \mathbb{Z}$ and weight $\lambda \in \hat T$, we have that: 1. the space of holomorphic sections of $L_{n,\lambda}$ is either zero or is an irreducible positive energy representation; 2. every such representation arises this way; 3. the space of sections is non-zero precisely if $(n,\lambda)$ is positive in the sense that for each positive coroot $h_\alpha$ of $G$, $0 \leq \lambda(h_\alpha) \leq n \langle h_\alpha, h_\alpha\rangle \,.$ This appears for instance as (Segal, prop. 4.2).

Relation to equivariant elliptic cohomology

Under mild conditions (but over the complex numbers) the representation ring of a loop group $L G$ is equivalent to the $G$-equivariant elliptic cohomology (see there for more) of the point (Ando 00, theorem 10.10). This is a higher analog of how the $G$-equivariant K-theory of the point gives the representation ring of $G$.

The standard textbook on loop groups is • Andrew Pressley, Graeme Segal, Loop groups, Oxford University Press (1988). A review of some aspects, with an eye towards loop groups as part of the crossed module of groups representing a string 2-group, is in • BCSS, From loop groups to 2-groups (web). The relation between representations of loop groups and twisted K-theory over the group is the topic of The relation between representations of loop groups and equivariant elliptic cohomology of the point is discussed in • Matthew Ando, Power operations in elliptic cohomology and representations of loop groups, Transactions of the American Mathematical Society 352, 2000, pp. 5619-5666. (JSTOR, pdf)
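The pointwise group structure on loops described at the start of the article can be made concrete in the simplest case $G = U(1)$, with loops represented as functions of the angle. This is only an illustrative sketch (the helper names are invented for the example) and it ignores all smoothness and topology; it merely shows that pointwise multiplication and inversion of loops satisfy the group laws.

```python
import cmath

# Loops in G = U(1): maps S^1 -> U(1), represented here as Python functions
# of the angle theta. Pointwise multiplication of loops,
#   (alpha * beta)(theta) = alpha(theta) * beta(theta),
# gives the loop group LG its group structure.

def loop(winding):
    """The loop theta |-> exp(i * winding * theta) in U(1)."""
    return lambda theta: cmath.exp(1j * winding * theta)

def pointwise_product(alpha, beta):
    return lambda theta: alpha(theta) * beta(theta)

def pointwise_inverse(alpha):
    return lambda theta: 1.0 / alpha(theta)

a, b = loop(2), loop(3)
ab = pointwise_product(a, b)   # equals loop(5): winding numbers add
theta = 0.7
print(abs(ab(theta) - loop(5)(theta)) < 1e-12)
```

Note the contrast with loop concatenation, which gives a different (only homotopy-coherent) group structure on the same loop space.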
Copyright © University of Cambridge. All rights reserved. Have you ever heard anyone use this word before? It's actually Greek (meaning "I've found it!") and many people would associate it with a man called Archimedes. Here you'll be able to find out about Archimedes and in particular his Spiral. In 331 BC, Alexander the Great founded the city of Alexandria and soon afterwards, under the reign of Ptolemy, it became the capital of Egypt. To attract clever people to this city, Ptolemy set up a university in Alexandria, which was the first of its kind. It was at the University of Alexandria that nearly all the famous mathematicians of the time studied or taught. Archimedes was one of these. He was born in about 287 BC in the city of Syracuse on the island of Sicily. We think that Archimedes visited Alexandria and studied there, as he seems to have had many friends who were also great mathematicians. Archimedes on a Greek stamp This image was taken from the following website: http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Archimedes.html There are many stories about Archimedes' discoveries and inventions. When the Romans were attacking Syracuse, the King (who was also a good friend) asked Archimedes to help defend the city. He invented huge cranes, which could lift ships out of the water; movable poles which dropped weights on enemy ships; and powerful catapults. Archimedes' Claw was a device which lifted ships into the air and rolled them over. Archimedes also developed complicated pulley systems to drag ships along. These war-time inventions gained Archimedes a lot of recognition and respect from the local people. It was when Archimedes was in his bath that he jumped up shouting the famous "Eureka!" He had been trying to solve a problem for the King: how to find out whether the King's crown was pure gold or if some silver had been added. As he got into the bath, some water sloshed over the sides and this inspired Archimedes to try an experiment.
He discovered that when the crown was put in a bowl of water, more water over-flowed than when the same weight of pure gold was put in. This meant that the crown could not be entirely pure gold. This led to the first law of hydrostatics. You can find out about this in more detail on the websites listed at the end. Although Archimedes became famous for his inventions during the war, he loved to study "pure" maths much more. He had a passion for geometry and he wrote many important papers on different topics. Here we shall concentrate on his spiral. Archimedes' Spiral A spiral is a special kind of curved path on a 2d (flat) surface. Spirals are described in maths by sets of equations and are given different names. Archimedes investigated many different types of spirals and one of these is now named after him. The Archimedes' spiral is the path of a point that travels away from (or towards) a fixed origin at a constant speed, along a ray which itself rotates about the origin with constant angular velocity. (Velocity is a measure of speed in a particular direction.) This is what it looks like: Used by permission As if all this wasn't enough, Archimedes investigated many other branches of geometry. One of his most famous results was the estimate of pi as lying between 3 10/71 and 3 1/7. (See Pi, A Very Special Number) It is said that Archimedes used to get very absorbed in his work and would draw diagrams in sand, ashes from the fire and even on himself using bathing oils! One story tells that it was due to this total absorption in maths that Archimedes was killed. Apparently, he asked a Roman soldier to stand out of the way of a diagram he had sketched in the sand. The soldier ran a spear through his chest. This Roman floor mosaic shows Archimedes' death http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Archimedes.html Archimedes can be regarded as one of the most accomplished mathematicians in history. His discoveries certainly seem remarkable, considering how long ago he lived.
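In modern polar coordinates, the spiral described above is the curve r = bθ: the distance from the origin is proportional to the angle turned. A short computation (a plain illustration with made-up values for b) shows its characteristic property, that successive turns of the spiral are evenly spaced.

```python
import math

# Points on an Archimedean spiral r = b * theta: the distance from the
# origin grows at a constant rate as the ray sweeps at constant angular
# velocity, so successive turns are evenly spaced, 2*pi*b apart.

def spiral_point(b, theta):
    r = b * theta
    return (r * math.cos(theta), r * math.sin(theta))

b = 1.0
# The spacing between consecutive turns along a fixed ray:
r1 = b * (2 * math.pi)      # distance after one full turn
r2 = b * (4 * math.pi)      # distance after two full turns
print(r2 - r1)              # constant gap 2*pi*b
```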
We've only touched the surface here so if you'd like to read more about him, why not take a look at some of the websites listed below? Go on... www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Archimedes.html tqjunior.thinkquest.org/4116/History/archimedes.htm www.mcs.drexel.edu/~crorres/Archimedes/contents.html www-maths.mcs.st-andrews.ac.uk/~gmp/gmpANA.html physics.weber.edu/carroll/Archimedes/theIndex.htm There is also a lot of information about Archimedes in "An Introduction to the History of Mathematics" (4th edition) by Howard Eves published by Holt, Rinehart and Winston.
[FOM] First-order arithmetical truth V.Sazonov@csc.liv.ac.uk V.Sazonov at csc.liv.ac.uk Tue Oct 24 18:36:43 EDT 2006 Dear Arnon, Unfortunately, I should note that there is almost complete misunderstanding on your part of what I am trying to say. At least on one thing we could agree: no (first-order) formal system can define uniquely the concept of natural numbers. I assert that the only way to demonstrate that what we are doing is indeed mathematics is to present it formally (rigorously), either as a derivation of a theorem, or as an explicit definition of a concept in the framework of a formal system, or as an implicit definition of a concept (model) by a formal axiomatic system, and desirably to prove a metatheorem (in some other formal system) that this axiomatic system is consistent. Sometimes we just postulate a new axiom, as in the case of CON(PA) (just CON(PA) itself or, for example, epsilon_0-induction, which allows one to derive CON(PA)). However you argue that some non-first-order, non-r.e. logic (language + semantics only) is "nice", you will eventually present this in the context of a formal system (such as ZFC) and deduce formally some theorem on its "niceness" (say, that it characterises natural numbers uniquely). Whatever you do will eventually be presented formally (rigorously), once you pose yourself as a mathematician. And I am sure that you behave exactly in this way, whatever you assert in your informal comments. IN THIS SENSE, any of your considerations on the uniqueness of N are done eventually RELATIVE to a (first-order) formal system (typically, ZFC) and therefore are NOT ABSOLUTE. IN THIS SENSE you DO NOT and CAN NOT, I repeat, IN THIS SENSE! I hope you will confirm the last paragraph and we can find a point of agreement, thereby PUTTING ASIDE (or ISOLATING) the point of disagreement, which still remains existing and essential. Best wishes, This message was sent using IMP, the Internet Messaging Program.
how do i figure out the real speed [Archive] - Baseball Fever
Actually, I heard the ball slows down approximately 1 mph every 7 inches... or was it 7 feet... something like that. A little anecdotal data for the thread. My son's 4-seam runs 87-89, his 2-seam runs 82-85 and tails in on righties 4-6 inches. He has never thrown a 2-seam faster than 85. I was trying to use the formula in this thread to help me with a question. We are in a machine pitch league and the machine is set at 35 mph at 39 feet. What formula would I use to figure out what would be equal to this speed from, say, 25 feet? Or 15 feet? I like to use the machine at different distances in bp and I want to keep it close to what they are used to.
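One simple way to answer the machine-distance question is to match flight time rather than raw speed: flight time is distance divided by speed, so to give the batter the same look from a shorter distance, scale the speed by the ratio of distances. The sketch below assumes constant ball speed (it ignores the air-drag slowdown mentioned above), so treat it as an approximation.

```python
# Constant-speed model for matching batter reaction time: to keep the same
# flight time from a new distance, scale the speed by the ratio of distances.
# (This ignores in-flight deceleration, so it is only an approximation.)

def equivalent_speed(speed_mph, distance_ft, new_distance_ft):
    """Speed at new_distance_ft giving the same flight time as
    speed_mph thrown from distance_ft."""
    return speed_mph * new_distance_ft / distance_ft

print(round(equivalent_speed(35.0, 39.0, 25.0), 1))  # about 22.4 mph
print(round(equivalent_speed(35.0, 39.0, 15.0), 1))  # about 13.5 mph
```

So a machine set at 35 mph from 39 feet gives roughly the same reaction time as about 22 mph from 25 feet, or about 13-14 mph from 15 feet.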
Division is the operation which assigns to every two numbers (or more generally, elements of a field) $a$ and $b$ their quotient or ratio, provided that the latter, $b$, is distinct from zero. The quotient (or ratio) $\frac{a}{b}$ of $a$ and $b$ may be defined as the number (or element of the field) $x$ such that $b\cdot x=a$. Thus

$b\cdot\frac{a}{b}=a,$

which is the “fundamental property of quotient”. The explicit general expression for $\frac{a}{b}$ is $\frac{a}{b}=b^{-1}\cdot a$, where $b^{-1}$ is the inverse number (the multiplicative inverse) of $b$, because $b\cdot(b^{-1}\cdot a)=(b\cdot b^{-1})\cdot a=a$.

• For positive numbers the quotient may be obtained by performing the division algorithm with $a$ and $b$. If $a>b>0$, then $\frac{a}{b}$ indicates how many times $b$ fits in $a$.

• The quotient of $a$ and $b$ does not change if both numbers (elements) are multiplied (or divided, which action is called reduction) by any $k\neq 0$:

$\frac{a}{b}=\frac{k\cdot a}{k\cdot b}.$

So we have the method for getting the quotient of complex numbers,

$\frac{a}{b}=\frac{a\bar{b}}{b\bar{b}},$

where $\bar{b}$ is the complex conjugate of $b$, and the quotient of square root polynomials, e.g. $\frac{1}{5+2\sqrt{2}}=\frac{5-2\sqrt{2}}{(5-2\sqrt{2})(5+2\sqrt{2})}=\frac{5-2% \sqrt{2}}{25-8}=\frac{5-2\sqrt{2}}{17};$ in the first case one aspires after a real and in the second case after a rational denominator.

• The division is neither associative nor commutative, but it is right distributive over addition:

$\frac{a+b}{c}=\frac{a}{c}+\frac{b}{c}.$
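The conjugate trick for complex quotients can be checked mechanically. The helper below is a plain illustration of the formula (Python's built-in complex type already implements division directly, so this function exists only to mirror the algebra).

```python
# Dividing complex numbers by expanding the fraction with the conjugate of
# the denominator: a/b = (a * conj(b)) / (b * conj(b)), where b * conj(b)
# is the real number |b|^2, so the final division is by a real number.

def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("quotient a/b requires b != 0")
    conj_b = b.conjugate()
    denom = (b * conj_b).real          # |b|^2, a real number
    num = a * conj_b
    return complex(num.real / denom, num.imag / denom)

q = divide(complex(3, 2), complex(1, -1))
print(q)                                # (0.5+2.5j)
print(q * complex(1, -1))               # recovers a = (3+2j)
```

This is exactly "multiplying numerator and denominator by the same nonzero factor" from the reduction property above, with the factor chosen to make the denominator real.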
Patent application title: QUADRATIC APPROXIMATE OFFSET CURVES FOR QUADRATIC CURVE SEGMENTS Inventor: Erik S. Ruf, Kirkland, WA, US

A system is described herein that produces at least one approximate offset curve that is separated from an original curve segment S by at least a distance d, thus defining a bounding region between the original curve segment S and the approximate offset curve. The original curve segment S and the approximate offset curve both have a quadratic form. In view of this quadratic form, the system can represent the approximate offset curve in an efficient manner (e.g., using three control points). Further, the system can perform calculations with respect to the approximate offset curve in an efficient manner.
A method for producing an offset curve, implemented using tangible and physical computing functionality, comprising: receiving data that describes an original control triangle, the original control triangle defining an original curve segment, the original curve segment, in turn, defining a portion of a parabola having an axis a; constructing an original axis-symmetric control triangle based on the original control triangle, the original axis-symmetric control triangle being symmetric with respect to the axis a of the parabola; constructing a new control triangle by adjusting positions of control points associated with the original axis-symmetric control triangle, the new control triangle also being symmetric with respect to the axis a of parabola, the new control triangle defining an approximate offset curve, and all points on the approximate offset curve being separated from all points on the original curve segment by at least a distance d; and performing an application-specific action based on the approximate offset curve. The method of claim 1, wherein the computing functionality comprises at least one processing device for processing instructions maintained in a computer-readable storage medium. The method of claim 1, wherein said constructing of the new control triangle comprises moving the positions of the control points associated with the original axis-symmetric control triangle in an outward direction, such that the new control triangle is larger than the original axis-symmetric control triangle. The method of claim 1, wherein said constructing of the new control triangle comprises moving the positions of the control points associated with the original axis-symmetric control triangle in an inward direction, such that the new control triangle is smaller than the original axis-symmetric control triangle. 
The method of claim 1, wherein said constructing of the new control triangle comprises: constructing a first new control triangle which defines a first approximate offset curve; and constructing a second new control triangle which defines a second approximate offset curve. The method of claim 5, wherein the original curve segment lies between the first approximate offset curve and the second approximate offset curve. The method of claim 1, wherein the application-specific action comprises generating a graphical image based on at least the original curve segment and one or more corresponding approximate offset The method of claim 1, wherein the application-specific action comprises controlling a path taken by a machine based on at least the original curve segment and one or more corresponding approximate offset curves. The method of claim 1, wherein said application specific action comprises: receiving a query point; determining whether the query point lies in a region bounded by the approximate offset curve and the original curve segment; if the query point lies in the region, performing an analysis operation; and if the query point does not lie in the region, avoiding the analysis operation. The method of claim 9, wherein the analysis operation comprises determining a distance between the query point and the original curve segment using a distance-determination technique. The method of claim 10, further comprising using the distance to perform an anti-aliasing operation within a graphical image. The method of claim 10, further comprising using the distance to render a brushed path within a graphical image. 
A system for producing and applying an offset curve, implemented using tangible and physical computing functionality, comprising: an offset curve determination module for: receiving data that describes an original control triangle, the original control triangle defining an original curve segment; and producing an approximate offset curve based on the original control triangle, all points on the approximate offset curve being separated from all points on the original curve segment by at least a distance d, the original curve segment and the approximate offset curve each representing quadratic curves; and an application module for performing an application-specific action based on the approximate offset curve. The system of claim 13, wherein the offset curve determination module comprises: a symmetric control triangle determination module for constructing an original axis-symmetric control triangle based on the original control triangle, the original axis-symmetric control triangle being symmetric with respect to an axis a of a parabola associated with the original curve segment; and a control triangle extension module for constructing a new control triangle by adjusting positions of control points associated with the original axis-symmetric control triangle, the new control triangle also being symmetric with respect to the axis a of the parabola, the new control triangle defining the approximate offset curve. The system of claim 13, wherein the application-specific action comprises generating a graphical image based on at least the original curve segment and one or more corresponding approximate offset The system of claim 13, wherein the application-specific action comprises controlling a path taken by a machine based on at least the original curve segment and one or more corresponding approximate offset curves. 
The system of claim 13, wherein the application module comprises: a query point determining module for receiving a query point and determining whether the query point lies in a region bounded by the approximate offset curve and the original curve segment; and an action-taking module for, if the query point lies in the region, determining a distance between the query point and the original curve segment using a distance-determination technique. A computer readable storage medium for storing computer readable instructions, the computer readable instructions providing an offset curve determination module when executed by one or more processing devices, the computer readable instructions comprising: logic configured to receive data that describes an original control triangle associated with control points p0, p1, and p2, the original control triangle defining an original curve segment S; logic configured to determine a line m which connects a midpoint of a line segment p0p2 to the control point p1; logic configured to draw a chord that is perpendicular to the line m and which intersects an extended version of the original curve segment S twice, defining new control points q0 and q2; logic configured to extend tangent lines at the new control points q0 and q2, yielding a new control point q1 at an intersection of the tangent lines, the control points q0, q1, and q2 together describing an original axis-symmetric control triangle; and logic configured to displace the new control points (q0, q1, q2) in an inward or outward direction to produce new control points (q'0, q'1, q'2), the new control points (q'0, q'1, q'2) defining a new control triangle and an associated approximate offset curve. The computer-readable storage medium of claim 18, wherein the chord intersects either the control point p0 or the control point p2. 20.
The computer-readable storage medium of claim 18, wherein the computer-readable instructions further implement an application module, the computer readable instructions comprising: logic for receiving a query point and determining whether the query point lies in a region bounded by the approximate offset curve and the original curve segment; and logic for, if the query point lies in the region, determining a distance between the query point and the original curve segment using a distance-determination technique.

BACKGROUND

[0001] There sometimes arises a need to define a bounding region which extends from one or both sides of an original curve segment. Different approaches exist for defining such a bounding region with respect to an original curve segment. In one approach, an application can define a bounding region that has multiple components, such as multiple small polygonal bounding regions which encompass the original curve segment along the path of the original curve segment. This technique can typically produce a bounding region that has high fidelity with respect to the original curve segment. However, providing a precise bounding region comes at a cost, for reasons specified herein.

SUMMARY

[0002] A system is described herein that produces at least one approximate offset curve that is separated from an original curve segment S by at least a distance d, thus defining a bounding region between the original curve segment S and the approximate offset curve. The original curve segment S and the approximate offset curve both have a quadratic form. In view of this quadratic form, the system can represent the approximate offset curve in an efficient manner (e.g., using three control points). Further, the system can perform calculations with respect to the approximate offset curve in an efficient manner.
More specifically, in one implementation, the system can generate the approximate offset curve by first constructing an original axis-symmetric control triangle based on the original curve segment S. The original axis-symmetric control triangle is symmetric with respect to the axis a of the parabola associated with the original curve segment S. The system then constructs a new control triangle by displacing the positions of control points associated with the original axis-symmetric control triangle in an outward or inward direction, where the new control triangle is also symmetric with respect to the axis a of the underlying parabola of the original curve segment S. The control points of the new control triangle define the quadratic approximate offset curve. The system can perform any type of application-specific action based on the approximate offset curve. For example, in one application, the system can use at least one original curve segment and at least one corresponding approximate offset curve in generating a graphical image. In another application, the system can apply at least one original curve segment and at least one corresponding approximate offset curve in controlling the path of a machine. Other applications are possible. The above approach can be manifested in various types of systems, components, methods, computer readable media, data structures, articles of manufacture, and so on. This Summary is provided to introduce a selection of concepts in a simplified form; these concepts are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] FIG. 1 shows an illustrative system for producing and applying an approximate offset curve.

FIG. 2 is a flowchart that describes an overview of one manner of operation of the system of FIG.
1.

FIG. 3 is a flowchart that shows an illustrative manner of generating an original axis-symmetric control triangle.

FIG. 4 is a flowchart that shows an illustrative manner of generating a new control triangle by extending the control points of the axis-symmetric control triangle provided in FIG. 3.

FIG. 5 is a flowchart that shows an illustrative manner of applying an approximate offset curve.

FIG. 6 shows graphical depictions of curves and associated control triangles for use in illustrating the operations of FIG. 3.

FIG. 7 shows graphical depictions of curves and associated control triangles for use in illustrating the operations of FIG. 4.

FIG. 8 is a graphical depiction of one use of an approximate offset curve.

FIG. 9 is a graphical depiction of one use of two approximate offset curves.

FIG. 10 shows illustrative computing functionality that can be used to implement any aspect of the features shown in the foregoing drawings.

The same numbers are used throughout the disclosure and figures to reference like components and features. Series 100 numbers refer to features originally found in FIG. 1, series 200 numbers refer to features originally found in FIG. 2, series 300 numbers refer to features originally found in FIG. 3, and so on.

DETAILED DESCRIPTION

[0018] This disclosure is organized as follows. Section A describes an illustrative system for generating and applying at least one approximate offset curve. Section B describes illustrative methods which explain the operation of the system of Section A. Section C describes illustrative computing functionality that can be used to implement any aspect of the features described in Sections A and B. As a preliminary matter, some of the figures describe concepts in the context of one or more structural components, variously referred to as functionality, modules, features, elements, etc.
The various components shown in the figures can be implemented in any manner by any physical and tangible mechanisms, for instance, by software, hardware (e.g., chip-implemented logic functionality), firmware, etc., and/or any combination thereof. In one case, the illustrated separation of various components in the figures into distinct units may reflect the use of corresponding distinct physical and tangible components in an actual implementation. Alternatively, or in addition, any single component illustrated in the figures may be implemented by plural actual physical components. Alternatively, or in addition, the depiction of any two or more separate components in the figures may reflect different functions performed by a single actual physical component. FIG. 10, to be discussed in turn, provides additional details regarding one illustrative physical implementation of the functions shown in the figures. Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are illustrative and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein (including a parallel manner of performing the blocks). The blocks shown in the flowcharts can be implemented in any manner by any physical and tangible mechanisms, for instance, by software, hardware (e.g., chip-implemented logic functionality), firmware, etc., and/or any combination thereof. As to terminology, the phrase "configured to" encompasses any way that any kind of physical and tangible functionality can be constructed to perform an identified operation. 
The functionality can be configured to perform an operation using, for instance, software, hardware (e.g., chip-implemented logic functionality), firmware, etc., and/or any combination thereof. The term "logic" encompasses any physical and tangible functionality for performing a task. For instance, each operation illustrated in the flowcharts corresponds to a logic component for performing that operation. An operation can be performed using, for instance, software, hardware (e.g., chip-implemented logic functionality), firmware, etc., and/or any combination thereof. When implemented by a computing system, a logic component represents an electrical component that is a physical part of the computing system, however implemented. The following explanation may identify one or more features as "optional." This type of statement is not to be interpreted as an exhaustive indication of features that may be considered optional; that is, other features can be considered as optional, although not expressly identified in the text. Finally, the terms "exemplary" or "illustrative" refer to one implementation among potentially many implementations.

A. Illustrative System

FIG. 1 shows an illustrative system 100 for generating and applying an approximate offset curve. As shown in representation 102, the approximate offset curve is displaced at least a distance d from an original curve segment S at all points along the path of the approximate offset curve. More specifically, the approximate offset curve is considered "approximate" because it is an approximation of a hypothetical "true" offset curve. For any point on the original curve segment S, the distance to the nearest point on the true offset curve is exactly distance d. As shown, the approximate offset curve may extend farther than distance d from the original curve segment S at various points along the path of the approximate offset curve.
The system 100 includes an offset curve determination module 104 for generating the approximate offset curve. The system 100 also includes an application module 106 for applying the approximate offset curve to perform one or more application-specific actions. In one example, the offset curve determination module 104 and the application module 106 can be implemented by computing functionality, which, in turn, may be implemented by one or more computing devices. In the case in which plural devices are used, these devices can be provided at a single location or can be distributed over plural respective locations. Section C provides additional information on illustrative physical implementations of the computing functionality. The offset curve determination module 104 can include (or can be conceptualized as including) plural components which perform different functional roles. A symmetric control triangle determination module 108 receives data that describes an original control triangle (e.g., from a data store 110 or some other source or sources). The original control triangle is defined with respect to three control points (p0, p1, p2). These control points define the path of a quadratic Bezier curve, constituting the original curve segment S (where the control points p0 and p2 describe the endpoints of the original curve segment S). The symmetric control triangle determination module 108 determines an original axis-symmetric control triangle based on the original control triangle. Section B will provide additional information regarding the characteristics of the original axis-symmetric control triangle and the manner in which the original axis-symmetric control triangle can be derived. At this point, suffice it to say that the original curve segment S defines a portion of a parabola, and that parabola has an axis a (defining the axis which divides the parabola into two matching or mirror sides).
The symmetric control triangle determination module 108 defines the original axis-symmetric control triangle such that it is symmetric with respect to the axis a of the parabola (meaning that the axis a divides the original axis-symmetric triangle into two mirror sides). That is, the original control triangle and the original axis-symmetric control triangle describe the same curve, but the original axis-symmetric control triangle is symmetric about the axis a. The original control triangle may or may not be symmetric with respect to the axis a. A control triangle extension module 112 modifies the original axis-symmetric control triangle by displacing its control points (q0, q1, q2) to produce new control points (q'0, q'1, q'2). For example, in one case, the control triangle extension module 112 can displace the control points (q0, q1, q2) in an outward direction to produce a larger control triangle than the original axis-symmetric control triangle. In another case, the control triangle extension module 112 can displace the control points (q0, q1, q2) in an inward direction to produce a smaller control triangle than the original axis-symmetric control triangle. In either case, the control points (q'0, q'1, q'2) define an approximate offset curve that corresponds to a quadratic Bezier curve segment. That is, both the original curve segment S and the approximate offset curve have a quadratic form that can be represented by respective control triangles. As noted above, the approximate offset curve has the property that it is displaced from the original curve segment S by at least distance d along all points of the approximate offset curve. In other cases, the offset curve determination module 104 can generate two approximate offset curves based on an original curve segment S.
A first approximate offset curve lies at least a distance d from the outer side (e.g., the convex side) of the original curve segment S, and is produced by moving the control points (q0, q2) in an outward direction. A second approximate offset curve lies at least a distance d from the inner side (e.g., the concave side) of the original curve segment S, and is produced by moving the control points (q0, q2) in an inward direction. In other cases, the offset curve determination module 104 can form one or more piecewise or composite approximate offset curves, where each such offset curve is composed, in turn, of two or more quadratic Bezier segments. Each segment is at least a distance d from the portion of the original curve segment S which it respectively models. The offset curve determination module 104 can store the control points (q'0, q'1, q'2) which define an approximate offset curve in a data store 114. The approximate offset curve defines a bounding region that lies between the original curve segment S and the approximate offset curve. Alternatively, two approximate offset curves may bracket the original curve segment S, such that the bounding region lies to both sides of the original curve segment S. The application module 106 can apply an approximate offset curve to achieve different purposes. In one general case, a query point inquiry module 116 can make a determination as to whether a query point lies within a bounding region defined by one or more approximate offset curves. An action-taking module 118 can then perform an analysis operation if the query point lies in the bounding region, such as by using any iterative distance-determination technique to determine a minimum distance between the query point and the original curve segment S. The action-taking module 118 can avoid performing this analysis if the query point lies outside the bounding region.
Thus, overall, the application module 106 can leverage the bounding region to reduce the number of times it is required to perform the detailed analysis operation. In one case, the application module 106 can produce a graphical image based on the analysis described above. For example, the application module 106 can apply an anti-aliasing operation to at least some of the pixels (corresponding to respective query points) that lie within the bounding region. More specifically, for a particular pixel, the application module 106 can determine whether the pixel's position lies within the bounding region. If so, the application module 106 can compute the minimum distance between the pixel and the original curve segment S, and use that distance to define a distance-based transparency value for that pixel. Note that the approximate offset curve is an approximation of the true offset curve, so some of the query points may actually lie outside an ideal bounding region which exists between the true offset curve and the original curve segment S (again note the representation 102 of FIG. 1). Hence, the application module 106 can avoid applying an anti-aliasing effect to these pixels that are determined to lie outside the ideal bounding region. The approximate offset curve identifies those points which cannot possibly lie within the true bounding region, and thereby serves, overall, to accelerate the anti-aliasing operation by eliminating the need for time-consuming distance calculations for these outlying points. Alternatively, or in addition, the application module 106 can use an original curve segment S and two approximate offset curves to render a segment of a brushed path of width d, where the first approximate offset curve lies at a distance of at least +d/2 with respect to the original curve segment S, and the second approximate offset curve lies at a distance of at least -d/2 with respect to the original curve segment S.
More specifically, for a particular pixel, the application module 106 can determine whether the pixel lies within the bounding region defined by the approximate offset curves. If so, the application module 106 can compute the minimum perpendicular distance between the pixel and the original curve segment S, and use that distance to define, using any arbitrary mapping function, a transverse parameter value for that pixel. For example, in one illustrative case, the application module 106 can assign normalized values of -1 and +1 to maximum perpendicular displacements from points along the original curve segment S (e.g., to either respective side of S), defining a brushed path region through which the original curve segment S runs at position 0. The application module 106 can assign a transverse parameter value to a query point based on its displacement from S within this normalized range (if, in fact, it lies within this range at all). In another case, a brushed path can be defined with respect to a single approximate offset curve and the original curve segment S. In another scenario, the application module 106 can control the path of a machine based on the analysis described above. For example, the application module 106 can guide a vehicle or a manufacturing tool or some other apparatus along a physical path defined by one or more original curve segments. The application module 106 can use one or more approximate offset curves to approximate a range d of permissible deviation along that path. For a particular candidate position, the application module 106 can determine whether the position lies within a bounding region defined by one or more approximate offset curves. If so, the application module 106 can compute the minimum distance between the position and the original curve segment S, and use that distance to guide the machine, e.g., by controlling the machine so that its path more closely follows the original curve segment S. Still other applications are possible.
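As an informal illustration (not part of the patent text), the two distance-based mappings just described, a distance-based transparency value for anti-aliasing and a normalized transverse parameter for a brushed path, might be sketched as follows; the linear ramp and the function names are our own assumptions, since the disclosure permits "any arbitrary mapping function":

```python
def coverage_alpha(distance, d):
    """Map a pixel's minimum distance from the curve segment onto a
    transparency value: opaque on the curve, fading linearly to fully
    transparent at the offset distance d (one arbitrary mapping)."""
    return max(0.0, 1.0 - distance / d)

def transverse_parameter(signed_distance, half_width):
    """Map a signed perpendicular displacement from S onto the normalized
    range [-1, +1] across a brushed path of width 2*half_width; returns
    None when the point falls outside the brush."""
    v = signed_distance / half_width
    return v if -1.0 <= v <= 1.0 else None
```

Any monotone ramp would serve equally well here; the essential point is only that pixels outside the bounding region never reach these functions at all.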
Generally, the application module 106 represents any type of mechanism that implements an application in a particular environment, and may encompass computing resources and/or mechanical resources. In conclusion, the quadratic approximation curve represents a conservative estimate of the true offset curve. This is because, as shown in the representation 102, the approximate offset curve may diverge, at certain points, from the true offset curve by sweeping out farther than a distance d from the original curve segment S. This lack of precision is acceptable, however, because, in one implementation, the application module 106 uses the approximate offset curve only as an acceleration mechanism to reduce the amount of processing that is performed on a set of query points. That is, the lack of precision means that the application module 106 will perform detailed analysis on some query points where it is not, strictly speaking, necessary (because these query points may lie more than a distance d from the original curve segment S); but this unnecessary processing will not affect the integrity of the ultimate output of the application module 106. At the same time, the system can generate, store, and apply the approximate offset curve in a highly efficient manner. This is because the approximate offset curve has a quadratic form that can be represented with only three control points. These efficiency-related benefits can ameliorate any additional processing associated with the use of an imprecise offset curve. In contrast, consider the case of an application which uses multiple polygonal components to represent a precise boundary region (e.g., as described in the Background section). In one implementation, this application must consume time to create a data structure that describes the bounding region. And then the application must devote memory resources to store the data structure.
Further, the application must devote sufficient processing resources to perform computations which involve the bounding region. These costs detract from the otherwise desirable goal of providing a bounding region which precisely follows the shape of the original curve segment.

B. Illustrative Processes

FIGS. 2-5 show procedures (200, 300, 400, 500) that explain one manner of operation of the system 100 of FIG. 1. These figures will be explained in conjunction with the illustrative graphical depictions in FIGS. 6-9. Starting with FIG. 2, this figure shows a procedure 200 which represents an overview of one implementation of the system 100. In block 202, the system 100 receives data that describes an original control triangle. The original control triangle has three control points (p0, p1, p2) that define an original curve segment S (based on the Bezier equation which uses the control points to define the path of the curve segment, i.e., B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2, where 0 ≦ t ≦ 1). These points also describe an infinite parabolic curve on which S lies. Determination of whether a query point lies on, outside, or inside of this parabola can be performed by evaluating an implicit equation that is derivable from the control points. More specifically, every quadratic Bezier curve has an implicit equation b1(p)^2 - 4 b0(p) b2(p) = 0, where the bi(p) are the barycentric coordinates of the point p with respect to the control triangle points pi. The left-hand side of this equation can be used as a formula for determining whether a query point lies above, below, or on the original curve segment S. In block 204, the system 100 constructs an original axis-symmetric control triangle based on the original control triangle. The original axis-symmetric control triangle is symmetric with respect to the axis a of the parabola defined by the original curve segment S. The original axis-symmetric control triangle is defined with respect to three control points (q0, q1, q2). FIG.
3 provides additional details regarding the operation of block 204. In block 206, the system 100 constructs a new control triangle by displacing the positions of the control points (q0, q1, q2) of the original axis-symmetric control triangle in an inward or outward direction, yielding new control points (q'0, q'1, q'2). The new control triangle defines a quadratic approximate offset curve which is at least a distance d from the original curve segment S. FIG. 4 provides additional details regarding the operation of block 206. In block 208, the system performs an application-specific action based on the approximate offset curve that has been calculated in block 206. FIG. 5 provides additional details regarding the operation of block 208. Advancing to FIG. 3, this figure explains one manner by which the system 100 can generate the original axis-symmetric control triangle (corresponding to block 204 of FIG. 2). This figure will be explained below with reference to the graphical depictions of FIG. 6. The procedure 300 is performed by operating on the control points (p0, p1, p2) of the original control triangle, which, in turn, describes the original curve segment S. In block 302, the system 100 defines a line segment p0p2 which connects the control points p0 and p2 of the original control triangle. The system 100 then defines a line m which runs from the midpoint of the line segment p0p2 to the control point p1. This line m can be shown to be parallel to the axis a of the parabola which underlies the original curve segment S. In block 304, the system 100 can extend the parabola associated with the original curve segment S to produce an extended curve segment. More specifically, the original curve segment S represents just a portion of an infinite parabola.
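To make block 202 concrete, the following sketch (our illustration, not an implementation prescribed by the disclosure) evaluates the quadratic Bezier equation and the barycentric implicit form b1^2 - 4*b0*b2; parameter values of t outside [0, 1] simply trace the extended parabola mentioned in block 304:

```python
def bezier(p0, p1, p2, t):
    """Evaluate B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2.
    Values of t outside [0, 1] trace the extended parabola."""
    u = 1.0 - t
    return tuple(u*u*a + 2.0*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

def barycentric(p, p0, p1, p2):
    """Barycentric coordinates (b0, b1, b2) of point p with respect to the
    control triangle (p0, p1, p2)."""
    (x, y), (x0, y0), (x1, y1), (x2, y2) = p, p0, p1, p2
    det = (x1 - x0)*(y2 - y0) - (x2 - x0)*(y1 - y0)
    b1 = ((x - x0)*(y2 - y0) - (x2 - x0)*(y - y0)) / det
    b2 = ((x1 - x0)*(y - y0) - (x - x0)*(y1 - y0)) / det
    return 1.0 - b1 - b2, b1, b2

def implicit(p, p0, p1, p2):
    """Implicit form b1^2 - 4*b0*b2: zero on the parabola, and its sign
    tells on which side of the curve the query point p lies."""
    b0, b1, b2 = barycentric(p, p0, p1, p2)
    return b1*b1 - 4.0*b0*b2
```

The identity behind `implicit` follows directly from the Bezier equation: a point on the curve has barycentric coordinates ((1-t)^2, 2t(1-t), t^2), for which b1^2 equals 4*b0*b2 exactly.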
If necessary (for the purposes of the subsequent operation in block 306), block 304 extends one or more ends of the segment to show additional parts of the underlying parabola, producing an extended version of S. In block 306, the system 100 next draws a chord which runs perpendicular to the line m (determined in block 302) and which intersects the extended curve segment S twice. There are infinitely many chords which satisfy this constraint. In one implementation, the system 100 can draw a chord which intersects either the control point p0 or the control point p2. Doing so will create an approximate offset curve that preserves the tangent behavior of at least one of the endpoints of the original curve segment S. More specifically, the system 100 can draw the chord through whichever of these control points is farthest from the control point p1. This choice helps reduce numerical problems for asymmetric curve segments having one endpoint very close to p1. In any case, the two points at which the chord intersects the extended curve segment S define two control points (q0, q2) of the original axis-symmetric control triangle. In block 308, the system 100 can extend lines which are tangent to S at the control points q0 and q2. The intersection of these lines defines the control point q1. Taken together, the three control points (q0, q1, q2) define the original axis-symmetric control triangle. This control triangle is symmetric in that, by folding it about the axis a, one side mirrors the other. Advancing to FIG. 4, this figure explains one manner by which the system 100 can generate the new control triangle based on the original axis-symmetric control triangle (corresponding to block 206 of FIG. 2). This figure will be explained below with reference to the graphical depictions of FIG. 7. The procedure 400 is performed by operating on the control points (q0, q1, q2) of the original axis-symmetric control triangle provided via the procedure of FIG. 3.
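One way to carry out the steps of FIG. 3 in code (a sketch under our own conventions; the patent does not prescribe an implementation) is to locate the parameter of the parabola's vertex, reflect the parameter of the chosen endpoint about it to obtain the symmetric chord of block 306, and take the blossom of the two parameters to get the apex of block 308:

```python
def symmetric_triangle(p0, p1, p2):
    """Blocks 302-308 of FIG. 3: return an axis-symmetric control triangle
    (q0, q1, q2) describing the same parabola as (p0, p1, p2)."""
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1]
    # Axis direction of the parabola (parallel to the line m of block 302).
    d = (p0[0] - 2.0*p1[0] + p2[0], p0[1] - 2.0*p1[1] + p2[1])
    # Parameter of the vertex: the tangent B'(t) = 2[(p1 - p0) + t*d]
    # is perpendicular to the axis exactly at the vertex.
    a = (p1[0] - p0[0], p1[1] - p0[1])
    tv = -dot(a, d) / dot(d, d)
    # Chord through the endpoint farthest from p1 (block 306); its second
    # intersection with the extended curve is the reflection about the vertex.
    e0 = (p0[0] - p1[0], p0[1] - p1[1])
    e2 = (p2[0] - p1[0], p2[1] - p1[1])
    s = 0.0 if dot(e0, e0) >= dot(e2, e2) else 1.0
    u = 2.0 * tv - s
    # The blossom b(s, u) yields the apex where the two tangents meet
    # (block 308); its diagonal b(t, t) is just a point on the curve.
    blossom = lambda s, u: tuple(
        (1 - s)*(1 - u)*c0 + ((1 - s)*u + s*(1 - u))*c1 + s*u*c2
        for c0, c1, c2 in zip(p0, p1, p2))
    return blossom(s, s), blossom(s, u), blossom(u, u)
```

Because the two chord parameters are symmetric about the vertex parameter, the chord q0q2 comes out perpendicular to the axis and the resulting triangle is isoceles, which is exactly the symmetry property the text describes.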
In block 402, the system 100 displaces the control point q0 outward along the normal to the extended curve segment S at the point q0 by a distance d. This defines a new control point q'0. Alternatively, the system 100 can displace the control point q0 in the inward direction by a distance d. Similarly, in block 404, the system 100 displaces the control point q2 outward along the normal to the extended curve segment S at the point q2 by a distance d. This defines a new control point q'2. Alternatively, the system 100 can displace the control point q2 in the inward direction by a distance d. In block 406, the system 100 extends lines from the control points q'0 and q'2. The lines have slopes which match the original tangent lines at the points q0 and q2, respectively. The intersection of the lines defines a new control point q'1. Together, the control points q'0, q'1, and q'2 define a new control triangle. The new control triangle, in turn, defines a quadratic approximate offset curve (based on the Bezier equation set forth above), having the properties described above. More specifically, for cases in which the control points (q0, q1, q2) are extended in the outward direction, the approximate offset curve can be guaranteed to lie at least a distance d from all points defined by the original curve segment S. The same is true for movement of the control points (q0, q1, q2) in the inward direction, provided that the displacement of q0 and q2 does not extend beyond the axis a of the parabola. Other techniques can be used to calculate the approximate offset curve. Hence, the particular operations shown in FIGS. 3 and 4 (as well as the particular sequence of operations) are to be understood as representative, not limiting. Advancing to FIG. 5, this figure shows a procedure 500 for applying an approximate offset curve.
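The displacements of blocks 402-406 might look like the following sketch (our own sign and naming conventions, not the patent's: a positive dist moves outward, toward the convex side, and a negative dist moves inward):

```python
import math

def offset_triangle(q0, q1, q2, dist):
    """Blocks 402-406 of FIG. 4: displace q0 and q2 along the curve normals
    by dist, then intersect the displaced tangent lines to obtain q1'."""
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1])
    cross = lambda a, b: a[0]*b[1] - a[1]*b[0]

    def unit_outward_normal(at, apex, other_end):
        # Perpendicular to the tangent direction at an endpoint; the outward
        # side of the tangent line is the side away from the rest of the curve.
        t = sub(apex, at)
        n = (-t[1], t[0])
        away = sub(other_end, at)
        if n[0]*away[0] + n[1]*away[1] > 0:
            n = (-n[0], -n[1])
        l = math.hypot(n[0], n[1])
        return (n[0]/l, n[1]/l)

    n0 = unit_outward_normal(q0, q1, q2)
    n2 = unit_outward_normal(q2, q1, q0)
    q0n = (q0[0] + dist*n0[0], q0[1] + dist*n0[1])   # block 402: q0'
    q2n = (q2[0] + dist*n2[0], q2[1] + dist*n2[1])   # block 404: q2'
    # Block 406: lines through q0' and q2' with the original tangent slopes.
    t0, t2 = sub(q1, q0), sub(q1, q2)
    alpha = cross(sub(q2n, q0n), t2) / cross(t0, t2)
    q1n = (q0n[0] + alpha*t0[0], q0n[1] + alpha*t0[1])
    return q0n, q1n, q2n
```

Because q0 and q2 mirror each other about the axis and are displaced by equal amounts, the new triangle inherits the axis symmetry, so q1' lands back on the axis a.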
This procedure 500 will be described with respect to the case in which a bounding region is defined with respect to a single approximate offset curve which is at least a distance d from an original curve segment S. But the same procedure 500 can be applied with respect to a bounding region defined by two or more approximate offset curves. Furthermore, the procedure 500 will be described with respect to a single instance of an original curve segment S and its corresponding approximate offset curve; but an application can apply the procedure 500 with respect to multiple instances of original curve segments and corresponding approximate offset curves. In some cases, the system 100 can calculate an approximate offset curve in advance, e.g., before it is used in the procedure 500 of FIG. 5. In other cases, the system 100 can calculate the approximate offset curve in a dynamic manner, e.g., when it is needed by the procedure 500 of FIG. 5. In block 502, the system 100 determines whether a query point lies in a region bounded by the approximate offset curve and the original curve segment S. This operation can be performed by feeding the query point as an input into an implicit formula that is derived from the Bezier triangle defining the original curve segment S, and then feeding the query point as an input into an implicit formula that is derived from the Bezier triangle defining the approximate offset curve. For example, the system 100 can use the implicit equation described above to provide the implicit formulas. The outputs of the implicit equations indicate whether the query point lies in the region. In block 506, if it is determined that the query point lies outside the region (as assessed in block 504), then the system 100 can avoid detailed analysis of this query point. In block 508, if it is determined that the query point lies within the region, then the system 100 can perform detailed analysis on this query point.
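In code, the double implicit test of block 502 might look like the following sketch. The sign convention is our assumption (it presumes the offset lies on the convex side of the original curve), and the `offset_tri` used in the illustration below is a hand-picked outer parabola standing in for a curve produced by the FIG. 4 construction:

```python
def implicit(p, p0, p1, p2):
    """Implicit form of a quadratic Bezier, b1^2 - 4*b0*b2, evaluated at the
    barycentric coordinates of p; zero on the curve, sign gives the side."""
    (x, y), (x0, y0), (x1, y1), (x2, y2) = p, p0, p1, p2
    det = (x1 - x0)*(y2 - y0) - (x2 - x0)*(y1 - y0)
    b1 = ((x - x0)*(y2 - y0) - (x2 - x0)*(y - y0)) / det
    b2 = ((x1 - x0)*(y - y0) - (x - x0)*(y1 - y0)) / det
    b0 = 1.0 - b1 - b2
    return b1*b1 - 4.0*b0*b2

def in_bounding_region(p, orig_tri, offset_tri):
    """Block 502: with an outward (convex-side) offset, p lies in the
    bounding region when it is on the convex side of the original curve
    (positive implicit value) and on the concave side of the offset curve
    (negative implicit value)."""
    return implicit(p, *orig_tri) > 0.0 and implicit(p, *offset_tri) < 0.0
```

Points failing this test are dismissed per block 506; points passing it proceed to the detailed distance analysis of block 508.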
For example, in one implementation, the system 100 can employ any iterative distance-determination technique to determine the minimum distance between the query point and the original curve segment S. In block 510, the system 100 can perform a further application-specific action based on the output of block 508. For example, suppose that the query point represents a graphical display element (such as a pixel). The system 100 can modify an attribute of the graphical display element (such as a transparency value or a transverse parameter value) based on the distance of this element from the original curve segment S, e.g., in one case, so as to reduce the effects of aliasing. The procedure 500 can be modified in different ways, e.g., by incorporating additional tests to determine whether the query point lies in the region bounded by the original curve segment S and the approximate offset curve. For example, the system 100 can also determine whether the query point lies in a rectangular bounding box defined by the x and y extremes of the original curve segment S, extended by a distance d. Alternatively, or in addition, the system 100 can also determine whether the query point lies within a prescribed distance from the endpoints of the original curve segment S, and so on. In this case, the procedure 500 can determine that a query point lies within the bounding region only if it satisfies all of the inclusion tests. FIG. 8 is a graphical depiction of a first application of an approximate offset curve. Here, the original curve segment S defines a contour of some object, such as a graphical object (e.g., a font character, etc.). The approximate offset curve defines a region between the original curve segment S and the approximate offset curve, where, at all points, the approximate offset curve is at least a distance d away from the original curve segment S. Based on the analysis of FIG.
5, the system 100 can determine that query point W lies outside this region and that query point X lies inside the region. The system 100 can then perform detailed analysis for query point X, but not query point W. For example, the system 100 can apply an anti-aliasing operation to modify an attribute of a pixel associated with query point X, but not a pixel associated with query point W.

FIG. 9 is a graphical depiction in which first and second approximate offset curves bracket the original curve segment S, defining a region between the offset curves. Based on the analysis of FIG. 5, the system 100 can determine that query point Y lies outside the region, and that query point Z lies within the region. The system 100 can then perform detailed analysis for query point Z, but not query point Y.

C. Representative Computing Functionality

FIG. 10 sets forth illustrative computing functionality 1000 that can be used to implement any aspect of the functions described above. For example, the computing functionality 1000 can be used to implement any aspect of the system 100 of FIG. 1. In one case, the computing functionality 1000 may correspond to any type of computing device that includes one or more processing devices. In all cases, the computing functionality 1000 represents one or more physical and tangible processing mechanisms.

The computing functionality 1000 can include volatile and non-volatile memory, such as RAM 1002 and ROM 1004, as well as one or more processing devices 1006 (e.g., one or more CPUs, and/or one or more GPUs, etc.). The computing functionality 1000 also optionally includes various media devices 1008, such as a hard disk module, an optical disk module, and so forth. The computing functionality 1000 can perform various operations identified above when the processing device(s) 1006 executes instructions that are maintained by memory (e.g., RAM 1002, ROM 1004, or elsewhere).
More generally, instructions and other information can be stored on any computer readable medium 1010, including, but not limited to, static memory storage devices, magnetic storage devices, optical storage devices, and so on. The term computer readable medium also encompasses plural storage devices. In all cases, the computer readable medium 1010 represents some form of physical and tangible entity.

The computing functionality 1000 also includes an input/output module 1012 for receiving various inputs (via input modules 1014), and for providing various outputs (via output modules). One particular output mechanism may include a presentation module 1016 and an associated graphical user interface (GUI) 1018. The computing functionality 1000 can also include one or more network interfaces 1020 for exchanging data with other devices via one or more communication conduits 1022. One or more communication buses 1024 communicatively couple the above-described components together.

The communication conduit(s) 1022 can be implemented in any manner, e.g., by a local area network, a wide area network (e.g., the Internet), etc., or any combination thereof. The communication conduit(s) 1022 can include any combination of hardwired links, wireless links, routers, gateway functionality, name servers, etc., governed by any protocol or combination of protocols.

Alternatively, or in addition, any of the functions described in Sections A and B can be performed, at least in part, by one or more hardware logic components. For example, without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.

In closing, the description may have described various concepts in the context of illustrative challenges or problems.
This manner of explanation does not constitute an admission that others have appreciated and/or articulated the challenges or problems in the manner specified herein. More generally, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Patent application by Erik S. Ruf, Kirkland, WA (Microsoft Corporation), in class MODELING BY MATHEMATICAL EXPRESSION.
Patent US5986666 - Method for dynamic generation of synthetic images with automatic detail level, and implementation device

To produce a terrain model, the present invention employs a mesh generation technique based on Delaunay triangulation, and more precisely constrained Delaunay triangulation, schematically illustrated in FIG. 1. The points P defining the nodes of the mesh may be arbitrarily distributed in the plane of the region 1 being processed. This region 1 is one of the zones of the terrain. In the example in FIG. 1, this zone is simply a rectangle.

This mesh generation technique has a number of advantages. First, the simplicity of the objects (triangles) that are processed makes it possible to process them in real-time. This is due, in particular, to the fact that the majority of visualization and geometrical manipulation algorithms are simplified and accelerated by virtue of the exclusive use of triangles. Furthermore, for a given set of points (points of an altimetric map), the corresponding Delaunay triangulation is unique, which allows these points to be processed in any desired order. The remarkable property of Delaunay triangulation is that it generates triangles which are as equilateral as possible. This property is very advantageous in image synthesis, for which it reduces the problems of smoothing and numerical error. Delaunay triangulation allows a point to be added to or removed from an already triangulated set without having to recalculate all the points, because of the solely local influence of such a point. An interactive manipulation of this type is also beneficial for the real-time modification of a visualized terrain. Constrained Delaunay triangulation guarantees the existence of certain edges in the mesh, which makes it possible to respect the geometry of the objects integrated in the terrain (roads, railway lines, buildings, etc.).

Further to triangulation, the present invention employs filtering of the visualized terrain surfaces.
This filtering makes it possible to eliminate the least significant points of this surface, which simplifies the processing of the geographical zone to be visualized. The triangulation then carried out on the remaining points of this surface corresponds to the finest detail level of the database which provides these points. The filtering consists in using an iterative process to refine the triangulation of the terrain. For each point P, the distance between this point and the point Q which is the projection of P onto the current triangulation is calculated. This distance corresponds locally to the error existing between the surface of the approximated terrain (resulting from the current triangulation) and the surface of the real terrain. After having thus calculated the errors relating to the various points, the point of maximum error is selected, and is inserted into the current triangulation. This step is repeated until the maximum error of the points of the zone in question is less than a threshold, which is fixed as a function of the desired realism of the display, while taking into account the computing power needed to cross this threshold for various types of terrain.

The curve in FIG. 2 shows the change in the error (corresponding to the distance PQ as defined above) as a function of the number of points defining a given zone. Since the terrain is not in general a convex surface, the insertion of a point into the current triangulation may have the effect of increasing the error instead of decreasing it. Thus, for example as represented in FIG. 3A, it is assumed that four points P1, P2, P3 and P4 are initially available along an undulating terrain section. The initial approximation is the segment P1-P2. The point which is then inserted is the one for which the distance to the segment P1-P2 is the greatest. Let us assume that this is P3.
It is then observed that the new error of P4 (distance from P4 to the segment P3-P2) may be greater than the distance D' between P3 and its projection Q3 onto the segment P1-P2. It is in general observed that, after several consecutive insertions, the error finally falls below the value (D' in the example in FIG. 3A) which it had when the first point was inserted (P3, for the example in question). In practice, this insertion step is repeated until the maximum error becomes less than a fixed threshold. In the diagram in FIG. 2, the temporary peaks due to the insertion of points (such as P4) corresponding to reliefs having the opposite curvature to the current curvature have been removed. Thus, according to this FIG. 2, a quasi-exponential decrease in the maximum error is observed as soon as the number of points defining a given zone (of dimensions of the order of 100 km²) exceeds one thousand.

It is therefore easy, for a given type of relief (little undulation, moderately mountainous, very mountainous, etc.), to carry out tests in order to obtain a curve such as the one shown in FIG. 2, and to optimize the ratio between the accuracy of the representation of the terrain and the number of points needed for a faithful representation, without unnecessarily overloading the graphics calculation processor or "graphics engine" (beyond a certain number of points, the gain in precision becomes derisory in view of the increase in the number of calculations).

On the basis of the principles set out above, the invention consists in modifying the database in real-time so as to send the graphics engine only the polygons characteristic of the observer's point of view. The precision of the terrain is adjusted locally by adding and removing points in the current mesh.
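The greedy refinement loop described above (repeatedly insert the maximum-error point until the maximum error falls below the threshold) can be sketched on a 1-D height profile. This is our simplification for clarity; the patent applies the same insertion rule against a 2-D triangulation.

```python
def refine(samples, threshold):
    """samples: list of (x, z) pairs sorted by x.
    Returns the sorted indices of the points kept."""
    kept = [0, len(samples) - 1]          # start from the endpoints
    while True:
        worst_err, worst_i = 0.0, None
        idx = sorted(kept)
        for a, b in zip(idx, idx[1:]):
            (xa, za), (xb, zb) = samples[a], samples[b]
            for i in range(a + 1, b):
                x, z = samples[i]
                # vertical distance to the current segment (the PQ error)
                z_interp = za + (zb - za) * (x - xa) / (xb - xa)
                err = abs(z - z_interp)
                if err > worst_err:
                    worst_err, worst_i = err, i
        if worst_i is None or worst_err < threshold:
            return sorted(kept)
        kept.append(worst_i)              # insert the max-error point

profile = [(0, 0.0), (1, 0.05), (2, 1.0), (3, 0.1), (4, 0.0)]
assert refine(profile, threshold=0.5) == [0, 2, 4]   # the peak survives
```

With a tighter threshold, more of the low-error points would be inserted, mirroring the quasi-exponential error decrease of FIG. 2.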
The selection of the points to be inserted and to be removed takes into account the relief (in particular in order properly to respect the ridge lines, which are an important element of the landscape for a helicopter or aircraft pilot), the observer's position and the type of vehicle (tank, aircraft, helicopter, etc.). The diagram in FIG. 4 gives a simplified representation of the two main software layers 2, 3 of the graphics engine employing the method of the invention. Layer 2 is the one charged with the Delaunay triangulation, carried out in asynchronous mode. Amongst other things, it regulates the loading of the processor (optimization of the processing of the data packets being exchanged in asynchronous mode), calculates the detail level and regenerates the database (after local modification of the description of the terrain) in the format of the host processor. Layer 3 is the graphics task proper, which refreshes the screen at a rate of, for example, 30 Hz. This task essentially carries out the pretruncation of the regions (so as to have to display only the regions visible to the user), the display and the transition between two detail levels. The interaction between the two software layers 2 and 3 takes place in the following way. It is assumed that the graphics system is displaying scenes which are to be seen by an observer located in a mobile vehicle simulator at a given instant. At regular time intervals (for example at a rate of 30 Hz, as specified above), the graphics task 3 sends the Delaunay-meshed polygons to be displayed (as a function of the movement of the said vehicle). These polygons correspond to the current detail level which is being displayed. The Delaunay task 2 asynchronously recalculates the appropriate detail level, in order to take into account the change in the observer's position relative to the displayed region. 
The points which have become pertinent as a result of this (visible by the observer and necessary for realistic definition of the landscape, that is to say not too far away from the observer) are inserted into the Delaunay mesh. The points which have become superfluous are removed. The method for selecting these points is explained in more detail below. As soon as the new mesh is calculated, it is substituted for the old one by toggling (marked "flip-flop" in FIG. 4).

In response to each request for a change in detail level which is sent by the layer 3, the graphics processor recalculates the pertinence of the points from the database. In the case of an observer who can turn very quickly (helicopter pilot or tank driver), all the terrain regions of the local database are processed. In the contrary case (civil aviation pilot, etc.), only the regions relating to the heading of the vehicle are recalculated. The number of regions to be processed can thus be reduced.

The points to be inserted in or removed from a mesh are selected while respecting the following three criteria:

1) Criterion of determinism. The triangulation obtained for a given observer position should always be the same, irrespective of the path followed by this observer to reach this position. In other words, the calculation of the pertinence of a point must be independent of the preceding calculations.

2) Criterion of aspect continuity of the regions. If a point is inserted in or removed from a region edge, it must also be inserted in or removed from the neighbouring region. This guarantees geometrical connection between neighbouring regions.

3) Criterion of respecting the relief. The characteristic points of the terrain must be preserved.

According to the invention, the calculation of the pertinence of a point Pn of a mesh (that is to say the determination of the need to keep this point or remove it when changing the detail level) takes place as follows.
Let O be the point in space where the observer is located, and let Qn be the vertical projection (along the local vertical) of Pn onto the triangulation Pm which does not contain the point Pn. The variable determining the pertinence of the point Pn is the angle Ea formed by the half-lines OPn and OQn (see FIG. 5). This angle Ea can be referred to as the angular error (corresponding to the removal of Pn).

In order to meet the first criterion mentioned above, it is important for the evaluation of the angle Ea always to give the same value for a given current position of the point, because otherwise the triangulation would be non-deterministic. An instability phenomenon would even be produced if the calculation of Ea depended on the neighbouring points, because the insertion of a point would then affect the angular errors of the neighbouring points, which could lead to an infinite loop of point insertions and removals. Furthermore, the boundary points between two adjacent regions would not be inserted or removed at the same time, because their angular errors would be different, and the above criterion 2 would not be met.

For all these reasons, when calculating Ea, the distance Dn between Pn and Qn is not calculated by projecting Pn onto the current triangulation which does not contain Pn. Instead, this distance becomes constant and is precalculated in the manner explained above with reference to FIGS. 3A to 3C. During the visual simulation, this elevation error is converted into an angular error relative to the observer. If the angular error thus calculated is greater than a threshold value Ethreshold, the point Pn is inserted, otherwise it is removed.

The graphics task 3 controls the switching of the detail level, that is to say the switching between a scene prior to an operation of point insertion/removal, due to a change in the observer's position, and a scene just after this operation.
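The pertinence test can be sketched in a few lines: the precalculated elevation error Dn is converted into the angle it subtends at the observer, and compared against the threshold. This is our illustration, with names of our choosing; the geometry of FIG. 5 is reduced to a right-triangle approximation.

```python
import math

def angular_error(elevation_error, dist_to_observer):
    """Angle (radians) subtended at the observer O by the precalculated
    elevation error Dn of a point whose projection Qn lies at the given
    distance from O."""
    return math.atan2(elevation_error, dist_to_observer)

def keep_point(elevation_error, dist_to_observer, e_threshold):
    """Insert the point if its angular error exceeds the threshold,
    otherwise remove it. Depends only on the precomputed error and the
    observer's position, satisfying the criterion of determinism."""
    return angular_error(elevation_error, dist_to_observer) > e_threshold

# The same 10 m elevation error matters close up but not far away:
assert keep_point(10.0, 200.0, e_threshold=0.01)       # ~0.05 rad: insert
assert not keep_point(10.0, 5000.0, e_threshold=0.01)  # ~0.002 rad: remove
```

Raising `e_threshold` is exactly the workload-management knob mentioned later: fewer points pass the test, so fewer polygons reach the graphics engine.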
The process of toggling from a detail level N to a detail level N+1 has been represented in simplified fashion in FIG. 6, which is a view from above. To prevent the transition between the triangulations that correspond to these detail levels from producing a visual artefact referred to as "popping", and instead to make it virtually invisible, or in any case unproblematic, morphing is carried out, this being an interpolation technique which is well-known per se. The intermediate morphing triangulation is displayed throughout the time taken for the morphing operation (in order to avoid an abrupt jump between the initial and final detail levels). This intermediate triangulation is calculated by preserving the points of the triangulations of the two detail levels, while adding the possible intersection points to them.

In the simplified example in FIG. 6, three adjacent non-coplanar triangles forming part of the detail level N are represented on the left, these triangles together forming a skew surface with pentagonal contour a, b, c, d, e, in which the edges that are each common to two triangles are be and bd. The detail level N+1 has been represented on the right of the figure (it would equally well be possible to discuss the level N-1); its skew surface has the same pentagonal contour a, b, c, d, e as in the detail level N, but has a vertex f which, when viewed from above, lies substantially at the centre of the pentagonal contour. This vertex f is joined by five edges to the five respective vertices of the pentagon. The intermediate triangulation has been represented at the middle of FIG. 6.

In order to ensure a virtually invisible transition between the detail levels N and N+1, three new points are imposed in the initial triangulation (see the representation at the middle of FIG.
6):

- i₁ on the edge be (on the vertical passing through be and af),
- i₂ on the edge bd (on the vertical passing through bd and cf),
- f on the face bde (on the vertical passing through the final position of f).

These three points are then progressively shifted (morphing operation), until:

- i₁ is on the edge af,
- i₂ is on the edge fc,
- f reaches its reference position.

When the final positions of these three points have been reached, the morphing operation is ended, and the new triangulation of the detail level N+1 is displayed (that is to say the triangular surface on the right in FIG. 6).

The speed of rise of a point depends on several factors, in particular at least one of the following:

- its visibility by the observer: a point which does not lie in the observer's instantaneous field of view is raised immediately;
- the observer's speed of movement: when he is stationary, the raising of the points is frozen;
- the distance from the point to the observer: for equal altitude, the point rises faster as its distance from the observer increases.

By virtue of these characteristics of the method of the invention, rapid switching of the consecutive detail levels is obtained, with the best possible fluidity of transition between images. The other advantages of the invention are:

- a reduction in the volume of the database without loss of image quality. The visualization system is capable of calculating an image with rendition equal to that obtained by the methods of the prior art, with far fewer polygons: on average, 2/3 of the facets can be removed without degrading the image;
- automation of the workload management of the graphics processor. It adapts the database to the hardware performance simply by adjusting a parameter (threshold angular error). It therefore makes it possible to obtain the best possible realism for the scene display, in view of the capacities of the graphics and visualisation system.
It simplifies the generation of the database: precalculation of the detail levels becomes superfluous (the appropriate detail level is generated directly by the triangulation layer 2), which commensurately reduces the cost price of the means for producing the database. It allows real-time modification of the representation of the terrain, in particular in order to make it possible to adapt to demanding simulation scenarios, or to new requirements, for example shared interactive simulation.

The present invention will be understood more clearly on reading the detailed description of an illustrative embodiment, illustrated by the appended drawing, in which:

FIG. 1 is an example of constrained Delaunay triangulation, which can be employed by the present invention,
FIG. 2 is a diagram showing the change in the triangulation error of a surface as a function of the number of points chosen to represent it,
FIGS. 3A-3C are explanatory diagrams showing the effect of the insertion of a significant point on the triangulation error,
FIG. 4 is a simplified diagram of the software architecture of a graphics processor employing the method of the invention,
FIG. 5 is a simplified explanatory perspective view defining the angular error criterion used by the present invention, and
FIG. 6 is a simplified example explaining the toggling from a detail level N to a detail level N+1, according to the invention.

1. Field of the Invention

The present invention relates to a method for dynamic generation of synthetic images with automatic detail level, as well as to a device for implementing this method.

2. Discussion of the Background

The real-time image synthesis machines used in flight simulators can generate almost fully realistic images, in particular by large-scale use of photographic textures. However, the maximum number of polygons which can be displayed at each image cycle is still their main limitation. Unfortunately, this number is found to be very poorly used in some situations.
This is because the same attention is paid to the distant facets as to the close facets, irrespective of the observer's visibility distance. However, during calculation, the pertinence in the image of the distant facets is very small compared to that of the facets found close to the observer. Some attempts have been made in the past to simplify the long-distance landscape. However, it has been found that they are not very effective and are too constraining in terms of the database generation.

It is possible to visualize real terrains in synthetic images by virtue of the existence of altimetric maps resulting from radar or satellite observations. These altimetric data are generally in the form of a two-dimensional grid giving the altitude at each point. The terrain model (algorithms and data structures) which manages these altimetric data should take into consideration the following three requirements, which are essential, in particular for aircraft pilot simulators:

Respecting the Relief

The characteristic aspects of the relief (peaks, valleys) are very important visual references for pilots and influence the quality of their training and their decisions during a mission. Faithfully representing the ridge lines is therefore an essential criterion for any cartographic model.

Economy of Information

For equal precision, the number of polygons representing a given terrain directly influences the response times of a real-time simulator (rendition, collision, roll, intervisibility, etc.). Since the roughness of a terrain is not regular, the mesh generation should adapt to the relief, and be coarse in zones with constant slope and fine in undulating parts.

Speed of Generation

Since the simulation databases may cover thousands of km², their generation cost is directly linked with the use of high-performance algorithms which make it possible to integrate the various data sources (planimetry, altimetry, photometry) in a minimal amount of time.
Since the databases of aircraft simulators are in general very extensive, the number of facets representing the terrain is considerable. However, the visual system can display in real-time only a few thousand facets. In order very rapidly to eliminate those which are not in the field of view, region pre-truncation is carried out. During the modelling of the database, the terrain is partitioned into rectangular zones referred to as regions. Simple pyramid-box intersection calculations make it possible to select the visible regions and thus eliminate a very large number of facets.

This division is also very useful if the database cannot be loaded into memory in one block. It is then sufficient to load only the regions lying in a sphere which is centred on the observer and has a radius equal to the visibility distance. This local database is updated as the observer moves: the regions leaving the sphere are dumped and replaced by those entering it. It is thus only memory space which limits the size of the databases.

For smaller visibility distances (<10 km) and a moderately undulating terrain, region pre-truncation is found to be sufficient to guarantee the image calculation frequency. Beyond this, the workload management of the visual system remains problematic. Overloading the visual system with polygons results in image jumps following cycle overflows. This situation is scarcely acceptable for a real-time flight simulator. The only remedy currently available is to simplify the zones of the database where the display "jams". This solution is expensive and not very practical. Furthermore, for large visibility distances, the relief is simplified far too much. Only sophisticated detail level algorithms can greatly reduce the number of facets displayed without degrading the quality of the image.
These algorithms have arisen from the following observation: the pertinence of a polygon of the terrain, that is to say the number of pixels which it occupies on the screen, decreases as the polygon becomes further away. If nothing is done, some of the graphics power of the machine is wasted on clipping, projecting, texturing, etc. polygons which, in the end, only occupy one or two pixels on the screen. The entire difficulty therefore consists in simplifying a terrain seen by a mobile observer without compromising the pertinence of the image. Faced with the mathematical and algorithmic complexity, no simulator manufacturer has integrated a device of this type truly effectively.

A first approach consists in precalculating various detail levels for each region and in switching from one level to another in real-time. Unfortunately, this method has many drawbacks:

Dependence on the Regions

Since the regions here fulfil the role of boundaries between detail levels, the quality of the switching depends on their sizes.

Memory Requirement

The memory requirement limits the number of detail levels per region and penalizes the dynamic loading of the database. Furthermore, this number varies as a function of the relief of each region.

Excessively Abrupt Switching

As the number of detail levels of a region decreases, the abruptness of the switching increases. These image artefacts are a great problem for the pilot.

In order to remedy these problems, hierarchical terrain models have been envisaged. A hierarchy of detail levels within the same data structure (quad tree, Delaunay pyramid) only stores the changes for moving from one detail level to another, which gives a significant memory saving.
The tree structure is followed in order to select the triangles to be displayed as a function of the required detail level. Three major drawbacks remain:

Overall Detail Level

The construction of the successive detail levels is based solely on a precision criterion, independently of an observer's position. The order in which the points appear is fixed, whereas the pertinence of the points does indeed vary as a function of the observer's position.

Modifications to the Terrain are Impossible

These hierarchical data structures are rigid and do not allow any modification to the terrain in real-time. It would be necessary to reconstruct the entire tree structure, which is too expensive.

Additional Cost of Generating the Database

The generation of the database remains something which is very expensive. The constraints linked with real-time complicate the modelling of the scene.

The subject of the present invention is a method for dynamic generation of synthetic images, allowing synthetic images to be generated in real-time as faithfully as possible and with the best possible rendition, without needing powerful calculation means, by taking account of the pertinence of the points of the various detail levels as a function of the observer's position, this method also allowing modifications to the configuration of the terrain in real-time. A further subject of the invention is a dynamic generator of synthetic images, including means which are as inexpensive as possible, preferably means such as computers and database storage means which are simple and widely available.
The method according to the invention consists in forming a database from a file containing the topographical data relating to the terrains to be visualized, in eliminating the least significant data, then in calculating in real-time the points to be displayed as a function of the required detail level, which is itself a function of the observer's position, the maximum altitude of the terrain to be visualized, the visibility distance and the required detail level, by selecting a subset of points in the database which define a terrain portion whose detail level has changed, in performing irregular mesh generation on the selected terrain portion, preferably a mesh generation of the Delaunay type, and in applying a texture to the polygons resulting from the mesh generation. According to one aspect of the method of the invention, the precision of the representation of the terrain is adjusted locally by adding or removing points in the mesh, these points being selected as a function of the relief, the observer's position and the type of vehicle carrying the observer.
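For reference, the Delaunay property that underlies the meshing step ("triangles which are as equilateral as possible") is equivalent to the empty-circumcircle criterion: no point of the set lies strictly inside the circumcircle of any triangle. A minimal self-contained check of that criterion (our illustration, not code from the patent):

```python
import math

def circumcircle(a, b, c):
    """Circumcenter and radius of triangle abc (assumed non-degenerate)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def is_delaunay(tri, others, eps=1e-9):
    """A triangle is locally Delaunay iff no other point lies strictly
    inside its circumcircle."""
    centre, r = circumcircle(*tri)
    return all(math.hypot(p[0] - centre[0], p[1] - centre[1]) >= r - eps
               for p in others)

tri = ((0, 0), (1, 0), (0, 1))
assert is_delaunay(tri, [(1, 1)])          # (1,1) lies ON the circle: OK
assert not is_delaunay(tri, [(0.4, 0.4)])  # strictly inside: not Delaunay
```

The locality of this criterion is what allows a point to be inserted or removed without recalculating the whole mesh, as noted in the description above.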
What Numbers Can We Make Now?

Copyright © University of Cambridge. All rights reserved.

Why do this problem?

This problem offers students the opportunity to consider the underlying structure behind multiples and remainders, as well as leading to some very nice generalisations and justifications.

Possible approach

Start by showing the interactivity and clicking on 'New Numbers' several times: "This interactivity can generate lots of different sets of bags like the set we worked on last lesson. Later on I'm going to generate a set of bags and ask you what is special about the total when I choose three, four, five, six... 99 or 100 numbers. To prepare a strategy for answering these questions, here are some bags to get you started."

Display this (available as a ). Then arrange the class in pairs or small groups, and allocate one or two sets of bags to each. "In a while, you'll need to be able to explain to the rest of the class what happens when you add together three, four, five, six... 99, 100 numbers from your set of bags, and how you worked it out."

While groups are working, circulate and listen for any useful insights to bring out in the whole-class discussion later. If anyone finishes their set of bags early, they can apply their strategy to someone else's set. Bring the class together, and invite groups to share what they found. Then allow groups a few minutes to discuss a general strategy for answering the questions generated by the interactivity.

Finally, display the interactivity again. Generate new questions, and invite the groups to use their strategy to work out what happens for three, four, five, six, 99 or 100 numbers. Check their answers, and then repeat, giving each group a chance to have a go at answering a 99 or 100 question.
You could finish off by asking the final question from the problem: "If the bags contained 3s, 7s, 11s and 15s, can you describe a quick way to check whether it is possible to choose 30 numbers that will add up to 412?"

Key questions

If I choose 5 numbers that are each one more than a multiple of 5, what is special about their total? Why?

Possible extension

There are a few related problems that students could work on next:
Take Three from Five
Shifting Times Tables
Charlie's Delightful Machine
A Little Light Thinking
Where Can We Visit?
Cinema Problem

Possible support

Begin by asking students to explore what happens when they add numbers chosen from a set of bags containing 2s, 4s, 6s and 8s. They could then consider what happens when they add numbers chosen from a set of bags containing 1s, 11s, 21s and 31s. Can they explain their findings?
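The final question has a one-line answer via remainders, and a short script can also confirm it by brute force. This is an illustrative sketch (Python, not part of the original teachers' notes): every bag value leaves remainder 3 on division by 4, so any 30 chosen numbers sum to a total leaving remainder 2, while 412 leaves remainder 0, so no selection can work.

```python
from itertools import combinations_with_replacement

values = (3, 7, 11, 15)   # the bag contents in the final question
target, k = 412, 30

# Every value is 3 mod 4, so any k chosen numbers sum to 3k mod 4.
assert all(v % 4 == 3 for v in values)
print((3 * k) % 4, target % 4)  # 90 mod 4 = 2, but 412 mod 4 = 0

# Brute force over all multisets of 30 values confirms no selection works.
assert not any(sum(c) == target
               for c in combinations_with_replacement(values, k))
```

The multiset search is cheap here: there are only C(33, 3) = 5456 ways to pick 30 numbers from four values with repetition.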
A thought on Linear Models on Stocks - DataPunks

A thought on Linear Models on Stocks

Timely Portfolio has a nice post about linear-model systems for stocks. The idea follows the steps below:

• Get the weekly closing values of the S&P 500.
• Choose a time window (i.e. 25 weeks) and, for each window, linearly regress the subset of closing values.
• Choose an investment strategy based on the residuals, the running average of slope coefficients, or the running average of correlation with data points.

The idea is quite simple, and so far, from Timely Portfolio's post, it looks like the drawdown is behaving nicely. It seems like the idea could be extended to a non-linear method. The residuals are getting larger and larger, and this indicates that linear methods become less reliable as time goes by.

# code from Timely Portfolio (lightly repaired: the quantmod library
# call and data download were missing, the loop was left unclosed, and
# the running signal is now accumulated inside the loop)
# http://timelyportfolio.blogspot.ca/2011/08/unrequited-lm-love.html
library(quantmod)
getSymbols("^GSPC", from = "1950-01-01")
GSPC <- to.weekly(GSPC)[, 4]
width <- 25
for (i in (width + 1):NROW(GSPC)) {
  linmod <- lm(GSPC[(i - width):i, 1] ~ index(GSPC[(i - width):i]))
  last_resid <- coredata(linmod$residuals[length(linmod$residuals)])
  if (i == width + 1) {
    signal <- last_resid
  } else {
    signal <- rbind(signal, last_resid)
  }
}
signal <- as.xts(signal, order.by = index(GSPC[(width + 1):NROW(GSPC)]))
plot(signal, main = "Residuals through time")
plot(log(signal), main = "Log of Residuals through time")

4 Comments Already
One pingback/trackback
Vandermonde Determinant

Date: 02/11/2003 at 19:55:04
From: Roger
Subject: Determinant

Compute the n x n determinant:

| 1        1        1        ...  1        |
| x1       x2       x3            xn       |
| x1^2     x2^2     x3^2     ...  xn^2     |
| .                               .        |
| .                               .        |
| x1^(n-1) x2^(n-1) x3^(n-1) ...  xn^(n-1) |

Date: 02/13/2003 at 09:11:40
From: Doctor Jacques
Subject: Re: Determinant

Hi Roger,

Let us consider a simple example, with n = 3. The determinant is:

    | 1   1   1   |
D = | x   y   z   |
    | x^2 y^2 z^2 |

If you expand this determinant, you will get a polynomial in x, y, z. Each term in this polynomial is of the form a*b^2, where a and b are taken from x, y, z. This means that each term is of degree 3.

Notice now that, if x = y, the determinant has two equal columns, and is therefore equal to 0. This means that our polynomial is divisible by (x-y). In a similar way, D is divisible by (y-z) and (z-x). We conclude that:

D = (x-y)(y-z)(z-x)*p(x,y,z)

where p(x,y,z) is an unknown polynomial in x, y, z. However, (x-y)(y-z)(z-x) is already of degree 3, so p(x,y,z) is just a constant, say A. We have thus:

D = A(x-y)(y-z)(z-x)

It remains to find the value of A. This can be done by looking at the coefficient of a particular term, for example y*z^2.

You should have no trouble generalizing this to higher degrees. Note that the degree of the polynomial is n(n-1)/2, and this is just (by chance?) the number of pairs of different variables. This determinant is called a Vandermonde determinant.

Please feel free to write back if you require more assistance.

- Doctor Jacques, The Math Forum
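The closed form is easy to check numerically: with rows in increasing powers, the constant works out so that the determinant equals the product of (x_j - x_i) over all pairs i < j. The following is an illustrative NumPy sketch, not part of the original exchange:

```python
import numpy as np
from itertools import combinations

def vandermonde_det(xs):
    # np.vander(..., increasing=True) puts one point per row; transposing
    # gives the layout in the problem, and does not change the determinant.
    V = np.vander(xs, increasing=True).T
    return np.linalg.det(V)

xs = [2.0, 3.0, 5.0, 7.0]
product = 1.0
for i, j in combinations(range(len(xs)), 2):
    product *= xs[j] - xs[i]

# (3-2)(5-2)(7-2)(5-3)(7-3)(7-5) = 240
assert abs(product - 240.0) < 1e-12
assert abs(vandermonde_det(xs) - product) < 1e-6
```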
Eddie Game: Steps to Better Metrology

Be honest, did you glance at the title and read it as "meteorology"? Or did you see metrology but assume (quite plausibly) that I had misspelled meteorology? Given the recent weather, you can be forgiven. Metrology is the science of measurement — a task that a great many of us at TNC do with surprising frequency. (Witness the effort demonstrated in the November 2012 issue of Chronicles alone to measure resilience.) Think of some of the things that we might measure in a conservation planning effort; disturbance, viability, condition, connectivity, intactness, risk, cost, biodiversity, threat, opportunity, service, etc. But despite the fact that assigning numbers to things is an everyday Conservancy activity, we violate basic rules of metrology almost as frequently. Before you skip a few pages on the reasonable premise that this is just Eddie banging on about planning again, consider that at the very least I'm hoping to license you to add another expertise to your resume.

Natural vs. Constructed Scales

In conservation, our main purpose for measuring things is to compare them — generally to make decisions about which activities we should prioritize and where. Sometimes the things we want to measure have natural scales — these are the easy ones. Natural scales are obvious and pre-existing ways to measure something — stream flow in volume (m3/second), populations by number of individuals, cost in dollars. Natural scales are great because they are relatively objective; two people should be able to measure the same thing and get the same number. Frequently, however, we want to measure things — such as resilience or disturbance — that do not have natural scales. In these cases, we need to use constructed scales. We can construct a scale to measure anything. This is where many conservation scientists demonstrate their skill as metrologists.
For instance, we might assess the disturbance to different areas or habitats in a region on a scale of 1-7, or alignment of a strategy or geography with TNC’s expertise on a scale of 1-4. Constructed scales can even be simple linguistic interpretations (e.g., threat classified as “high,” “medium,” or “low”) that are subsequently related to numerical values (e.g., high = 3, medium = 2, low = 1). The basic premise of constructed scales is that the measurement reflects underlying empirical relationships in the thing we are measuring. Constructed scales allow us to measure things for which there are neither natural scales nor established data. They also allow us to integrate data on a number of variables and from a variety of sources — including in many cases, a good degree of expert judgement. These strengths make constructed scales really useful in conservation. The Potential Issue with Constructed Scales But the scores assigned to things on constructed scales are essentially arbitrary — there is no objective reason why a relatively undisturbed habitat should be given a score of 4 rather than 5, for example. What these constructed scales typically represent is a set of ordinal numbers. They tell us that a score of 2 is better than a score of 1 and worse than a score of 3. If we restrict our interpretation of such scales to simple ordinal representations between alternatives (e.g., alternative X is better than alternative Y for things Z), then the arbitrary nature of the numbers is not problematic. However, because ordinal numbers do not tell us how much better 2 is than 1, constructed ordinal scales become an issue when we try to perform any arithmetic on them, such as adding scores together or taking the mean across a number of scores. Performing this sort of math on an ordinal scale assumes a strict relationship between the numbers (that 4 is twice as good as 2) that the constructed scale might never have possessed. 
Yet we perform math on our constructed scales all the time. Take the Conservation Action Planning (CAP) workbook or the software Miradi. To help compare target viability (amongst other things), both tools combine measurements of size, condition and landscape context using the following scale: Very Good = 4, Good = 3.5, Fair = 2.5 and Poor = 1. The overall rank is given by the arithmetic mean of these three categories.

To illustrate the problem with doing this, consider two habitats, A and B. Habitat A receives three scores of Fair, whereas Habitat B receives two scores of Good and one of Poor. Comparing the totals (the ordering is the same under the arithmetic mean), Habitat B (total of 8) would be ranked above Habitat A (total of 7.5). But if we adjusted our choice of scale such that Good was worth 3 rather than 3.5, Habitat A (total of 7.5) would now be ranked above Habitat B (total of 7). As Wolman (2006) eloquently puts it in an article on measurement theory: the "truth or falsity of results derived from measurements should not depend on a fortuitous choice of scale."

The above example shows how easily basic rules of metrology can be violated and the results rendered somewhat arbitrary. We should improve our science related to measurement, especially as measurement is so often the place where our great science meets actual management decisions. Here are some very simple ways to improve your measurement practices:

• Recognize that you are effectively a metrologist and take pride in your expertise.
• Be aware of the type of scale something is being measured on, what the numbers mean, and what sort of math you can admissibly perform on them.
• To check whether the math you are doing is reasonable for that scale, go back to the underlying data and ask if "4" is unambiguously (in other words everyone would agree) twice as good as "2."
• Where possible, use natural scales.
Even if data in the logical natural scale doesn't exist (say for population numbers), ask experts to give you estimates in the natural scale rather than a constructed scale.

• If you need to construct a scale and measure things on it, do so in a way that preserves interval relationships. This might require using a more resolved scale, say 0 – 100 rather than 1 – 4.
• If things need to be combined, normalize rather than convert to constructed scales. Converting to a constructed scale usually just loses information.
• Consider multiplication rather than addition. Multiplying has the interpretation of weighting one thing by another thing and can avoid some of the issues of meaningfulness that come with adding or averaging.

So update your CVs. And keep measuring.

Wolman, A. G. 2006. Measurement and meaningfulness in conservation science. Conservation Biology 20:1626-1634.

*Photo: Shelly S/Flickr via Creative Commons
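The habitat example can be reproduced in a few lines. This is an illustrative sketch (Python; the function and variable names are mine, not from the article) showing the ranking flip when the arbitrary value assigned to "Good" moves from 3.5 to 3:

```python
def rank(habitats, scale):
    # Total score per habitat under a given constructed scale.
    totals = {name: sum(scale[s] for s in scores)
              for name, scores in habitats.items()}
    return sorted(totals, key=totals.get, reverse=True)

habitats = {"A": ["Fair", "Fair", "Fair"],
            "B": ["Good", "Good", "Poor"]}

cap_scale = {"Very Good": 4, "Good": 3.5, "Fair": 2.5, "Poor": 1}
alt_scale = {"Very Good": 4, "Good": 3,   "Fair": 2.5, "Poor": 1}

assert rank(habitats, cap_scale) == ["B", "A"]  # B totals 8,  A totals 7.5
assert rank(habitats, alt_scale) == ["A", "B"]  # B totals 7,  A totals 7.5
```

Nothing about either habitat changed between the two runs; only the arbitrary number attached to "Good" did.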
Department of Mathematics and Statistics Kimberly M. Childs, Ph.D. Professor and Dean of the College of Sciences and Mathematics Department of Mathematics and Statistics Stephen F. Austin State University Office: Miller Science Building 100 Phone: 936-468-2805 E-mail: kchilds@sfasu.edu Degrees Earned: Ph.D., Texas A&M University, Mathematics Education (1995) M.S., Stephen F. Austin State University, Mathematics Teaching (1988) B.S., Dallas Baptist University, Mathematics (1977)
Divisibility proof in random integer subset

May 8th 2009, 11:52 PM #1

Can anyone help me getting started on this one? I figure it has something to do with the pigeonhole principle, but I don't know where to start.

May 9th 2009, 10:52 AM #2

We can prove a more general result: if $A$ is a subset with $n+1$ elements of $\{1,2,3,\ldots,2n\}$, then there exist $a,b\in A$ with $a\mid b$.

Each element of $\{1,2,\ldots,2n\}$ can be written as $q\cdot 2^m$ where $q$ is odd and $m\geq 0$. Let the odd parts $q$ be our pigeonholes. We see that there are $n$ pigeonholes, because there are $n$ odd numbers among $1,2,\ldots,2n$. Thus, two of the $n+1$ elements end up in the same pigeonhole, say $a=q\cdot 2^{m_1}$ and $b=q\cdot 2^{m_2}$ with $m_1<m_2$. Now it is clear that $a\mid b$.

May 10th 2009, 11:14 AM #3

It took me a while to get this but it's a really nice proof.
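The proof is easy to test empirically. The sketch below (illustrative Python, not from the thread) writes each element as an odd part times a power of two, then checks both the pigeonhole count and the divisibility conclusion on random (n+1)-element subsets of {1, ..., 2n}:

```python
import random
from itertools import combinations

def odd_part(k):
    # Strip factors of two: k = q * 2^m with q odd; return q.
    while k % 2 == 0:
        k //= 2
    return k

def has_divisible_pair(subset):
    # True if some smaller element divides some larger one.
    return any(b % a == 0 for a, b in combinations(sorted(subset), 2))

n = 12
for _ in range(500):
    A = random.sample(range(1, 2 * n + 1), n + 1)
    # Pigeonhole: n+1 elements, but only n possible odd parts.
    assert len({odd_part(k) for k in A}) <= n
    assert has_divisible_pair(A)
```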
P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, "QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials," Journal of Physics: Condensed Matter, vol. 21, no. 39, p. 395502, 2009. http://dx.doi.org/10.1088/0953-8984/21/39/395502
Is there water flow against gravity?

Hi all, I have been with this issue for some time and think I reached a conclusion, but am not sure and was hoping that you guys could help me out with this.

I was considering the scenario where you have a closed tube like the one you see in the image. Since there is no upper contact with atmospheric pressure, the water will not fall to the plate. It is also known that, above a certain height, the water column's weight overcomes atmospheric pressure, thus allowing the water column to fall partially. I have calculated that, for an atmospheric pressure of 101300 Pa, one would need a water column of 10.13 m to start producing vacuum.

Now, the doubt comes in this case: imagine that, for some reason, the water level goes below 10.13 m (consequently, adding more vacuum to the top of the tube). Will atmospheric pressure actually force the water up against gravity until it reaches the height of 10.13 m?

I hope I could explain this properly. Thanks in advance for your help guys
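The quoted height comes from balancing the weight of the column against atmospheric pressure, h = P / (rho * g). A small sketch (illustrative Python; the post's numbers correspond to P of about 101300 Pa together with the rounded value g = 10 m/s^2):

```python
# Height of a water column whose base pressure equals atmospheric pressure:
# rho * g * h = P  =>  h = P / (rho * g)
def column_height(p_atm, rho=1000.0, g=10.0):
    return p_atm / (rho * g)

print(column_height(101300.0))          # 10.13 m with g = 10
print(column_height(101300.0, g=9.81))  # about 10.33 m with standard g
```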
Decision Trees

A decision tree is a tree-like structure that represents the various conditions and the subsequent possible actions. It also shows the priority in which the conditions are to be tested or addressed. Each of its branches stands for one of the logical alternatives, and because of the branch structure it is known as a tree.

The decision sequence starts from the root of the tree, which is usually on the left of the diagram. The path to be followed through the branches is decided by the priority of the conditions and their respective actions. A series of decisions is taken as the branches are traversed from left to right. The nodes are the decision junctions; after each decision point there is a next set of decisions to be considered. Therefore, at every node of the tree the represented conditions are considered to determine which condition prevails before moving further along the path.

This decision tree representation is very beneficial to the analyst. The first advantage is that it depicts all the given parameters in a logical format, which simplifies the whole decision process: with all the options clearly specified, there is only a remote chance of committing an error. Second, it alerts the analyst to decisions that can only be taken when two or more conditions hold true together, since some conditions are relevant only if a more basic condition holds true. In our day-to-day life, we often come across complex cases where the most appropriate action under several conditions is not easily apparent, and for such cases a decision tree is a great aid. Hence this representation is very effective in describing business problems involving more than one dimension or parameter.
Decision trees also point out the data required by the decision process. All the data used in decision making should first be described and defined by the analyst so that the system can be designed to produce correct output data.

Consider, for example, the discount policy of a saree manufacturer for his customers. According to the policy, the manufacturer gives discounts based on the type of customer and the size of their order. For an individual, the discount is 50% only if the order size is 12 or more; for fewer than 12 sarees the discount is 30%. For shopkeepers or retailers, the discount policy is different: if the order is less than 12, the discount is 15%; for 13 to 48 sarees, 30%; for 49 to 84 sarees, 40%; and for more than 85 sarees, 50%. The decision policy for the discount percentage can be put in the form of a decision tree, displayed in the following figure.

Decision trees are not always the most appropriate or the best tool for the decision-making process. Representing a very complex system with this tool may lead to a huge number of branches and a similar number of possible paths and options. For a complex problem, analyzing the various situations is very difficult and can confuse the analyst.
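The saree policy maps directly onto nested conditionals, one return per leaf of the tree. The sketch below is illustrative (Python; the function name is hypothetical). Because the text jumps from "49 to 84" to "more than 85", orders of exactly 85 are read here as falling in the top 50% band:

```python
def discount(customer_type, order_size):
    # Root split: customer type.
    if customer_type == "individual":
        return 0.50 if order_size >= 12 else 0.30
    # Shopkeeper / retailer branch: split on order size.
    if order_size < 12:
        return 0.15
    if order_size <= 48:
        return 0.30
    if order_size <= 84:
        return 0.40
    return 0.50

assert discount("individual", 15) == 0.50
assert discount("retailer", 60) == 0.40
```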
Diophantus of Alexandria

Born: about 200
Died: about 284

Diophantus, often known as the 'father of algebra', is best known for his Arithmetica, a work on the solution of algebraic equations and on the theory of numbers. However, essentially nothing is known of his life and there has been much debate regarding the date at which he lived.

There are a few limits which can be put on the dates of Diophantus's life. On the one hand Diophantus quotes the definition of a polygonal number from the work of Hypsicles so he must have written this later than 150 BC. On the other hand Theon of Alexandria, the father of Hypatia, quotes one of Diophantus's definitions so this means that Diophantus wrote no later than 350 AD. However this leaves a span of 500 years, so we have not narrowed down Diophantus's dates a great deal by these pieces of information.

There is another piece of information which was accepted for many years as giving fairly accurate dates. Heath [3] quotes from a letter by Michael Psellus who lived in the last half of the 11th century. Psellus wrote (Heath's translation in [3]):-

Diophantus dealt with [Egyptian arithmetic] more accurately, but the very learned Anatolius collected the most essential parts of the doctrine as stated by Diophantus in a different way and in the most succinct form, dedicating his work to Diophantus.

Psellus also describes in this letter the fact that Diophantus gave different names to powers of the unknown to those given by the Egyptians. This letter was first published by Paul Tannery in [7] and in that work he comments that he believes that Psellus is quoting from a commentary on Diophantus which is now lost and was probably written by Hypatia.
However, the quote given above has been used to date Diophantus using the theory that the Anatolius referred to here is the bishop of Laodicea, who was a writer and teacher of mathematics and lived in the third century. From this it was deduced that Diophantus wrote around 250 AD, and the dates we have given for him are based on this argument. Knorr in [16] criticises this interpretation, however:-

But one immediately suspects something is amiss: it seems peculiar that someone would compile an abridgement of another man's work and then dedicate it to him, while the qualification "in a different way", in itself vacuous, ought to be redundant, in view of the terms "most essential" and "most succinct".

Knorr gives a different translation of the same passage (showing how difficult the study of Greek mathematics is for anyone who is not an expert in classical Greek) which has a remarkably different meaning:-

Diophantus dealt with [Egyptian arithmetic] more accurately, but the very learned Anatolius, having collected the most essential parts of that man's doctrine, to a different Diophantus most succinctly addressed it.

The conclusion of Knorr as to Diophantus's dates is [16]:-

... we must entertain the possibility that Diophantus lived earlier than the third century, possibly even earlier than Heron in the first century.

Most of the details we have of Diophantus's life (and these may be totally fictitious) come from the Greek Anthology, compiled by Metrodorus around 500 AD. This collection of puzzles contains one about Diophantus which says:-

... his boyhood lasted 1/6th of his life; he married after 1/7th more; his beard grew after 1/12th more, and his son was born 5 years later; the son lived to half his father's age, and the father died 4 years after the son.

So he married at the age of 26 and had a son who died at the age of 42, four years before Diophantus himself died aged 84. Based on this information we have given him a life span of 84 years.
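The puzzle reduces to a single linear equation in Diophantus's age x, namely x/6 + x/7 + x/12 + 5 + x/2 + 4 = x, and solving it recovers every number quoted above. A quick check with exact rational arithmetic (an illustrative sketch, not part of the biography):

```python
from fractions import Fraction

# x/6 + x/7 + x/12 + 5 + x/2 + 4 = x
# => x * (1 - 1/6 - 1/7 - 1/12 - 1/2) = 9
coeff = 1 - (Fraction(1, 6) + Fraction(1, 7) + Fraction(1, 12) + Fraction(1, 2))
x = Fraction(9) / coeff

assert x == 84              # died aged 84
assert x / 6 + x / 7 == 26  # married at 26
assert x / 2 == 42          # the son lived 42 years
```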
The Arithmetica is a collection of 130 problems giving numerical solutions of determinate equations (those with a unique solution), and indeterminate equations. The method for solving the latter is now known as Diophantine analysis. Only six of the original 13 books were thought to have survived and it was also thought that the others must have been lost quite soon after they were written. There are many Arabic translations, for example by Abu'l-Wafa, but only material from these six books appeared. Heath writes in [4] in 1920:-

The missing books were evidently lost at a very early date. Paul Tannery suggests that Hypatia's commentary extended only to the first six books, and that she left untouched the remaining seven, which, partly as a consequence, were first forgotten and then lost.

However, an Arabic manuscript in the library Astan-i Quds (The Holy Shrine library) in Meshed, Iran has a title claiming it is a translation by Qusta ibn Luqa, who died in 912, of Books IV to VII of Arithmetica by Diophantus of Alexandria. F Sezgin made this remarkable discovery in 1968. In [19] and [20] Rashed compares the four books in this Arabic translation with the known six Greek books and claims that this text is a translation of the lost books of Diophantus. Rozenfeld, in reviewing these two articles, is, however, not completely convinced:-

The reviewer, familiar with the Arabic text of this manuscript, does not doubt that this manuscript is the translation from the Greek text written in Alexandria but the great difference between the Greek books of Diophantus's Arithmetic combining questions of algebra with deep questions of the theory of numbers and these books containing only algebraic material make it very probable that this text was written not by Diophantus but by some one of his commentators (perhaps Hypatia?).

It is time to take a look at this most outstanding work on algebra in Greek mathematics.
The work considers the solution of many problems concerning linear and quadratic equations, but considers only positive rational solutions to these problems. Equations which would lead to solutions which are negative or irrational square roots, Diophantus considers as useless. To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a meaningless answer. In other words how could a problem lead to the solution -4 books? There is no evidence to suggest that Diophantus realised that a quadratic equation could have two solutions. However, the fact that he was always satisfied with a rational solution and did not require a whole number is more sophisticated than we might realise today. Diophantus looked at three types of quadratic equations ax^2 + bx = c, ax^2 = bx + c and ax^2 + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. There are, however, many other types of problems considered by Diophantus. He solved problems such as pairs of simultaneous quadratic equations. Consider y + z = 10, yz = 9. Diophantus would solve this by creating a single quadratic equation in x. Put 2x = y - z so, adding y + z = 10 and y - z = 2x, we have y = 5 + x, then subtracting them gives z = 5 - x. Now 9 = yz = (5 + x)(5 - x) = 25 - x^2, so x^2 = 16, x = 4 leading to y = 9, z = 1. In Book III, Diophantus solves problems of finding values which make two linear expressions simultaneously into squares. For example he shows how to find x to make 10x + 9 and 5x + 4 both squares (he finds x = 28). Other problems seek a value for x such that particular types of polynomials in x up to degree 6 are squares. For example he solves the problem of finding x such that x^3 - 3x^2 + 3x + 1 is a square in Book VI. 
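Diophantus's substitution for the pair y + z = 10, yz = 9 is easy to verify in code. The sketch below (illustrative, not from the article) follows his steps exactly:

```python
from math import isqrt

s, p = 10, 9              # given sum and product
half = s // 2             # put 2x = y - z, so y = half + x and z = half - x
x2 = half * half - p      # then yz = half^2 - x^2 = p, so x^2 = 25 - 9 = 16
x = isqrt(x2)
y, z = half + x, half - x

assert (x, y, z) == (4, 9, 1)
assert y + z == s and y * z == p
```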
Again in Book VI he solves problems such as finding x such that simultaneously 4x + 2 is a cube and 2x + 1 is a square (for which he easily finds the answer x = 3/2). Another type of problem which Diophantus studies, this time in Book IV, is to find powers between given limits. For example to find a square between 5/4 and 2 he multiplies both by 64, spots the square 100 between 80 and 128, so obtaining the solution 25/16 to the original problem. In Book V he solves problems such as writing 13 as the sum of two squares each greater than 6 (and he gives the solutions 66049/10201 and 66564/10201). He also writes 10 as the sum of three squares each greater than 3, finding the three squares 1745041/505521, 1651225/505521, 1658944/505521. Heath looks at number theory results of which Diophantus was clearly aware, yet it is unclear whether he had a proof. Of course these results may have been proved in other books written by Diophantus or he may have felt they were "obviously" true due to his experimental evidence. Among such results are [4]:-

... no number of the form 4n + 3 or 4n - 1 can be the sum of two squares;

... a number of the form 24n + 7 cannot be the sum of three squares.

Diophantus also appears to know that every number can be written as the sum of four squares. If indeed he did know this result it would be truly remarkable for even Fermat, who stated the result, failed to provide a proof of it and it was not settled until Lagrange proved it using results due to Euler.
Since an abbreviation is also employed for the word "equals", Diophantus took a fundamental step from verbal algebra towards symbolic algebra.

One thing will be clear from the examples we have quoted, and that is that Diophantus is concerned with particular problems more often than with general methods. The reason for this is that although he made important advances in symbolism, he still lacked the necessary notation to express more general methods. For instance he only had notation for one unknown and, when problems involved more than a single unknown, Diophantus was reduced to expressing "first unknown", "second unknown", etc. in words. He also lacked a symbol for a general number n. Where we would write (12 + 6n)/(n^2 - 3), Diophantus has to write in words:-

... a sixfold number increased by twelve, which is divided by the difference by which the square of the number exceeds three.

Despite the improved notation and symbolism that Diophantus introduced, algebra had a long way to go before really general problems could be written down and solved succinctly.

Fragments of another of Diophantus's books, On polygonal numbers, a topic of great interest to Pythagoras and his followers, have survived. In [1] it is stated that this work contains:-

... little that is original, [and] is immediately differentiated from the Arithmetica by its use of geometric proofs.

Diophantus himself refers to another work which consists of a collection of lemmas called The Porisms, but this book is entirely lost. We do know three lemmas contained in The Porisms since Diophantus refers to them in the Arithmetica. One such lemma is that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers, i.e. given any numbers a, b then there exist numbers c, d such that a^3 - b^3 = c^3 + d^3.
Another extant work, Preliminaries to the geometric elements, which has been attributed to Heron, has been studied recently in [16], where it is suggested that the attribution to Heron is incorrect and that the work is due to Diophantus. The author of the article [14] thinks that he may have identified yet another work by Diophantus. He writes:-

We conjecture the existence of a lost theoretical treatise of Diophantus, entitled "Teaching of the elements of arithmetic". Our claims are based on a scholium of an anonymous Byzantine.

European mathematicians did not learn of the gems in Diophantus's Arithmetica until Regiomontanus wrote in 1463:-

No one has yet translated from the Greek into Latin the thirteen Books of Diophantus, in which the very flower of the whole of arithmetic lies hid...

Bombelli translated much of the work in 1570 but it was never published. Bombelli did borrow many of Diophantus's problems for his own Algebra. The most famous Latin translation of Diophantus's Arithmetica is due to Bachet in 1621 and it is that edition which Fermat studied. Certainly Fermat was inspired by this work which has become famous in recent years due to its connection with Fermat's Last Theorem.

We began this article with the remark that Diophantus is often regarded as the 'father of algebra' but there is no doubt that many of the methods for solving linear and quadratic equations go back to Babylonian mathematics. For this reason Vogel writes [1]:-

... Diophantus was not, as he has often been called, the father of algebra. Nevertheless, his remarkable, if unsystematic, collection of indeterminate problems is a singular achievement that was not fully appreciated and further developed until much later.

Article by: J J O'Connor and E F Robertson
Additional material in MacTutor: 1. the title page from the translation by Bachet of Arithmetica (1670); 2. another page showing the transcription of Fermat's marginal note. Honours awarded to Diophantus: the lunar features Crater Diophantus and Rima Diophantus; Popular biographies list, Number 62. JOC/EFR © February 1999, School of Mathematics and Statistics, University of St Andrews, Scotland
The encyclopedic entry for linear combination

In mathematics, linear combinations are a concept central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalisations given at the end of the article.

Suppose that K is a field and V is a vector space over K. As usual, we call elements of V vectors and call elements of K scalars. If v[1],...,v[n] are vectors and a[1],...,a[n] are scalars, then the linear combination of those vectors with those scalars as coefficients is
$a_1 v_1 + a_2 v_2 + a_3 v_3 + \cdots + a_n v_n$
In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v[1],...,v[n], with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K). Note that by definition, a linear combination involves only finitely many vectors (except as described in Generalisations below). However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors. Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V.

Examples and counterexamples

Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R^3. Consider the vectors e[1] := (1,0,0), e[2] := (0,1,0) and e[3] = (0,0,1). Then any vector in R^3 is a linear combination of e[1], e[2] and e[3].
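The definition above, including the n = 0 convention, can be written out directly. A minimal Python sketch (the function name and the `dim` parameter are my own, introduced only for illustration; the entry itself contains no code):

```python
def linear_combination(scalars, vectors, dim=None):
    """a1*v1 + ... + an*vn for vectors given as equal-length tuples of numbers.

    By convention the empty combination (n = 0) is the zero vector; its
    dimension must then be supplied via `dim`.
    """
    if len(scalars) != len(vectors):
        raise ValueError("need exactly one coefficient per vector")
    if not vectors:
        return (0.0,) * dim
    result = [0.0] * len(vectors[0])
    for a, v in zip(scalars, vectors):
        for i, x in enumerate(v):
            result[i] += a * x
    return tuple(result)

# 2*(1,0) + (-1)*(0,1) = (2, -1)
assert linear_combination([2, -1], [(1, 0), (0, 1)]) == (2.0, -1.0)
# the empty linear combination is the zero vector, here in R^3
assert linear_combination([], [], dim=3) == (0.0, 0.0, 0.0)
```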
To see that this is so, take an arbitrary vector (a[1],a[2],a[3]) in R^3, and write:
$\left(a_1 , a_2 , a_3\right) = \left(a_1 ,0,0\right) + \left(0, a_2 ,0\right) + \left(0,0, a_3\right)$
$= a_1 \left(1,0,0\right) + a_2 \left(0,1,0\right) + a_3 \left(0,0,1\right)$
$= a_1 e_1 + a_2 e_2 + a_3 e_3$

Let K be the set C of all complex numbers, and let V be the set C[C](R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := e^it and g(t) := e^−it. (Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.) Some linear combinations of f and g are:
• $\cos t = \frac{1}{2} e^{i t} + \frac{1}{2} e^{-i t}$
• $2 \sin t = \left(-i\right) e^{i t} + \left(i\right) e^{-i t}$

On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of e^it and e^−it. This means that there would exist complex scalars a and b such that ae^it + be^−it = 3 for all real numbers t. Setting t = 0 and t = π gives the equations a + b = 3 and a + b = −3, and clearly this cannot happen.

Let K be any field (R, C, or whatever you like best), and let V be the set P of all polynomials with coefficients taken from the field K. Consider the vectors (polynomials) p[1] := 1, p[2] := x + 1, and p[3] := x^2 + x + 1. Is the polynomial x^2 − 1 a linear combination of p[1], p[2], and p[3]? To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x^2 − 1.
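The basis decomposition, and the expressions of cos t and 2 sin t as linear combinations of e^{it} and e^{-it}, are easy to spot-check numerically. A small sketch (Python with NumPy is an assumption; the entry itself contains no code):

```python
import numpy as np

# Standard basis of R^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

# An arbitrary vector (a1, a2, a3) is the linear combination a1*e1 + a2*e2 + a3*e3.
a1, a2, a3 = 2.0, -5.0, 0.5
v = np.array([a1, a2, a3])
assert np.allclose(v, a1 * e1 + a2 * e2 + a3 * e3)

# The function example, checked at several sample points t:
# cos t = (1/2) e^{it} + (1/2) e^{-it}  and  2 sin t = (-i) e^{it} + i e^{-it}.
t = np.linspace(-2.0, 2.0, 9)
f = np.exp(1j * t)
g = np.exp(-1j * t)
assert np.allclose(np.cos(t), 0.5 * f + 0.5 * g)
assert np.allclose(2 * np.sin(t), -1j * f + 1j * g)
```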
Picking arbitrary coefficients a[1], a[2], and a[3], we want
$a_1 \left(1\right) + a_2 \left(x + 1\right) + a_3 \left(x^2 + x + 1\right) = x^2 - 1$
Multiplying the polynomials out, this means
$\left(a_1 \right) + \left(a_2 x + a_2\right) + \left(a_3 x^2 + a_3 x + a_3\right) = x^2 - 1$
and collecting like powers of x, we get
$a_3 x^2 + \left(a_2 + a_3 \right) x + \left(a_1 + a_2 + a_3 \right) = 1 x^2 + 0 x + \left(-1\right)$
Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude
$a_3 = 1, \quad a_2 + a_3 = 0, \quad a_1 + a_2 + a_3 = -1$
This system of linear equations can easily be solved. First, the first equation simply says that a[3] is 1. Knowing that, we can solve the second equation for a[2], which comes out to −1. Finally, the last equation tells us that a[1] is also −1. Therefore, the only possible way to get a linear combination is with these coefficients. Indeed,
$x^2 - 1 = -1 - \left(x + 1\right) + \left(x^2 + x + 1\right) = - p_1 - p_2 + p_3$
so x^2 − 1 is a linear combination of p[1], p[2], and p[3]. On the other hand, what about the polynomial x^3 − 1? If we try to make this vector a linear combination of p[1], p[2], and p[3], then following the same process as before, we'll get the equation
$0 x^3 + a_3 x^2 + \left(a_2 + a_3 \right) x + \left(a_1 + a_2 + a_3 \right)$
$= 1 x^3 + 0 x^2 + 0 x + \left(-1\right)$
However, when we set corresponding coefficients equal in this case, the equation for x^3 is
$0 = 1$
which is always false. Therefore, there is no way for this to work, and x^3 − 1 is not a linear combination of p[1], p[2], and p[3]. The linear span Main article: linear span Take an arbitrary field K, an arbitrary vector space V, and let v[1],...,v[n] be vectors (in V). It’s interesting to consider the set of all linear combinations of these vectors. This set is called the linear span (or just span) of the vectors, say S = {v[1],...,v[n]}.
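The coefficient-matching argument above is exactly a small linear system, so it can be handed to a linear solver. A sketch (Python/NumPy assumed, not part of the original entry):

```python
import numpy as np

# Columns are p1 = 1, p2 = x + 1, p3 = x^2 + x + 1, written in the monomial
# basis (1, x, x^2): row 0 holds constant terms, row 1 the x coefficients,
# row 2 the x^2 coefficients.
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Target x^2 - 1 in the same basis.
target = np.array([-1.0, 0.0, 1.0])
coeffs = np.linalg.solve(P, target)
assert np.allclose(coeffs, [-1.0, -1.0, 1.0])  # (a1, a2, a3) = (-1, -1, 1)

# For x^3 - 1 we must extend the basis with x^3; none of p1, p2, p3 has an
# x^3 term, so the least-squares residual cannot be driven to zero.
P4 = np.vstack([P, np.zeros(3)])           # extra all-zero row for x^3
target4 = np.array([-1.0, 0.0, 0.0, 1.0])  # x^3 - 1
_, residual, _, _ = np.linalg.lstsq(P4, target4, rcond=None)
assert residual[0] > 0.5  # nonzero residual: x^3 - 1 is not in the span
```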
We write the span of S as span(S) or sp(S):
$\mathrm{Sp}\left(v_1 ,\ldots, v_n\right) := \left\{ a_1 v_1 + \cdots + a_n v_n : a_1 ,\ldots, a_n \in K \right\}$

Other related concepts

Sometimes, some single vector can be written in two different ways as a linear combination of v[1],...,v[n]. If that is possible, then v[1],...,v[n] are called linearly dependent; otherwise, they are linearly independent. Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors. If S is linearly independent and the span of S equals V, then S is a basis for V.

We can think of linear combinations as the most general sort of operation on a vector space. The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination. Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces. Another related concept is the affine combination, which is a linear combination with the additional constraint that the coefficients a[1],...,a[n] sum to unity.

If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V. For example, we might be able to speak of a[1]v[1] + a[2]v[2] + ..., going on forever. Such infinite linear combinations do not always make sense; we call them convergent when they do. Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis. The articles on the various flavours of topological vector spaces go into more detail about these.

If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalises to this case without change. The only difference is that we call spaces like V modules instead of vector spaces.
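For finitely many vectors in K^m, linear dependence can be tested mechanically: stack the vectors as columns and compare the matrix rank with the number of vectors. A sketch (Python/NumPy assumed; the helper name is my own):

```python
import numpy as np

def linearly_independent(*vectors):
    """True iff the given equal-length vectors are linearly independent."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# The standard basis vectors of R^3 are independent...
assert linearly_independent([1, 0, 0], [0, 1, 0], [0, 0, 1])

# ...but (1,1,0) = (1,0,0) + (0,1,0) creates a dependence, i.e. one vector
# is a linear combination of the others, so the rank drops below 3.
assert not linearly_independent([1, 0, 0], [0, 1, 0], [1, 1, 0])
```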
If K is a noncommutative ring, then the concept still generalises, with one caveat: since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whatever is appropriate for the given module. This is simply a matter of doing scalar multiplication on the correct side. A more complicated twist comes when V is a bimodule over two rings, K[L] and K[R]. In that case, the most general linear combination looks like
$a_1 v_1 b_1 + \cdots + a_n v_n b_n$
where the a[i] belong to K[L], the b[i] belong to K[R], and the v[i] belong to V.
[Numpy-discussion] Numpy Array of dtype=object with strings and floats question
Darryl Wallace darryl.wallace@prosensus... Tue Nov 10 12:09:30 CST 2009

Hello again,

The best way so far that's come to my attention is to use:

The problem with this is that it's looking for a specific instance of an object. So if the user had some elements of their array that were, for example, "randomString", then it would not be picked up:

from numpy import *
mixedArray = array([1, 2, '', 3, 4, 'randomString'], dtype=object)
mixedArrayMask = ma.masked_object(mixedArray, 'randomString').mask

then mixedArrayMask will yield:

array([ False, False, False, False, False, True])

Can anyone help me so that all strings are found in the array without having to explicitly loop through them in Python?

On Fri, Nov 6, 2009 at 3:56 PM, Darryl Wallace wrote:
> What I'm doing is importing some data from excel and sometimes there are
> strings in the worksheet. Often times a user will use an empty cell or a
> string to represent data that is missing.
> e.g.
> from numpy import *
> mixedArray=array([1, 2, '', 3, 4, 'String'], dtype=object)
> Two questions:
> 1) Is there a quick way to find the elements in the array that are the
> strings without iterating over each element in the array?
> or
> 2) Could I quickly turn it into a masked array of type float where all
> string elements are set as missing points?
> I've been struggling with this for a while and can't come across a method
> that will allow me to do it without iterating over each element.
> Any help or pointers in the right direction would be greatly appreciated.
> Thanks,
> Darryl

Darryl Wallace: Project Leader
ProSensus Inc.
McMaster Innovation Park, 175 Longwood Road South, Suite 301, Hamilton, Ontario, L8P 0A1 Canada (GMT -05:00) Tel: 1-905-528-9136 Fax: 1-905-546-1372 Web site: http://www.prosensus.ca/
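For what it's worth, here is one way both questions could be answered without an explicit Python-level loop. This is a hypothetical sketch appended here, not a reply from the actual thread, and note that `np.frompyfunc` still calls the predicate once per element internally:

```python
import numpy as np

mixed = np.array([1, 2, '', 3, 4, 'randomString'], dtype=object)

# 1) Boolean mask of ALL string elements (not just one specific value),
#    built with a vectorized predicate rather than a for-loop.
is_str = np.frompyfunc(lambda v: isinstance(v, str), 1, 1)(mixed).astype(bool)
assert is_str.tolist() == [False, False, True, False, False, True]

# 2) Masked float array: every string position becomes a masked missing value.
values = np.where(is_str, 0, mixed).astype(float)  # placeholder 0 under the mask
masked = np.ma.masked_array(values, mask=is_str)
assert masked.compressed().tolist() == [1.0, 2.0, 3.0, 4.0]
```

Unlike `ma.masked_object(mixedArray, 'randomString')`, the mask here catches the empty string as well, since it tests the element's type rather than equality with one particular object.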
Positive martingale representation with jumps

I am looking for a martingale representation theorem for positive semimartingales, using the answer to this question: Martingale representation theorem for Levy processes. My best guess is (subject to integrability conditions, in one dimension for simplicity):
$$ M_t = M_0 + \int_0^t M_s v_s dW_s + \int_0^t \int_R M_s u_s(x) \tilde{N}(ds, dx)$$
where $\tilde{N}(ds, dx)$ is the compensated measure of the underlying Lévy process, but as I said it's just a guess. Is it correct? Do I need any conditions for $u_s(x)$?

Tags: pr.probability stochastic-calculus

Comment: Hi, what do you mean exactly by "martingale representation theorem for positive semimartingales"? What is the filtration with respect to which you have martingales? If it is a filtration generated by a semimartingale with jumps, you can take a look at theorem 204 p.177 of the book Theory of SDEs with jumps and applications, by Rong Situ. There's also in the same book the martingale representation property with respect to a (Brownian and Poisson point process)-filtration on p.68. – user20368 Jan 5 '12 at 10:49

1 Answer

The correct answer seems to be exponential martingales. E.g. for Lévy exponential martingales we have the representation:
$$ M_t = M_0 + \int_0^t M_s v_s dW_s + \int_0^t \int_R M_s (e^x-1) \tilde{N}(ds, dx) $$
Math Forum Discussions - Discussion: Do US Math Teachers Suck?
Date: Mar 23, 2012 4:31 PM
Author: Ken Abbott
Subject: Discussion: Do US Math Teachers Suck?

Does America lead the world in math education? Nope. The US is being kicked in the groin by China and most other countries. The USA is a joke in Math. I've travelled around and hear the jokes. Why is this? Is it because US Math teachers suck so bad? Or is it because US math students suck so bad? It has to be one or the other, because the US in Math is a world joke.
Results 1 - 10 of 188 - In Proceedings International Conference on Machine Learning , 1997 "... Abstract. One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show ..." Cited by 721 (52 self) Add to MetaCart Abstract. One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik’s support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition. 1 - Machine Learning , 1997 "... We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queri ..." Cited by 336 (7 self) Add to MetaCart We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. 
We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of , 1997 "... Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoy ..." Cited by 208 (1 self) Add to MetaCart Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Giné, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or "agnostic") framework. Furthermore, we show a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is suppl ..." Cited by 143 (0 self) Add to MetaCart A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Exp... - Journal of Computer and System Sciences , 1998 "... For several NP-complete problems, there have been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of sub-exponential algorithms for these problems. We introduce a generalized reduction which we call Sub-Exponential Reduction Family (SERF) t ..." Cited by 128 (5 self) Add to MetaCart For several NP-complete problems, there have been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of sub-exponential algorithms for these problems. 
We introduce a generalized reduction which we call Sub-Exponential Reduction Family (SERF) that preserves sub-exponential complexity. We show that CircuitSAT is SERF-complete for all NP-search problems, and that for any fixed k, k-SAT, k-Colorability, k-Set Cover, Independent Set, Clique, Vertex Cover, are SERF-complete for the class SNP of search problems expressible by second order existential formulas whose first order part is universal. In particular, sub-exponential complexity for any one of the above problems implies the same for all others. We also look at the issue of proving strongly exponential lower bounds for AC^0; that is, bounds of the form 2^{Ω(n)}. This problem is even open for depth-3 circuits. In fact, such a bound for depth-3 circuits with even l... - Combinatorica , 1990 "... ..." - Machine Learning , 1994 "... In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that depend on properties of both the prior distribution over concepts and the sequence of instances seen by the l ..." Cited by 108 (12 self) Add to MetaCart In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that depend on properties of both the prior distribution over concepts and the sequence of instances seen by the learner, and to smoothly unite in a common framework the popular statistical physics and VC dimension theories of learning curves. To achieve this, we undertake a systematic investigation and comparison of two fundamental quantities in learning and information theory: the probability of an incorrect prediction for an optimal learning algorithm, and the Shannon information gain. This study leads to a new understanding of the sample complexity of learning in several existing models.
1 Introduction Consider a simple concept learning model in which the learner attempts to infer an unknown target concept f, chosen from a known concept class F of {0,1}-valued functions over an instance space X.... - Proceedings of the 48th Annual Symposium on Foundations of Computer Science , 2007 "... We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differen ..." Cited by 103 (3 self) Add to MetaCart We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy [15, 14], in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero. 1 , 1992 "...
We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^{O(d)}, which is better than for previously known such algorithms. We s ..." Cited by 94 (11 self) Add to MetaCart We show that with recently developed derandomization techniques, one can convert Clarkson's randomized algorithm for linear programming in fixed dimension into a linear-time deterministic one. The constant of proportionality is d^{O(d)}, which is better than for previously known such algorithms. We show that the algorithm works in a fairly general abstract setting, which allows us to solve various other problems (such as finding the maximum volume ellipsoid inscribed into the intersection of n halfspaces) in linear time.
Pushforward of injective sheaf acyclic for cohomology with supports

Let $\pi: X \to Y$ be a morphism of sites (I need this for the étale topology) and $Z \hookrightarrow Y$ be a closed subscheme. Since an injective sheaf is flabby (in the sense that $H^i(U,F) = 0$, $i > 0$ for all $U$ of a site), and the pushforward of a flabby sheaf is flabby, it follows from the long exact localisation sequence that $H^i_Z(Y,\pi_*F) = 0$ for $i > 1$. Now my question is: Does this also hold for $i = 1$? (Equivalently, is $(\pi_*F)(Y) \to (\pi_*F)(U)$ surjective for all $U$ and $F$ injective? For the Zariski topology, this can be proved as follows: Let $U \hookrightarrow X$ be open, $F$ an injective sheaf on $X$. Then one has an exact sequence $0 \to j_!\mathcal{O}_U \to \mathcal{O}_X$ and $F(X) = \mathrm{Hom}(\mathcal{O}_X, F) \to \mathrm{Hom}(j_!\mathcal{O}_U, F) = \mathrm{Hom}(\mathcal{O}_U, F|_U) = F(U)$. Here, the middle arrow is surjective.)

Comment: It seems to me that the étale case should reduce to the Zariski case, because the pushforward from the étale site to the Zariski site is also flabby. Am I missing something? – Angelo Apr 20 '13 at 9:35

Comment: A variation on Angelo's question: where does the argument in the last paragraph use the Zariski topology? All you need is that $\pi_* F(Y) \to \pi_* F(Y - Z)$ is surjective, which is the same as asking that $F(X) \to F(X - f^{-1} Z)$ be surjective, which is what the last paragraph shows. – anon Apr 20 '13 at 17:12

Thanks to both of you. – Timo Keller Apr 22 '13 at 14:59
Log/Exponential problem

April 18th 2010, 08:24 PM
$\frac{e^x-e^{-x}}{2}=10$
$x = \ln (10 + \sqrt{101})$
What I'm doing (More than likely wrong)
$\frac{e^x - e}{2^x} = 10$
$e^x-e=10(2^x)$
$e^x-e=20^x$

April 18th 2010, 08:33 PM
According to the answer, the given problem is wrong. The equation should be equated to 10 rather than 7.

April 18th 2010, 08:37 PM
This answer is wrong, assuming that you have managed to write down the correct equation that you were asked to solve.
Quote:
What I'm doing (More than likely wrong)
$\frac{e^x - {\color{blue}e}}{{\color{red}2^x}} = 7$
The above equation is very different from the one that you were given.
Quote:
$e^x-e=7(2^x)$
$e^x-e=14^x$
That last step is not valid. More generally, you have that in general $a\cdot b^n \neq (a\cdot b)^n$.

April 18th 2010, 08:58 PM
{"url":"http://mathhelpforum.com/algebra/139993-log-exponential-problem-print.html","timestamp":"2014-04-18T21:44:37Z","content_type":null,"content_length":"11827","record_id":"<urn:uuid:ebe74043-730a-43cf-8caf-a11f9fb2cbfc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Help finding the domain of a multivariable function November 10th 2008, 08:01 PM #1 Mar 2008 Help finding the domain of a multivariable function Find the domain of the function f(x,y) = arcsin(x^2+y^2-2) From what I've seen in an example problem arcsin(x^2+y^2-2) must be greater than 0 (not sure why this is) but I don't know for what values arcsin > 0 since it's a trig function that has positive and negative values...if someone could help, it would be greatly appreciated. Find the domain of the function f(x,y) = arcsin(x^2+y^2-2) From what I've seen in an example problem arcsin(x^2+y^2-2) must be greater than 0 (not sure why this is) but I don't know for what values arcsin > 0 since it's a trig function that has positive and negative values...if someone could help, it would be greatly appreciated. You require $-1 \leq x^2 + y^2 - 2 \leq 1 \Rightarrow 1 \leq x^2 + y^2 \leq 3$, which is the area between the circles $x^2 + y^2 = 1$ and $x^2 + y^2 = 3$ November 10th 2008, 08:27 PM #2
{"url":"http://mathhelpforum.com/calculus/58862-help-finding-domain-multivariable-function.html","timestamp":"2014-04-20T09:21:16Z","content_type":null,"content_length":"34596","record_id":"<urn:uuid:a0589f8d-f5c0-4658-8c42-3cb432c91e42>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Clinton, MD Algebra 2 Tutor Find a Clinton, MD Algebra 2 Tutor ...As a professor at the U. S. Naval Academy, I am well suited as a mentor for anyone considering a military future. 15 Subjects: including algebra 2, chemistry, physics, calculus ...I have been an accounting tutor for over 7 years and have helped both accounting and non-accounting students to understand both the basic and complex processes of accounting using simple, common sense methodologies. Accounting is an unusual subject for most people but I find that the problem mos... 28 Subjects: including algebra 2, English, physics, writing ...I minored in economics and went on to study it further in graduate school. My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional economist. 16 Subjects: including algebra 2, calculus, statistics, geometry ...Throughout high school and college I performed extremely well in Geometry. I have tutored pre algebra for the past 3 years. I'm also a certified math teacher in the state of Maryland. 20 Subjects: including algebra 2, reading, calculus, geometry I am a graduate of Duke University with a B.S. in Psychology & George Washington University with an M.S. in Forensic Science. I have spent time working in public policy on issues dealing with bioethics-stem cell research and cloning, medical research, and am currently an instructor for a museum in Washington DC. In the latter position, I routinely work with and teach school groups of all ages. 
21 Subjects: including algebra 2, chemistry, physics, geometry
{"url":"http://www.purplemath.com/Clinton_MD_Algebra_2_tutors.php","timestamp":"2014-04-17T11:12:03Z","content_type":null,"content_length":"24030","record_id":"<urn:uuid:e440373e-88be-4e38-b5af-3e6725e6e4d5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 2010 [00718] [Date Index] [Thread Index] [Author Index]

Re: FindRoot with NDSolve inside of it doesn't work
• To: mathgroup at smc.vnet.net
• Subject: [mg111363] Re: FindRoot with NDSolve inside of it doesn't work
• From: Leonid Shifrin <lshifr at gmail.com>
• Date: Thu, 29 Jul 2010 06:42:14 -0400 (EDT)

The difference is that in your last example with exponents the r.h.s. of the definition does not necessarily involve numerical steps, while when you invoke NDSolve it does. So, for functions of the former type, you can get away with generic patterns, while for functions of the latter type you can not. This is related to the order in which expressions are evaluated. Inside FindRoot there is f[x0, y0] == {2.7, 5.4}. When this is evaluated, first f[x0, y0] is evaluated. If you have generic patterns x_, y_ in the definition of <f>, then the definition for <f> happily applies at this moment, while x0 and y0 are still symbolic. This is wrong since NDSolve then can not do its job and returns back the set of equations. To postpone the evaluation of <f> until numeric values are supplied by FindRoot in place of x0, y0, we restrict the pattern. Then, evaluation of f[x0, y0] == {2.7, 5.4} just yields back f[x0, y0] == {2.7, 5.4}, since the pattern is not matched by symbolic x0, y0, and then later x0 and y0 are bound to (replaced by) the numeric values generated by FindRoot, which results in correct behavior.

Cases where one needs to use this trick are interesting, since the interactions between built-in functions (FindRoot here) and user-defined ones (f here) mediated by this trick are rather non-trivial. In some sense, we go a little under the hood of the built-in commands here. However they work, we know for sure that they are bound to call the Mathematica evaluator on <f> at some point, since <f> was defined at top level. And that means that we have all the usual possibilities to alter that part of evaluation that the evaluator normally gives us.
Now, since FindRoot holds all its arguments, one may ask why it does not bind equations to numerical arguments right away. My guess is that sometimes it attempts to do some symbolic preprocessing with the equations, a possibility which would be ruled out in the latter scenario. FindRoot does not really care which type the equation is, symbolic or numeric or mixed, but the real problem was in the way you defined <f> - on some (say symbolic) arguments it can produce nonsense, and that means a flawed design of your function. If it makes sense to call it only on numeric arguments (like in your original case), this should be reflected in its definition. In all other cases, it should return unevaluated (this is a general rule for functions in Mathematica), and this is what the pattern-based definition semantics gives you.

Hope this helps.

On Wed, Jul 28, 2010 at 9:17 PM, Sam Takoy <sam.takoy at yahoo.com> wrote:
> Hi,
> Thanks for the response.
> I have determined that introducing ?NumericQ alone works and I am wondering why?
> How is the "f" that involves NDSolve fundamentally different from
> f[x0_, y0_] := {x0 Exp[1], y0 Exp[1]}
> ?
> Thanks again,
> Sam
> ------------------------------
> *From:* Leonid Shifrin <lshifr at gmail.com>
> *To:* Sam Takoy <sam.takoy at yahoo.com>; mathgroup at smc.vnet.net
> *Sent:* Wed, July 28, 2010 6:03:47 AM
> *Subject:* Re: [mg111343] FindRoot with NDSolve inside of it doesn't work
> Sam,
> This modification will work:
> In[31]:=
> Clear[f];
> f[x0_?NumericQ, y0_?NumericQ] :=
>   Module[{sol, x, y},
>     (sol = NDSolve[{x'[t] == x[t], y'[t] == y[t], x[0] == x0, y[0] == y0}, {x, y}, {t, 0, 1}];
>      {x[1], y[1]} /. sol[[1]])]
> In[33]:=
> (* f[x0_, y0_] := {x0 Exp[1], y0 Exp[1]}; (* Works for this f *) *)
> FindRoot[f[x0, y0] == {2.7, 5.4}, {{x0, 1}, {y0, 2}}]
> Out[33]= {x0 -> 0.993274, y0 -> 1.98655}
> Apart from localizing your variables, the main thing here was to restrict x0 and y0 as input parameters to <f>, to only numeric values, by appropriate patterns.
> Regards,
> Leonid
> On Wed, Jul 28, 2010 at 10:54 AM, Sam Takoy <sam.takoy at yahoo.com> wrote:
>> Hi,
>> I think it's completely self-explanatory what I'm trying to do in this model example:
>> f[x0_, y0_] := (
>>   sol = NDSolve[{x'[t] == x[t], y'[t] == y[t], x[0] == x0, y[0] == y0}, {x, y}, {t, 0, 1}];
>>   {x[1], y[1]} /. sol[[1]]
>> )
>> (* f[x0_, y0_] := {x0 Exp[1], y0 Exp[1]}; (* Works for this f *) *)
>> FindRoot[f[x0, y0] == {2.7, 5.4}, {{x0, 1}, {y0, 2}}]
>> Can someone please help me fix this and explain why it's not currently working?
>> Many thanks in advance!
>> Sam
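For comparison, here is my own sketch (not from the thread) of the same computation in a purely numerical setting, Python with SciPy, where the issue does not arise because the user function is only ever called with numbers; the names and tolerances are mine:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def f(x0, y0):
    # Integrate x'(t) = x(t), y'(t) = y(t) from t = 0 to t = 1,
    # starting at (x0, y0); the exact result is (x0*e, y0*e).
    sol = solve_ivp(lambda t, u: u, (0.0, 1.0), [x0, y0], rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Solve f(x0, y0) == (2.7, 5.4) for (x0, y0), starting from (1, 2).
root = fsolve(lambda u: f(u[0], u[1]) - np.array([2.7, 5.4]), [1.0, 2.0])
print(root)  # approximately [0.993274, 1.986548], i.e. (2.7/e, 5.4/e)
```

The Mathematica trick above is needed precisely because its evaluator may call f symbolically; here the solver supplies floats from the start.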
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Jul/msg00718.html","timestamp":"2014-04-17T21:34:30Z","content_type":null,"content_length":"29792","record_id":"<urn:uuid:25b5e166-20f0-4b5b-ac0f-9ae90211de9a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
On my brother's wall is a map of Earth with the South Pole at the top. It uses an equal-area projection to show the true relative sizes of, say, Africa and Greenland. It aims to make a point about the more common North-up, Mercator projection. Mercator projection maps enlarge areas near the poles so that shapes are preserved. I like the sentiment but I think switching to an equal area projection and printing it upside-down is a little unimaginative. So I created this:

This is a map of Earth created using a Mercator projection, but with the magnified poles moved to the Atlantic and Pacific oceans. I created it using the Blue Marble map from Wikipedia. The dashed lines are the conventional latitude and longitude lines. You get a much better feel for the shape of Antarctica than normal Mercator maps give you, instead exaggerating Kamchatka. Because the Mercator projection doesn't need to make Africa small and Greenland big: it can do anything you want it to. So for example, here is my little rebellion against Mercator's underplaying of Africa's troubles: a Mercator map in which the continent is infinite in area:

I've cropped the image here. In principle a Mercator projection can be continued infinitely in the vertical direction, and in this case the 'north' pole is in Africa, so the map would be Africa all the way up. The level of detail would, source image notwithstanding, get bigger and bigger until eventually sub-atomic particles started to appear. Theoretically, you could exploit this to produce a map where Britain opened out as Africa has at the top, and extend the map up to include a road map of England, including a large-scale street map of Manchester, eventually opening out to provide a floor-plan of one particular building, then room, and eventually the layout of one table. This, however, seems like it would be very difficult so I haven't bothered.
Here's another Mercator map, this time with the poles near Australia and Africa again:

And here's one with poles in Asia and South America, creating a world with one central ocean:

Here's the maths in case you want to make some maps yourself. Feel free to stop reading here if you wrongly find maths boring. I haven't worried about sign conventions or being especially rigorous, though. It's just enough to make some nice pictures.

I treated the Earth as a sphere of unit radius centred at the origin. Converting between (x, y) and (latitude, longitude) is fairly trivial if you read Wikipedia, although I did have to kludge the formula a bit for 'negative' (south) latitudes. To reorient the map, I converted the latitudes φ and longitudes λ to three-dimensional Cartesian coordinates using the following formulae:

x = cos φ × cos λ
y = cos φ × sin λ
z = sin φ

I'd planned to rotate these and then convert them back into spherical coordinates, but in the event I found it easier to go directly into rotated spherical coordinates. They're defined by two points, called North Pole N and Greenwich G, each chosen at random from the surface of the sphere, and described by a set of (x, y, z) coordinates. So the new latitude and longitude are given as follows:

φ′ = ½π − cos^−1 P⋅N = ½π − cos^−1 (x x[N] + y y[N] + z z[N])
λ′ = tan^−1 (P⋅G′ ÷ P⋅G″)

where G′ = N × G ÷ |N × G| and G″ = N × G′. Using the cross product this way generates two points on the new equator separated by 90°. In fact the conversion to new-longitude is more complex than this, because you have to mess around with quadrants, but I used the atan2 function to do that for me so I've not bothered working out all the steps. I don't know if this is the best way of doing this but welcome to stream-of-consciousness mathematics.
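Those formulas translate directly into code. Here is my own Python sketch of the conversion (the function names and NumPy details are mine, not the author's, and it inherits the post's cavalier attitude to sign conventions):

```python
import numpy as np

def to_cartesian(lat, lon):
    """Point on the unit sphere from latitude/longitude in radians."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotated_coords(lat, lon, north, greenwich):
    """New (latitude, longitude) given unit vectors for the new pole N and Greenwich G."""
    p = to_cartesian(lat, lon)
    g1 = np.cross(north, greenwich)
    g1 = g1 / np.linalg.norm(g1)            # G' = N x G / |N x G|
    g2 = np.cross(north, g1)                # G'' = N x G'
    new_lat = np.pi / 2 - np.arccos(np.clip(np.dot(p, north), -1.0, 1.0))
    new_lon = np.arctan2(np.dot(p, g1), np.dot(p, g2))  # atan2 sorts out the quadrants
    return new_lat, new_lon

# With the ordinary pole and Greenwich, latitude comes back unchanged
# (longitude picks up a fixed offset from the G'/G'' convention).
print(rotated_coords(0.5, 1.0, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])))
```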
• Mercator Rotator « Panther Red [...] ago, this picture appeared on a site called Apathy [...] • Anonymous Coward This is awesome! I imagine your last map (poles in Asia and South America) would be the one that fish would like to use! • Tyler What happened to Greenland and Antarctica? I mean what's with all the triangles? • Andrew Because the Mercator Projection is theoretically infinite in height, it is always presented cropped. The North and South Poles aren't part of the source image I used, so I simply used the nearest point available. That means any point on the top or bottom edge of the original map got stretched out into the triangles you can see around the poles.
{"url":"http://www.andrewt.net/blog/index.php/posts/fun-with-the-mercator-projection/","timestamp":"2014-04-21T07:04:29Z","content_type":null,"content_length":"11923","record_id":"<urn:uuid:949ac51a-b462-4b40-ae69-ee416655a5f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Lag Plot

Purpose: Check for randomness
A lag plot checks whether a data set or time series is random or not. Random data should not exhibit any identifiable structure in the lag plot. Non-random structure in the lag plot indicates that the underlying data are not random. Several common patterns for lag plots are shown in the examples below.

Sample Plot
This sample lag plot exhibits a linear pattern. This shows that the data are strongly non-random and further suggests that an autoregressive model might be appropriate.

Definition
A lag is a fixed time displacement. For example, given a data set Y[1], Y[2], ..., Y[n], Y[2] and Y[7] have lag 5 since 7 - 2 = 5. Lag plots can be generated for any arbitrary lag, although the most commonly used lag is 1. A plot of lag 1 is a plot of the values of Y[i] versus Y[i-1]:
• Vertical axis: Y[i] for all i
• Horizontal axis: Y[i-1] for all i

Questions
Lag plots can provide answers to the following questions:
1. Are the data random?
2. Is there serial correlation in the data?
3. What is a suitable model for the data?
4. Are there outliers in the data?

Importance
Inasmuch as randomness is an underlying assumption for most statistical estimation and testing techniques, the lag plot should be a routine tool for researchers.

Related Techniques: Autocorrelation Plot, Spectrum, Runs Test

Case Study: The lag plot is demonstrated in the beam deflection data case study.

Software
Lag plots are not directly available in most general purpose statistical software programs. Since the lag plot is essentially a scatter plot with the 2 variables properly lagged, it should be feasible to write a macro for the lag plot in most statistical programs.
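As a quick numerical illustration of the idea (my own sketch, not part of the handbook page): the lag-1 pairs of an autoregressive series are strongly correlated, while those of white noise are not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# AR(1) series: each value depends on the previous one, so a lag-1 plot
# of the pairs (Y[i-1], Y[i]) shows a clear linear pattern.
noise = rng.normal(size=n)
ar = np.empty(n)
ar[0] = noise[0]
for i in range(1, n):
    ar[i] = 0.9 * ar[i - 1] + noise[i]

# White noise: no structure in the lag plot.
white = rng.normal(size=n)

def lag1_corr(y):
    """Correlation of the lag-1 pairs (Y[i-1], Y[i]) plotted in a lag plot."""
    return np.corrcoef(y[:-1], y[1:])[0, 1]

print(lag1_corr(ar))     # close to 0.9: strong non-random structure
print(lag1_corr(white))  # close to 0: consistent with randomness
```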
{"url":"http://www.itl.nist.gov/div898/handbook/eda/section3/lagplot.htm","timestamp":"2014-04-18T05:30:25Z","content_type":null,"content_length":"6467","record_id":"<urn:uuid:cfc09ef0-a3a2-4021-8489-cb6687db9cfa>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there water flow against gravity?

Hi all, I have been with this issue for some time and think I reached a conclusion, but am not sure, and was hoping that you guys could help me out with this. I was considering the scenario where you have a closed tube like the one you see in the image. Since there is no upper contact with atmospheric pressure, the water will not fall to the plate. It is also known that, from a specific height, the water column weight overcomes atmospheric pressure, thus allowing the water column to fall partially. I have calculated that, for an atmospheric pressure of 101300 Pa, one would need a water column of 10.13 m to start producing vacuum. Now, the doubt comes in this case: imagine that, for some reason, the water level goes below the 10.13 m (consequently, adding more vacuum to the top of the tube). Will atmospheric pressure actually force the water up against gravity until it reaches the height of 10.13 m? I hope I could explain this properly. Thanks in advance for your help guys
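For reference, the 10.13 m figure quoted above follows from hydrostatic balance, h = P / (ρg), with standard atmospheric pressure of about 101300 Pa and g rounded to 10 m/s². A quick sketch:

```python
# Height of a water column whose weight balances atmospheric pressure: h = P / (rho * g).
P = 101300.0   # atmospheric pressure, Pa
rho = 1000.0   # density of water, kg/m^3
g = 10.0       # gravitational acceleration, m/s^2 (rounded; g = 9.81 gives ~10.33 m)
h = P / (rho * g)
print(h)  # 10.13 (metres)
```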
{"url":"http://www.physicsforums.com/showthread.php?p=4265155","timestamp":"2014-04-20T18:37:26Z","content_type":null,"content_length":"23284","record_id":"<urn:uuid:b8bfae9e-f477-40db-9ec8-051e20e8fb4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Venture Capital Exit Times

My two previous posts, Venture Capital Firms are Too Big and Venture Capital Funds - How the Math Works, described how venture capital investors will want to invest too much and exit only for very high returns. Why are those bad things for entrepreneurs and angel investors? Well, it turns out those can be extremely bad. These VC tendencies mean that:
1. Venture capital exit times are extremely long - much longer than you probably realize
2. The risks of actually achieving an exit decrease dramatically
This post describes why these factors make venture capital exit times so long.

Time Required to Generate 10x to 30x Returns
If the successful venture capital investments need to return 30x on average, or at the very least 10x, to generate a minimum VC fund return of 20% per year, how does that affect venture capital exit times? The graph below shows venture capital exit times required to generate a minimally acceptable VC fund return from the winning investments. Some companies will create increases in share value faster than 30 or 40% per year, but these are extremely rare. Everyone who has run a company knows that generating consistent 30 to 40% annual increases in value requires a great deal of hard work and some luck. This is especially true when you realize that these are not just the increases in the overall enterprise value, but instead the increase in the value per share of the company. The difference of course is the additional dilution from any future financings or employee equity plans. The 8 to 10 years shown in the graph above seems almost impossibly long. Could venture capital exit times really be that long? The fascinating graph below shows actual venture capital exit times for US VCs. It shows that the median time from when a VC initially invested to an M&A transaction had been pretty stable in the late 1990s at around three years. The time to exit dipped to about two years in 2000.
This was at the peak of the tech equity bubble when the velocity of transactions was incredibly high. In the years since the tech bubble burst, venture capital exit times have steadily climbed to where they are now at seven years. The practical implication of the data in this graph is more complicated than it appears on the surface.

What this Means for Entrepreneurs and Angels
To really understand how the decision to accept VC money will affect the founders, friends and family investors and angels, have another look at the graph above. Notice that the graph shows that today the median venture capital exit time is seven years. When you look quickly at the data, it might be tempting to think that the decision to add VC investors would add seven years to the time to exit, on average. A closer look shows that the real implications are quite different.

A Simple Model
The table below is a simple model to illustrate what is actually happening. Most VC financings involve multiple VCs. As rounds progress from series A to B to C and the rounds get larger, the number of VC investors in each round tends to increase. There is a strong tendency for VCs in an earlier round to participate in the next round. When does each type of investor actually invest? The friends and family investors invest at startup, in year zero. The angels invest at year two (in this example). The company progresses and decides to accept a VC series-A round in year four. Things go well and the company accepts subsequent series-B and -C rounds in years eight and ten. Things continue to go well and the VCs approve an exit that will give each of the VC investors the return they need in year sixteen. What was the actual hold period for each investor type? The first VC investment was in year four. So, the series-A VCs were invested for twelve years, the series-B VCs for eight years and the series-C VCs for six years.
There were two VCs in the series-A round, two new VCs in the series-B round and four new VCs in the series-C round. This combination of VC investments totaled $25 million, the actual amount shown in the graph in the post Venture Capital Firms are Too Big. So this model correctly illustrates the median amount invested by VCs prior to an M&A exit. It also correctly shows the median time from investment to exit of seven years, the current time in the graph above. But let's look a little closer.

What's Happening to the Angels and Entrepreneurs
What this example shows is that the decision to accept VC investment increases the time to exit by approximately 12 years, not the median time of 7 years. When VC investment was added to the corporate DNA, the time to exit increased to somewhere around sixteen years after the entrepreneurs started and twelve years after the angels invested. This is, of course, just a model. There is no actual data available to prove this. But this model is a reasonably good approximation of what really happens to exit times when venture capitalists invest.

My next post will continue the exploration of venture capital exit times. This is also a main theme in my new book "Early Exits - Exit Strategies for Entrepreneurs and Angel Investors - But Maybe Not Venture Capitalists".
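The hold times behind the "10x to 30x" discussion earlier in the post follow from simple compounding: the years needed to reach a multiple M at growth rate g solve (1 + g)^t = M, so t = ln M / ln(1 + g). A quick sketch of that arithmetic (my own, not the author's):

```python
import math

def years_to_multiple(multiple, annual_growth):
    """Years of compounding at annual_growth needed for value to reach multiple x."""
    return math.log(multiple) / math.log(1.0 + annual_growth)

# A 10x return at 40%/yr growth, and a 30x return at 30%/yr growth:
print(round(years_to_multiple(10, 0.40), 1))  # 6.8 years
print(round(years_to_multiple(30, 0.30), 1))  # 13.0 years
```

These bracket the "8 to 10 years" range the post reads off its graph.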
{"url":"http://www.angelblog.net/Venture_Capital_Exit_Times.html","timestamp":"2014-04-16T21:51:37Z","content_type":null,"content_length":"17412","record_id":"<urn:uuid:caa2566e-161c-4601-a4be-18b1fa0a2e45>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Beginner's problems
These problems are intended for beginners who are just taking their first steps in a new programming language, or in programming itself. Note that similar simple problems dedicated to string operations are put in a separate volume, String Tricks. Note that you can solve most tasks in any order you want, though a few tasks may require solving some simpler ones first...
{"url":"http://www.codeabbey.com/index/task_list/beginners-problems","timestamp":"2014-04-18T00:14:09Z","content_type":null,"content_length":"7321","record_id":"<urn:uuid:08422dc2-2755-41d8-9764-6ec197ab5057>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
F303 INTERMEDIATE FINANCE - SPRING 1997 - HOLDEN, MYSKER, WEDIG

PROJECT 1: EX-ANTE PORTFOLIO FORMATION

You may use either Excel (5.0 or higher) or Lotus for Windows (4.0 or higher) for steps (1)-(3). Steps (4)-(8) require the use of the Excel-based Interactive Optimizer. The F303 Home Page provides a data file for this project in both formats and the Interactive Optimizer (see access directions below). The data file contains:
• 60 months of returns for twenty three stock indices
• 60 months of returns for the U.S. one month Treasury Bill (the riskfree asset) and
• market capitalizations for the twenty three countries that can be used to calculate the weights of a "value weighted" portfolio.

The project asks you to download the Excel-based Interactive Optimizer plus either the Excel data file or the Lotus data file. Then do the following things:
1. pick any five of the twenty three stock indices and calculate their means, standard deviations, and correlations and calculate the average riskfree rate,
2. for your five stock indices, calculate portfolio weights for the following three portfolios:
   □ value-weighted,
   □ equally-weighted, and
   □ precision-weighted,
3. print out your results for steps (1) and (2), but there is no need to print out the dataset,
4. enter the following info into the Excel-based Interactive Optimizer:
   □ the means, standard deviations, and correlations of the five indices,
   □ the average riskfree rate,
   □ the value-weights in the "Own Portfolio" section,
5. print the range A1:T14 using a Landscape page orientation – this range contains:
   □ the Mean-Std Dev graph,
   □ the optimal risky portfolio weights graph,
   □ the numerical weights, portfolio mean and portfolio standard deviation of:
      ☆ the optimal risky portfolio,
      ☆ the minimum variance portfolio,
      ☆ the "own portfolio,"
6. enter the equal-weights in the "Own Portfolio" section and print the range A1:T14 using a Landscape page orientation,
7. enter the precision-weights in the "Own Portfolio" section and print the range A1:T14 using a Landscape page orientation,
8. add a written comment on one of the printed pages as to which of the three portfolios (value-weighted, equally-weighted, or precision-weighted) offers the best risk-return trade-off and why.

Turn in your printouts and both of your spreadsheets on a 3 1/2 inch disc.

Key Dates
│ Date │ Activity │
│ 1/21 │ Kickoff of project 1 │
│ 1/24 │ Optional computer lab for Project 1, 1:00-5:00 p.m., in BU415, BU417, BU419, and PV151. │
│ 1/28 │ Project 1 due date │

Relevant Spreadsheet Functions
│ Goal │ Excel Function │ Lotus Function │
│ To calc. the mean, type: │ =AVERAGE(C3:AZ3) │ @AVG(C3.AZ3) │
│ To calc. the std dev of the sample, type: │ =STDEV(C3:AZ3) │ @STDS(C3.AZ3) │
│ (To calc. the std dev of the population:) │ =STDEVP(C3:AZ3) │ @STD(C3.AZ3) │
│ To calc. the correlation, type: │ =CORREL(C3:AZ3,C4:AZ4) │ @CORREL(C3.AZ3,C4.AZ4) │
│ To calc. the sum, type: │ =SUM(BO3:BO6) │ @SUM(BO3:BO6) │

Relevant Spreadsheet Commands
│ Goal │ Excel Commands │ Lotus Commands │
│ To lock row and column headings │ Window Freeze Panes │ View Freeze Titles │
│ To cycle relative/absolute versions │ F2, put cursor on term, F4 (repeat) │ F2, put cursor on term, F4 (repeat) │

Directions for downloading the Project 1 dataset and the Excel-based Interactive Optimizer from the F303 Home Page:
1. Double-click on the Netscape icon.
2. Enter the Web address: www.bus.indiana.edu/finweb/f303home.htm
3. Click on Download the Project 1 dataset.
4. When you see the Netscape message "No Viewer Configured," click on "Save to Disk." Alternatively, if you see the Microsoft Internet Explorer message "Confirm File Open," click on "Save As."
5. Click on Download the Excel-based Interactive Optimizer.
6. When you see the Netscape message "No Viewer Configured," click on "Save to Disk." Alternatively, if you see the Microsoft Internet Explorer message "Confirm File Open," click on "Save As."
7. Open the downloaded files in Excel as usual.
Important Rule for working with files in the UCS Clusters: Before you walk away from the PC, go into the Windows Explorer and check that you have saved the final versions of your spreadsheets on your floppy disk. It is easy to accidentally save your files to the Hard Disk and lose your work if you did not save it to a floppy disk.
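For anyone redoing step (2) outside a spreadsheet, the three weighting schemes can be sketched as follows. This is Python rather than Excel/Lotus, the data are made up, and "precision-weighted" is assumed here to mean inverse-variance weights:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.01, 0.05, size=(60, 5))       # 60 months x 5 indices (fake data)
caps = np.array([500.0, 300.0, 150.0, 30.0, 20.0])   # hypothetical market caps

value_w = caps / caps.sum()                  # value-weighted
equal_w = np.full(5, 1.0 / 5.0)              # equally-weighted
precision = 1.0 / returns.var(axis=0, ddof=1)
precision_w = precision / precision.sum()    # precision- (inverse-variance-) weighted

for name, w in [("value", value_w), ("equal", equal_w), ("precision", precision_w)]:
    print(name, np.round(w, 3))
```

Each weight vector sums to 1, as the optimizer's "Own Portfolio" section requires.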
{"url":"http://www.kelley.iu.edu/cholden/F303_S97_Project1.html","timestamp":"2014-04-20T08:14:57Z","content_type":null,"content_length":"7896","record_id":"<urn:uuid:c3f1b976-dab4-468f-acf6-e7d3950ce5cc>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help: basis

June 12th 2013, 12:33 PM #1
What is the best way to find a basis of the subspace V of R^4 consisting of the vectors (x, y, z, w) with 2x - y = z - 3w?

June 12th 2013, 05:20 PM #2
Re: basis
Hey n22. Try constructing a matrix and row-reduce it to find the number of free parameters. Then use the free parameters to construct a basis.

June 12th 2013, 09:09 PM #3
Re: basis
In this case are there 3 parameters? You can't really row reduce... so what now? Please specify. Thanks.

June 13th 2013, 05:40 AM #4
Re: basis
This is the same as z = 2x - y + 3w. Such a vector can be written as <x, y, z, w> = <x, y, 2x - y + 3w, w> = <x, 0, 2x, 0> + <0, y, -y, 0> + <0, 0, 3w, w> = x<1, 0, 2, 0> + y<0, 1, -1, 0> + w<0, 0, 3, 1>. From that it should be obvious what a basis is and what the dimension is.
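The proposed basis can be checked mechanically; a quick sketch (my own, not from the thread):

```python
import numpy as np

# Candidate basis for {(x, y, z, w) in R^4 : 2x - y = z - 3w}, i.e. z = 2x - y + 3w.
B = np.array([[1, 0, 2, 0],
              [0, 1, -1, 0],
              [0, 0, 3, 1]])

# Every row satisfies the defining equation 2x - y - z + 3w = 0 ...
normal = np.array([2, -1, -1, 3])
print(B @ normal)                 # [0 0 0]

# ... and the rows are linearly independent, so the subspace has dimension 3.
print(np.linalg.matrix_rank(B))   # 3
```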
{"url":"http://mathhelpforum.com/advanced-algebra/219782-basis.html","timestamp":"2014-04-18T21:16:56Z","content_type":null,"content_length":"37577","record_id":"<urn:uuid:b0a81dd7-c0c1-4c54-9dbb-24f076a9f31c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Cloth Diapers & Parenting Community - DiaperSwappers.com - fun math related activities for interested 3 YO Parenting Talk mrosehughes 02-21-2013 03:27 PM fun math related activities for interested 3 YO At DS's parent-teacher conference today, one of his teachers brought up that he seems really interested in math (and puzzles, but we knew about that one), and wondered if we had ideas for things he could do to foster that interest. Of course, aside from counting games, etc, I could think of nothing. The teachers are going to look in the older classrooms for ideas, but I thought I'd ask my wonderful DS resource, too :) Activity suggestions for both school and home would be great! More info: DS is nearly 3.5, really into puzzles (does 48+ pieces on his own) and can count pretty high (30+). He's the oldest in his class (youngest will be 3 in June), but there is time during the day that he can play with things the other kids can't do. He can't, however, have anything with small pieces at school (because of state law). EmilytheStrange 02-21-2013 03:31 PM Re: fun math related activities for interested 3 YO we love our abacus around here :) most of the things I think about have small pieces, but there's always flashcards with like balls or whatever to count and then add up, etc. might be able to use some of those velcro food pieces to make math sheets. Like have him reproduce 2 apple slices plus 1 orange slice to equal something.. I dunno.. just thinking out loud here :) just depends on what they already have in the classroom. cmarsh31 02-21-2013 04:45 PM Re: fun math related activities for interested 3 YO My DS LOVED connect the dots at that age (once he could recognize the numbers 1-30) Loved, loved, loved them. 
He always enjoyed the hundred board at school too (easy to do at home - felt squares, tiles, whatever, numbered - then put them in order)

joslin 02-21-2013 05:19 PM Re: fun math related activities for interested 3 YO At that age ds was loving helping me cook, and I take the approach of working these types of things into regular life, so I introduced fractions and bought him his own kid knives and worked with him to learn half, quarter, etc. Then we extended that to the clock which really helped. Also he got the learning resources cash register from santa which also has number games built in, and then there are always apps on the i-whatever.

jeebee 02-21-2013 11:03 PM Re: fun math related activities for interested 3 YO Time telling. What about chess? It's not exactly maths but it is a game of logic. My daughter loves it.

7mom7 02-21-2013 11:25 PM Re: fun math related activities for interested 3 YO DS1 LOVES math too. At that age we did a lot of sorting, adding, subtracting with food. It actually worked really well and he thought it was funny. We'd count everything in front of him and either subtract by eating the food or add to the pile and count again. He started grasping multiplication really early by making 2 rows of 6 etc. We'd use anything...chips, grapes, apple slices. We'd practice counting to 100 in the car and reading speed limit signs/road signs. I think all of our math took place while we were eating or driving...lol!

MamaLump 02-22-2013 12:16 PM Usually not for school, but maybe his teachers could ask a couple questions here and there:

bobbyjk 02-22-2013 01:43 PM Re: fun math related activities for interested 3 YO pattern activities, tangrams, a scale that he can put things on to add/subtract and visually see which is more
{"url":"http://diaperswappers.com/forum/printthread.php?t=1493846","timestamp":"2014-04-21T07:34:56Z","content_type":null,"content_length":"9864","record_id":"<urn:uuid:ea2f5ef4-14a9-49de-9b26-bf96f4333b3a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Elgin, IL Precalculus Tutor Find an Elgin, IL Precalculus Tutor ...Perimeter and Area 21. Circles 22. Solid Figures 23. 17 Subjects: including precalculus, reading, calculus, geometry ...I am able to tutor all Math subjects from Pre-Algebra to Calculus and Basic Microsoft Excel. I am available in the Roselle Area after 7pm on weekdays and weekends as needed. I am very passionate about mathematics and connecting it with students learning. 10 Subjects: including precalculus, calculus, algebra 2, algebra 1 ...I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College. 7 Subjects: including precalculus, geometry, algebra 2, trigonometry ...I am certified to teach grades K-8 in Michigan and grades 1-6 in NY. Lastly, I have completed 31/41 credits for my M.S. Special Education degree from Western Governors University. 19 Subjects: including precalculus, reading, English, writing Hi there, I am a recent graduate from Knox College with a major in Physics and a minor in Mathematics. I hope that I can be of service to anyone who requires aid with these disciplines. They offer life-long skills by offering problem-solving techniques and numerical tools. 16 Subjects: including precalculus, chemistry, algebra 2, calculus
{"url":"http://www.purplemath.com/Elgin_IL_precalculus_tutors.php","timestamp":"2014-04-20T23:35:44Z","content_type":null,"content_length":"23629","record_id":"<urn:uuid:70839b31-03f1-4997-9e47-9d269cdd2467>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Goes Pop!

Dessert aside, long-time readers are probably already aware of my decidedly mixed feelings towards Pi Day (see, for example, here). Nevertheless, the holiday seems only to be growing in popularity, and so I feel compelled to take it to task once again. In my earlier post, I complained about mathematical mistakes that frequently appeared in Pi Day articles aimed at a general audience; these errors still exist, but rather than nitpick, let me instead focus on the most bothersome activity of the day. I'm speaking, of course, about pi recitation competitions. Reciting the digits of pi is, unfortunately, becoming a popular activity – dare I say even a tradition – on Pi Day. Competitors recite as many digits of pi as they can, and the person who can recite the most digits is declared the winner. As I've said before, I fail to see the point of this exercise. From . . . → Read More: Pi Day Post Mortem

When shopping for gifts for someone, there are a few wells from which one frequently draws inspiration. A person's favorite TV show, for example, or favorite band; such preferences can often provide good fodder for gift ideas. One's career can also be included in this list – in my case, the result is that I am frequently the recipient of math-themed paraphernalia. I've written before about my mixed feelings regarding math t-shirts. Today, though, I'd like to tackle a different type of gift: the math clock. This is inspired, in part, by a gift I received from my grandmother (bless her heart) over the holiday. The gift, pictured below, was an analog clock in which the numbers have been replaced by (what one would hope to be) mathematically equivalent expressions. Don't tell her, but we haven't yet put this clock up in . . . → Read More: Math Clock Showdown

Another year, another night of dressing in costumes on a quest for candy and/or debauchery.
In previous years, I've tried to encourage mathematically influenced Halloween costumes (see here and here), and so if for no other reason than the sake of consistency, this year will be no different. Here are some new ideas for 2010:

1. The Count

This costume idea was suggested to me in the comments section of last year's list. Known and loved by children and adults alike, this costume would give the wearer ample opportunity to teach children about the wonders of math. If you're one of those people who give out pennies or toothbrushes, though, I would caution you against this costume decision, since the combination of a lack of candy and an insistence on discussing mathematics may dramatically increase the likelihood of you being at the receiving end of a "trick." . . . → Read More: Math Goes Trick Or Treating Yet Again

In the past, I've used this blog as a platform to make clear my mixed feelings about Pi Day, a math-themed holiday celebrated every year on March 14th (3/14, har har) in honor of the beloved mathematical constant π. My thoughts on the subject can be found here. It would seem that I am not alone in my frustration. Michael Hartl, an educator and entrepreneur (as well as a Ph.D. graduate from Caltech), has just today launched a website in favor of Tau Day as a replacement for Pi Day. However, his argument (based on a 2001 paper by Bob Palais) goes a step farther – he argues that Pi Day shouldn't be celebrated because π isn't the fundamental constant we should be considering! Rather, he argues that the true fundamental constant is τ = 2π, which is approximately 6.283185… . Hartl argues that this should be the fundamental constant of interest, and . . . → Read More: Happy Tau Day?

If you come here regularly, you know of my complaints regarding so-called "math holidays" that get plenty of press, but rarely have anything to do with actual mathematics. The most well known is Pi Day, celebrated here in the States on March 14th, also known here as 3/14.
Aside from the mathematical arguments one can make for or against this holiday, there is a larger problem. It's all well and good to celebrate Pi Day on the date representing the first three digits of pi, but this is only possible if we write dates in the MM/DD format. Most of the world, however, uses the (more logical) DD/MM format, therefore depriving them of such a delicious play on numbers. Many loyal international fans of this holiday no doubt decry the fact that April has only 30 days, for otherwise they could simply celebrate Pi Day on 31/4. As it is, . . . → Read More: e day?

Big ups to Liz Landau for bringing attention to one of the most important unsolved math problems of our time, the Riemann Hypothesis. Over at the CNN SciTechBlog, she has written a nice article on the problem aimed at a general audience. This year marks the 150th anniversary of the publication of Riemann's manuscript, where he proposed the now famous conjecture on the zeros of the Riemann zeta function, and November was the month in which it was published. However, as Landau points out, the exact date of publication isn't known, which makes having a birthday celebration a little tricky. The American Institute of Mathematics picked today to celebrate, and in honor of Riemann talks were held all around the world. The Riemann Hypothesis has held the attention of the mathematical community for a century and a half, but it's also made occasional forays into the realm of popular culture. For . . . → Read More: Happy Birthday, Riemann Hypothesis!

UPDATE: 2010 costume ideas can be found here!

Around this time last year, I wrote up some suggestions for math-themed Halloween costumes. Based on the traffic I received from that article, I can tell that many people are desperate to integrate their holiday festivities with mathematics. For this reason, and in the interest of not breaking tradition, I thought it would be fitting to suggest a few more ideas for this year.

1) Mathemagician.
In the strictest sense, a mathemagician is simply a mathematician who does magic. Or, perhaps it is a magician who does mathematics. You may (rightfully) be tempted to say that every mathematician does magic, but the tricks of the mathemagician are geared more towards a general audience, although they do often feature mathematics in a starring role. Sadly, the same cannot always be said for the typical magician. There are examples of mathemagicians in real life, . . . → Read More: Math Goes Trick Or Treating Again

Let me begin by saying that, in response to the question "Why is 9/09/09 so special?", my response is simple: it's not. In fact, I would argue that 09/08/09 is much more interesting. This claim has nothing to do with numerology, and everything to do with President Obama's speech to the youth of America on the value of education. The speech made very clear the importance of taking education seriously, and hopefully convinced students that a good education benefits not only themselves, but also society at large. In case you missed the speech, the transcript can be found here. Although the speech was about education in general, mathematics got a little bit of love too. Here's one such example: What you make of your education will decide nothing less than the future of this country. What you're learning in school today will determine whether we as a nation . . . → Read More: Make Money Money, Make Money Money Money! (and Learn Math, too)
The Cosecant Function

The cosecant function, abbreviated csc, is called a reciprocal function because it is the reciprocal of one of the three basic trig functions, sine. The cosecant function uses the ratio hypotenuse over opposite (the reciprocal of sine's opposite over hypotenuse). The hypotenuse of a right triangle is always the longest side, so, when working with triangles, the numerator of this fraction is always larger than the denominator. As a result, the cosecant function always produces values bigger than 1. You can use the values in the preceding figure to determine the cosecants of the two acute angles.

Suppose someone asks you to find the cosecant of an acute angle if you know that the hypotenuse is 1 unit long and that the right triangle is isosceles. Remember that an isosceles triangle has two congruent sides. These two sides have to be the two legs, because the hypotenuse has to be the longest side. So to find the cosecant, you would follow these steps:

1. Find the lengths of the two legs.

The Pythagorean theorem says that a^2 + b^2 = c^2, but because the two legs are congruent, you can take out one variable and write the equation as a^2 + a^2 = c^2. Put in 1 for c and solve for a, which gives a = 1/√2. You can leave the radical in the denominator and not worry about rationalizing, because you're going to input the whole thing into the cosecant ratio anyway, and things can change.

2. Use the length of the opposite side in the ratio for cosecant.

The cosecant is hypotenuse over opposite: 1 ÷ (1/√2) = √2. The square root of 2 is about 1.4, so it fits right in there with the possible values of the cosecant of an angle.
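The reciprocal relationship described above is easy to check numerically. Here is a minimal Python sketch; the helper name `csc` is my own choice, since Python's `math` module has no built-in cosecant:

```python
import math

def csc(angle_deg):
    """Cosecant as the reciprocal of sine (angle given in degrees)."""
    return 1 / math.sin(math.radians(angle_deg))

# The isosceles right triangle with hypotenuse 1 has two 45-degree
# acute angles and legs of length 1/sqrt(2), so the cosecant of
# either acute angle is 1 / (1/sqrt(2)) = sqrt(2), about 1.4.
print(round(csc(45), 3))  # 1.414
```

As expected, the result is bigger than 1, since the hypotenuse in the numerator is always the longest side.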
Triangle condition for quantum 6j symbols?

For SU2, and even SU2(q), the triangle condition is, well, the triangle condition (conveniently, all irreps are completely described by a (half-)integer J). Additionally, all three J of a triple must add to an integer. But what is the analogue of that for some arbitrary (quantum) group? As usual, I don't have a clue, only a guess :-)

Comment: A random example in group A3: let the reps be 1R1+2R2+1R3, 2R1+R2+1R3 and 5R1+2R2+2R3; then you check every irrep separately: 125 - triangle fail, 212 - parity fail, 112 - pass; overall: fail. Of course, I could be totally wrong... What is the correct condition? (Maybe it's not even a simple one: outer multiplicity and all that.)

1 Answer

As you probably guessed, the answer is "it's complicated." I guess you mean: what is the rule for when the tensor product of two irreps $V_\lambda \otimes V_\gamma$ contains an irrep $V_\delta$? The answer involves identifying the irrep with its highest weight $\lambda$ and viewing it in the Weyl chamber. Humphreys' "Intro to Lie Algebra and ..." is a good place to learn this stuff. Roughly, the analog of the triangle inequality says that if you add all images of $\lambda$ under the Weyl group to $\gamma$, then $\delta$ must fall in the convex hull. The analog of the integrality condition is that $\lambda+\gamma-\delta$ must be in the root lattice: in general the weight lattice mod the root lattice is a small abelian group that Humphreys will tell you about. The full story for the tensor product decomposition is given by the beautiful Racah formula, which can be visualized in rank 2 thus: mark the dimensions of the weights of $V_\lambda$ on tracing paper in the weight lattice. Slide it to put the 0 over $\gamma$. Fold the tracing paper along the walls of the extended Weyl chamber until it lies entirely in that chamber.
Add the numbers over $\delta$, with minus signs on the ones written backwards (i.e., an odd number of folds). That is the multiplicity of $V_\delta$. A similar story holds for quantum groups at roots of unity (you have to throw away pesky non-irreducible reps), and there is a perfectly analogous quantum Racah formula, with the Weyl alcove taking the role of the chamber. To toot my own horn, see http://arxiv.org/abs/math/0308281 on the arXiv for a pretty full treatment.

Comment: Too bad it's not as simple as SU2. BTW, I read about Weyl chambers before, but I always thought they contained instruments of math torture designed for amateurs like me :-) – Hauke Reddmann Oct 5 '11 at 9:31
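For the SU2 case mentioned in the question, the triangle condition plus the integrality ("parity") rule can be checked mechanically. A small illustrative sketch (my own helper, not from the thread), storing each spin as 2J so everything stays an integer:

```python
def su2_allowed(two_j1, two_j2, two_j3):
    """Return True if spins J1, J2, J3 (each passed as 2*J to keep integers)
    satisfy the SU2 triangle condition |J1 - J2| <= J3 <= J1 + J2
    and the rule that J1 + J2 + J3 must be an integer."""
    triangle = abs(two_j1 - two_j2) <= two_j3 <= two_j1 + two_j2
    integral = (two_j1 + two_j2 + two_j3) % 2 == 0
    return triangle and integral

# Two spin-1/2 irreps couple to spin 0 or spin 1, but not to spin 1/2:
print(su2_allowed(1, 1, 0))  # True
print(su2_allowed(1, 1, 1))  # False: J1 + J2 + J3 = 3/2 is not an integer
print(su2_allowed(1, 1, 2))  # True
```

The point of the answer above is that for higher-rank groups these two ingredients generalize to a convex-hull condition in the Weyl chamber and to membership of $\lambda+\gamma-\delta$ in the root lattice.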
Angles of Elevation and Depression

#1, March 2nd 2010, 01:09 PM (member since Mar 2010)

Here's the question. A 50-meter vertical tower is braced with a cable secured at the top of the tower and tied 30 meters from the base. What angle does the cable form with the vertical tower? How would I go about solving this problem?

#2, March 2nd 2010, 01:28 PM (A riddle wrapped in an enigma; member since Jan 2008; Big Stone Gap, Virginia)

Hi inneedofserioushelp2013,

Nice user name. Did you draw a triangle? Did you label it? Use the inverse tangent to find the angle the cable makes with the tower.
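Following the hint in the reply, a quick check of the arithmetic (my own worked sketch, not part of the original thread): the 50 m tower is the side adjacent to the angle at the top, the 30 m distance along the ground is the opposite side, so the angle between the cable and the tower is arctan(30/50):

```python
import math

tower = 50.0  # vertical tower height in meters (side adjacent to the angle)
base = 30.0   # distance from the tower's base to the tie point (opposite side)

# Angle the cable makes with the vertical tower:
angle_deg = math.degrees(math.atan(base / tower))
print(round(angle_deg, 2))  # about 30.96 degrees
```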
The Kochen-Specker Theorem 1. One of the basic ideas of Bohmian Mechanics is that position is the only basic observable to which all other observables of orthodox QM can be reduced. So, Bohmian Mechanics will qualify VD as follows: “Not all observables defined in orthodox QM for a physical system are defined in Bohmian Mechanics, but those that are (i.e. only position) do have definite values at all times.” Both this modification of VD and the rejection of NC immediately immunize Bohmian Mechanics against any no HV argument from the Kochen Specker Theorem. See sec.s 10 and 11 of the entry on Bohmian Mechanics for further discussion. 2. Note that if we understand the ‘expectation’ values deriving from the hidden states as what someone knowing such a state should expect as the average measurement result, then this claim is correct only given an assumption of faithful measurement (FM). FM is another typical assumption of a realist (or noncontextualist HV) interpretation of QM. The KS argument can be given without FM, since it makes claims about possessed values, not measured values. Only in statistical arguments, like von Neumann's and Clifton's, must FM be assumed. We suppress FM in the main text with the exception of Contextuality (see Section 5.3). 3. See Bell (1966: 4) and Jammer (1974: 274, 304); see also Kochen and Specker (1967: 82/322, theorem 3) for a parallel example and criticism of von Neumann. This criticism of von Neumann's argument was first raised by Grete Hermann (1935). Einstein is also reported to have made the same criticism (see Shimony 1993: 89). 4. Apart from being positive and normalised (μ(id)=1, id the identity operator), a probability measure in Gleason's sense must be (countably) additive for families of mutually orthogonal projections. This natural assumption is the analogue of KS's assumption (3) for compatible observables. 5. According to Bell (1987: 167), Kochen did not know of Gleason's work at the time. 6. 
For a discussion of the relationship between contextuality and nonlocality, see Mermin (1990a) and Clifton (1993). 7. See Kochen and Specker (1967: 71-72/310-11). 8. By elementary vector algebra, viz.: a·b=|a| |b| cos φ, where a·b is the inner product of vectors a and b and φ is the angle between them. 9. Both arguments, i.e., von Neumann's and Clifton's, presuppose faithful measurement (FM) (see also fn 1). Indeed, an eventual failure of FM would be the natural way to explain why a certain constraint on possessed values does not show up in the measurement statistics. 10. A state of a spin-1 particle where prob(v(S[x]^2)=0)=1 is |S[x]=0>. Expanding this state in the eigenvectors of S[x′] yields cos φ (1 + sin^2 φ) ^−½ |S[x′] = 0> + sin φ (1 + sin^2 φ) ^−½ (|S[x′] = −1> − |S[x′] = +1>) Now, with cos φ=1/3, we get: prob(v(S[x′]^2) = 0) = prob(v(S[x′]) = 0) = (cos φ)^2 / (1 + sin^2 φ) = 1/9 (1 + 8/9)^−1 = 1/17 11. See Redhead (1987: 121). See also Kochen and Specker (1967: 64/299, eq. 4) and Fine and Teller (1978: 631) where the principle appears under the name "functional relation condition". 12. The argument here is due to Redhead (1987: 132). Redhead himself suggests that there is an internal tension in his argument. He remarks that STAT FUNC, as derived from the QM formalism, is a statement about (probabilities for) measured values, (see his remark that when QM → STAT FUNC, the latter is "understood in terms of the statistics of measurement results", p. 132), while FUNC talks about "the possessed values of the observables" (p. 121, my italics). This distinction seems to devaluate Redhead's reasoning to some extent. 
If there is a real difference between possessed values and measured values and no explicit assumption equating both types of values is made, then STAT FUNC (version 1) derivable from FUNC talks about a statistical relation among functions of possessed values, while STAT FUNC (version 2) is derived from QM, hence by definition talks about the statistics of measured values. The two versions initially are entirely unconnected and FUNC → STAT FUNC (version 1) cannot lend plausibility to STAT FUNC (version 1). Now, don't we have good reason to carefully distinguish measured and possessed values in QM? Fortunately, no such distinction is really necessary. First, distinguish a strong sense of possessed values where possessed value are values a system possesses independently from being measured and a weak sense of possessed values where possessed values are possessed simpliciter, i.e. the system just has them, perhaps as a result of measurement, perhaps not. Second, consider the hedge phrase of a system ‘displaying a value’ as a result of a measurement. It is minimally a phrase like this which must go into the description of the events for which the Statistical Algorithm yields probabilities. But the hedge phrase is actually superfluous. Physical systems do not display anything unless they have it. Accordingly, if a system displays a value upon measurement, then it possesses this value (in the weak sense). By contrast, if a system displays a value upon measurement and measurement is faithful, then it possesses that value independently of measurement (thus, possessed it in the strong sense). (See the main text below, § 5.3, for more on faithful measurement; see also Redhead, p. 131 after eq. (11), for the connection between faithful measurement and possessed values.) Now, the weak sense of possessing a value is fully sufficient to formulate FUNC and Redhead's plausibility argument for STAT FUNC remains untouched. 
(Indeed, the derivation FUNC → STAT FUNC itself uses the Statistical Algorithm, which would be an illegitimate procedure if the latter did not refer to probabilities for possessed values, in some sense.) 13. It is not exactly true that the QM formalism prescribes any observable to have a value. In fact, the formalism itself is entirely silent about values, apart from the statistical predictions it entails via the statistical algorithm (see Redhead 1987: 8). But a crucial assumption of orthodox interpretations is the eigenstate-eigenvalue link (see Fine 1973: 20): Observable A on a system has value a[k] iff the system is in state |a[k]>. The only if direction of this principle which leads to a minimum of value ascriptions is endorsed in van Fraassen's original version of the modal interpretation (this direction is equivalent to the Eigenvector Rule in Redhead 1987: 73, 120) while it denies the if direction, which prevents that more than a minimum of values are ascribed to a 14. This is Redhead's (1987: 135-36) construal of a proposal by Fine (1974). 15. See Fine and Teller (1978: 636), Redhead (1987: 138) and references therein. 16. What we mean here is the following. A definition of observable f(A) (or observable A + B, or observable A·B, both constructed from observables A and B) would be that it is an observable which takes a value calculated by measuring v(A) (or measuring v(A) and v(B)) and applying f to the result (or calculating v(A) + v(B), or calculating v(A) v(B)). FUNC, the Sum Rule and the Product Rule, as restricted to one measurement context, trivially repeat these definitions, and there obviously is no point in testing, e.g., whether v(A·B) really equals v(A) v(B), if the former expression is defined by the latter. 17. The proviso ‘in a sense’ comes from the fact that a colourable subset of the set of observables corresponding to all directions in R3, since it is a proper subset of the latter, is not itself continuous in the intuitive sense. 
We can, however, define a probability function from such a colourable subset into [0, 1] which obeys the usual continuity definition of elementary calculus.
{"url":"http://plato.stanford.edu/entries/kochen-specker/notes.html","timestamp":"2014-04-21T09:37:25Z","content_type":null,"content_length":"22797","record_id":"<urn:uuid:efca8496-916d-4533-b2b7-ce75ea7b7d17>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00487-ip-10-147-4-33.ec2.internal.warc.gz"}
Convergence of multiple ergodic averages along polynomials of several variables Results 1 - 10 of 22 - Acta Math "... Abstract. We establish the existence of infinitely many polynomial progressions in the primes; more precisely, given any integer-valued polynomials P1,..., Pk ∈ Z[m] in one unknown m with P1(0) =... = Pk(0) = 0 and any ε> 0, we show that there are infinitely many integers x, m with 1 ≤ m ≤ x ε suc ..." Cited by 30 (4 self) Add to MetaCart Abstract. We establish the existence of infinitely many polynomial progressions in the primes; more precisely, given any integer-valued polynomials P1,..., Pk ∈ Z[m] in one unknown m with P1(0) =... = Pk(0) = 0 and any ε> 0, we show that there are infinitely many integers x, m with 1 ≤ m ≤ x ε such that x+P1(m),..., x+Pk(m) are simultaneously prime. The arguments are based on those in [18], which treated the linear case Pi = (i − 1)m and ε = 1; the main new features are a localization of the shift parameters (and the attendant Gowers norm objects) to both coarse and fine scales, the use of PET induction to linearize the polynomial averaging, and some elementary estimates for the number of points over finite fields in certain algebraic varieties. Contents , 2005 "... A long-standing and almost folkloric conjecture is that the primes contain arbitrarily long arithmetic progressions. Until recently, the only progress on this conjecture was due to van der Corput, who showed in 1939 that there are infinitely many triples of primes in arithmetic progression. In an a ..." Cited by 18 (2 self) Add to MetaCart A long-standing and almost folkloric conjecture is that the primes contain arbitrarily long arithmetic progressions. Until recently, the only progress on this conjecture was due to van der Corput, who showed in 1939 that there are infinitely many triples of primes in arithmetic progression. 
In an amazing fusion of methods from analytic number theory and ergodic theory, Ben Green and Terence Tao showed that for any positive integer k, there exist infinitely many arithmetic progressions of length k consisting only of prime numbers. This is an introduction to some of the ideas in the proof, concentrating on the connections to ergodic theory. , 2006 "... Abstract. We find the smallest characteristic factor and a limit formula for the multiple ergodic averages associated to any family of three polynomials and polynomial families of the form {p, 2p,..., kp}. We then derive several combinatorial implications, including an answer to a question of Brown, ..." Cited by 9 (2 self) Add to MetaCart Abstract. We find the smallest characteristic factor and a limit formula for the multiple ergodic averages associated to any family of three polynomials and polynomial families of the form {p, 2p,..., kp}. We then derive several combinatorial implications, including an answer to a question of Brown, Graham, and Landman, and a generalization of the Polynomial Szemerédi Theorem of Bergelson and Leibman for families of three polynomials with not necessarily zero constant term. We also simplify and generalize a recent result of Bergelson, Host, and Kra, showing that for all ε> 0 and every subset of the integers Λ the set n ∈ N: d ∗ ( Λ ∩ (Λ + p1(n)) ∩ (Λ + p2(n)) ∩ (Λ + p3(n)) )> (d ∗ (Λ)) 4 − ε} has bounded gaps for “most ” choices of integer polynomials p1, p2, p3. Contents , 2008 "... Let P = {p1,..., pr} ⊂ Q[n1,..., nm] be a family of polynomials such that pi(Zm) ⊆ Z, i = 1,..., r. We say that the family P has the PSZ property if for any set E ⊆ Z with d ∗ |E∩[M,N−1]| (E) = lim supN−M→ ∞ N−M> 0 there exist infinitely many n ∈ Zm such that E contains a polynomial progression o ..." Cited by 7 (1 self) Add to MetaCart Let P = {p1,..., pr} ⊂ Q[n1,..., nm] be a family of polynomials such that pi(Zm) ⊆ Z, i = 1,..., r. 
We say that the family P has the PSZ property if for any set E ⊆ Z with d ∗ |E∩[M,N−1]| (E) = lim supN−M→ ∞ N−M> 0 there exist infinitely many n ∈ Zm such that E contains a polynomial progression of the form {a, a + p1(n),..., a + pr(n)}. We prove that a polynomial family P = {p1,..., pr} has the PSZ property if and only if the polynomials p1,..., pr are jointly intersective, meaning that for any k ∈ N there exists n ∈ Zm such that the integers p1(n),..., pr(n) are all divisible by k. To obtain this result we give a new ergodic proof of the polynomial Szemerédi theorem, based on the fact that the key to the phenomenon of polynomial multiple recurrence lies with the dynamical systems defined by translations on nilmanifolds. We also obtain, as a corollary, the following generalization of the polynomial van der Waerden theorem: If p1,..., pr ∈ Q[n] are jointly intersective integral polynomials, then for any finite partition Z = ⋃k i=1 Ei of Z, there exist i ∈ {1,..., k} and a, n ∈ Ei such that {a, a+p1(n),..., a+pr(n)} ⊂ Ei. , 2008 "... We study recurrence, and multiple recurrence, properties along the k-th powers of a given set of integers. We show that the property of recurrence for some given values of k does not give any constraint on the recurrence for the other powers. This is motivated by similar results in number theory co ..." Cited by 5 (5 self) Add to MetaCart We study recurrence, and multiple recurrence, properties along the k-th powers of a given set of integers. We show that the property of recurrence for some given values of k does not give any constraint on the recurrence for the other powers. This is motivated by similar results in number theory concerning additive basis of natural numbers. Moreover, motivated by a result of Kamae and Mendès-France, that links single recurrence with uniform distribution properties of sequences, we look for an analogous result dealing with higher order recurrence and make a related conjecture. "... Abstract. 
For any measure preserving system (X, X, µ, T) and A ∈ X with µ(A)> 0, we show that there exist infinitely many primes p such that µ ` A ∩ T −(p−1) A ∩ T −2(p−1) A ´> 0 (the same holds with p − 1 replaced by p + 1). Furthermore, we show the existence of the limit in L 2 (µ) of the associat ..." Cited by 5 (1 self) Add to MetaCart Abstract. For any measure preserving system (X, X, µ, T) and A ∈ X with µ(A)> 0, we show that there exist infinitely many primes p such that µ ` A ∩ T −(p−1) A ∩ T −2(p−1) A ´> 0 (the same holds with p − 1 replaced by p + 1). Furthermore, we show the existence of the limit in L 2 (µ) of the associated ergodic average over the primes. A key ingredient is a recent result of Green and Tao on the von Mangoldt function. A combinatorial consequence is that every subset of the integers with positive upper density contains an arithmetic progression of length three and common difference of the form p − 1 (or p + 1) for some prime p. 1. , 2006 "... Abstract. Szemerédi’s Theorem states that a set of integers with positive upper density contains arbitrarily long arithmetic progressions. Bergelson and Leibman generalized this, showing that sets of integers with positive upper density contain arbitrarily long polynomial configurations; Szemerédi’s ..." Cited by 5 (3 self) Add to MetaCart Abstract. Szemerédi’s Theorem states that a set of integers with positive upper density contains arbitrarily long arithmetic progressions. Bergelson and Leibman generalized this, showing that sets of integers with positive upper density contain arbitrarily long polynomial configurations; Szemerédi’s Theorem corresponds to the linear case of the polynomial theorem. We focus on the case farthest from the linear case, that of rationally independent polynomials. We derive results in ergodic theory and in combinatorics for rationally independent polynomials, showing that their behavior differs sharply from the general situation. , 2008 "... 
In 1975 Szemerédi proved that a set of integers of positive upper density contains arbitrarily long arithmetic progressions. Bergelson and Leibman showed in 1996 that the common difference of the arithmetic progression can be a square, a cube, or more generally of the form p(n) where p(n) is any i ..." Cited by 4 (1 self) Add to MetaCart In 1975 Szemerédi proved that a set of integers of positive upper density contains arbitrarily long arithmetic progressions. Bergelson and Leibman showed in 1996 that the common difference of the arithmetic progression can be a square, a cube, or more generally of the form p(n) where p(n) is any integer polynomial with zero constant term. We produce a variety of new results of this type related to sequences that are not polynomial. We show that the common difference can be of the form [n δ] where δ is any positive real number and [x] denotes the integer part of x. More generally, the common difference can be of the form [a(n)] where a(x) is any function from a Hardy field which is sandwiched between two consecutive powers of x, that is, a(x)/x k → ∞ and a(x)/x k+1 → 0 for some non-negative integer k. This allows us for example to deal with functions that can be constructed by a finite combination of the ordinary arithmetical symbols, the real constants, the real variable x, and the functional symbols exp and log, and satisfy the previous growth assumptions. The proof combines a new structural result for Hardy sequences, techniques from ergodic theory, and some recent equidistribution results of sequences on nilmanifolds.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4484091","timestamp":"2014-04-19T06:07:57Z","content_type":null,"content_length":"35225","record_id":"<urn:uuid:1e652ddd-4947-402e-96b4-2c461acfbe78>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Whos frame of reference? What if he shines a laser pointer out of the front windshield? Does he see it escaping him at the speed of light? I think yes. However the observer sees it just overtaking the ship barely. This seems too bizarre... Actually, you cannot see light escaping from you. Once it has left the laser pointer, it's gone and you'll never see it again or have any awareness of its progress, unless it hits something and illuminates it and reflects back to you. Then what you see is a result of the light making a round trip from your laser pointer, to an object, and back to you. So the only way you can measure the speed of light is to have it reflect off of something and measure how far away that something is (with a ruler) and measure how long the round trip took (with a clock or other timing device) and then you can calculate the "average" speed of light during the round trip. But you have no idea whether the light took the same time to get to the object as it did to get back to you. The second postulate of Special Relativity defines those two times to be equal for any inertial measurement and this is the basis for a Frame of Reference. So when we say that the light propagates away from a high speed spacecraft at c, we mean that it is defined to be traveling at that speed according to the rest frame of the spacecraft and according to the same definition of a different rest frame for the earth, it is also traveling at c. It's not the least bit bizarre once you grasp what a Frame of Reference is. Einstein's 1905 paper introducing relativity is a good place to learn about this.
Lake Barrington, IL Algebra Tutor Hi!! My name is Harry O. I have been tutoring high school and college students for the past six years. Previously I taught at Georgia Institute of Technology, from which I received a Bachelor's in Electrical Engineering and a Master's in Applied Mathematics. 18 Subjects: including algebra 1, algebra 2, physics, GRE ...Many students who hated math started liking it after my tutoring. That is my specialty. I provide guidance based on their attitude, interest, and ability. 12 Subjects: including algebra 2, algebra 1, calculus, trigonometry I am very experienced and knowledgeable in many areas of math. I have a Bachelor's Degree in Mathematics Education and will be certified as a Secondary Mathematics Teacher in Illinois. I have done private tutoring for four years with all different levels of math and age groups. 10 Subjects: including algebra 1, algebra 2, calculus, geometry Hello, and thank you for taking the time to read my profile! I graduated from Illinois State University with a bachelor's degree in secondary mathematics education. I was one of two individuals accepted into a year-long pre-teaching internship at the college's laboratory school, where I completed my student teaching. 4 Subjects: including algebra 1, algebra 2, geometry, prealgebra I have a PhD in microbial genetics and have worked in academic research as a university professor and for commercial companies in the biotechnology manufacturing sector. I have a broad background in science and math, a love of written and oral communication and a strong desire to share the knowledg... 
35 Subjects: including algebra 2, SAT math, geometry, physics
Arithmetical Classification of Perfect Models of Stratified Programs, Fundamenta Informaticae 13. Results 1 - 10 of 42.

- 1997. Cited by 281 (57 self).
This paper surveys various complexity results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular propositional logic programming and datalog, but we also mention general logic programming with function symbols. Next to classical results on plain logic programming (pure Horn clause programs), more recent results on various important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and constraint logic programming. The complexity of the unification problem is also addressed.

- In The Logic Programming Paradigm: a 25-Year Perspective, 1999. Cited by 250 (18 self).
In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.

- Journal of Logic Programming, 1994. Cited by 245 (8 self).
We survey here various approaches which were proposed to incorporate negation in logic programs. We concentrate on the proof-theoretic and model-theoretic issues and the relationships between them.

- Journal of Logic Programming, 1994. Cited by 224 (21 self).
In this paper, we review recent work aimed at the application of declarative logic programming to knowledge representation in artificial intelligence. We consider extensions of the language of definite logic programs by classical (strong) negation, disjunction, and some modal operators and show how each of the added features extends the representational power of the language.

- Abstract in Proc. PODS 90, 1995. Cited by 86 (5 self).
We study the expressive powers of two semantics for deductive databases and logic programming: the well-founded semantics and the stable semantics. We compare them especially to two older semantics, the two-valued and three-valued program completion semantics. We identify the expressive power of the stable semantics, and in fairly general circumstances that of the well-founded semantics. In particular, over infinite Herbrand universes, the four semantics all have the same expressive power. We discuss a feature of certain logic programming semantics, which we call the Principle of Stratification, a feature allowing a program to be built easily in modules. The three-valued program completion and well-founded semantics satisfy this principle. Over infinite Herbrand models, we consider a notion of translatability between the three-valued program completion and well-founded semantics which is in a sense uniform in the strata. In this sense of uniform translatability we show the well-founded semantics to be more expressive than the three-valued program completion. The proof is a corollary of our result that over non-Herbrand infinite models, the well-founded semantics is more expressive than the three-valued program completion semantics.

- Journal of Logic Programming, 1993. Cited by 82 (5 self).
This paper surveys the main results that appeared in the literature on the computational complexity of non-monotonic inference tasks. We not only give results about the tractability/intractability of the individual problems but also analyze sources of complexity and explain intuitively the nature of easy/hard cases. We focus mainly on non-monotonic formalisms, like default logic, autoepistemic logic, circumscription, closed-world reasoning and abduction, whose relations with logic programming are clear and well studied. Complexity as well as recursion-theoretic results are surveyed. Work partially supported by the ESPRIT Basic Research Action COMPULOG and the Progetto Finalizzato Informatica of the CNR (Italian Research Council). The first author is supported by a CNR scholarship. Introduction: Non-monotonic logics and negation as failure in logic programming have been defined with the goal of providing formal tools for the representation of default information. One of the ideas und...

- 1996. Cited by 55 (1 self).
At a workshop held in Toulouse, France in 1977, Gallaire, Minker and Nicolas stated that logic and databases was a field in its own right (see [131]). This was the first time that this designation was made. The impetus for this started approximately twenty years ago in 1976 when I visited Gallaire and Nicolas in Toulouse, France, which culminated in a workshop held in Toulouse, France in 1977. It is appropriate, then, to provide an assessment as to what has been achieved in the twenty years since the field started as a distinct discipline. In this retrospective I shall review developments that have taken place in the field, assess the contributions that have been made, consider the status of implementations of deductive databases and discuss the future of work in this area. Introduction: As described in [234], the use of logic and deduction in databases started in the late 1960s. Prominent among the developments was the work by Levien and Maron [202, 203, 199, 200, 201] and Kuhns [1...

- Theoretical Computer Science, 1998. Cited by 37 (7 self).
Abduction -- from observations and a theory, find using hypotheses an explanation for the observations -- has gained increasing interest during the last years. This form of reasoning has wide applicability in different areas of computer science; in particular, it has been recognized as an important principle of common-sense reasoning. In this paper, we define a general abduction model for logic programming, where the inference operator (i.e., the semantics to be applied on programs) can be specified by the user. Advanced forms of logic programming have been proposed as valuable tools for knowledge representation and reasoning. We show that logic programming semantics can be more meaningful for abductive reasoning than classical inference by providing examples from the area of knowledge representation and reasoning. The main part of the paper is devoted to an extensive study of the computational complexity of the principal problems in abductive reasoning, which are: Given an inst...

- Theoretical Computer Science, 1994. Cited by 36 (1 self).
In this paper we introduce revision programming -- a logic-based framework for describing constraints on databases and providing a computational mechanism to enforce them. Revision programming captures those constraints that can be stated in terms of the membership (presence or absence) of items (records) in a database. Each such constraint is represented by a revision rule α ← α1, ..., αk, where α and all αi are of the form in(a) and out(b). Collections of revision rules form revision programs. Similarly to logic programs, revision programs admit both declarative and imperative (procedural) interpretations. In our paper, we introduce a semantics that reflects both interpretations. Given a revision program, this semantics assigns to any database B a collection (possibly empty) of P-justified revisions of B. The paper contains a thorough study of revision programming. We exhibit several fundamental properties of revision programming. We study the relationship of revision programming to logic programming. We investigate the complexity of reasoning with revision programs as well as algorithms to compute P-justified revisions. Most importantly from the practical database perspective, we identify two classes of revision programs, safe and stratified, with the desirable property that they determine for each initial database a unique revision.
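The remark above about implementations that compute stable models of propositional programs can be illustrated with a minimal brute-force sketch of the Gelfond-Lifschitz construction. The rule encoding and function names here are my own, not taken from any of the cited papers:

```python
# A normal rule is (head, positive_body, negative_body), all atom names.
from itertools import chain, combinations

def reduct(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate set; strip negative literals from the remaining rules.
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & candidate)]

def least_model(definite_program):
    # Least Herbrand model of a negation-free program by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(program, atoms):
    # A candidate set is stable iff it equals the least model of its reduct.
    subsets = chain.from_iterable(combinations(atoms, r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets
            if least_model(reduct(program, set(s))) == set(s)]

# p :- not q.   q :- not p.   -- two stable models, {p} and {q}
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(stable_models(prog, ["p", "q"]))  # [{'p'}, {'q'}]
```

Each stable model is one "solution" in the constraint-satisfaction reading described in the second abstract; real solvers avoid this exponential enumeration, but the definition checked here is the same.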
Hilshire Village, TX SAT Math Tutor ...I've moved from Houston to Austin and back again. I attended the University of Texas at Austin, where I pursued a degree in sound engineering and recording technology with an emphasis in pre-med and pre-law. I plan on attending an institution where I can achieve a medical doctorate simultaneously with a degree in jurisprudence. 38 Subjects: including SAT math, reading, physics, chemistry ...As a certified bilingual teacher, I am required to take 30 hours a year of professional development, which I cover by taking teaching workshops. I started university by doing 3 years of Engineering and then switched to Architecture, graduating with a B.Arch. I then did an interdisciplinary Master of Arts in Arid and Semi-arid Land Studies at Texas Tech. 41 Subjects: including SAT math, Spanish, English, reading ...Chemistry is one of my strongest subjects. I have tutored numerous students in high school, community college, and university level chemistry. Back in high school, I took AP Chemistry and scored a 5 on the AP exam. 37 Subjects: including SAT math, chemistry, calculus, writing ...While in college, I worked for my professors and tutored college students in Calculus I, College Math, and Geometry. My experience as a teacher taught me that every child does not learn math the same way. I used a variety of hands-on learning experiences in my classroom to make math comprehension easier for the diverse student population I taught. 24 Subjects: including SAT math, calculus, statistics, biology ...Through WyzAnt, I want to broaden my circle of students and get more tutoring opportunities. I am sure that the students and parents who come to me will return with satisfaction. I am very good at math, given that I did an undergraduate in Physics, where I had to use math in almost every subjec... 
13 Subjects: including SAT math, calculus, algebra 2, algebra 1
Buck Converter PPT (search results)

- EE462L, Spring 2014, DC−DC Buck Converter: buck converter for solar applications. A capacitor across the panel provides the ripple current required by the opening and closing of the MOSFET, since the panel needs a ripple-free current.
- DC to DC Converters, Buck Converter: build the buck circuit from a VDC source, a VPULSE source, an IRF150 switch, R, L, C, a DIN4002 diode, and GND.
- Current-Mode Buck Converter Examples: LM3495, LM5576, LT3713. Materials from the National Semiconductor website and Linear Technology data sheets.
- Chapter 3, DC to DC Converter (Chopper): buck, boost, and buck-boost converters; switched-mode power supply; bridge converter; notes on electromagnetic compatibility (EMC).
- Buck Converter Control Design, Sung-yeol Youn, Electrical Engineering, Virginia Tech, December 10th, 2008: open-loop Bode plots and output voltage; the final design satisfies the instantaneous voltage-change and settling-time specifications.
- Power/Thermal Impact of Network Computing, Cisco Router Technology Symposium, Evaldo Miranda & Laurence McGarry: digital cost trends, Pentium 4 thermal solution, buck converter; the area of digital logic is cut in half with every new generation, while the area of analog shrinks more slowly.
- Ansoft web seminar, PExprt/Maxwell 2D/SIMPLORER: Buck Converter/Transformer, Mark Christini, EM Application Engineering.
- Electronic Engineering Final Year Project 2008, Claire Mc Kenna, supervisor Dr Maeve Duffy: Point of Load (POL) power supply design; buck and multiphase buck converter simulation; Vicor V.I Chip simulation; buck converter vs. V.I Chips.
- EE462L, Spring 2014, DC−DC Boost Converter: the boost is a much more unforgiving circuit than the buck.
- Buck and buck-boost current waveforms: over a switching period TS = t1 + t2, the switch conducts for t1 = D·TS and is off for t2 = (1−D)·TS; the inductor current ramps between iL,min and iL,max with ripple ΔIL, and IL,crit marks the continuous-conduction boundary.
- Buck-Boost Converter, Tom Galos.
- Linear Technology LTC3115-1, 40V, 2A Buck-Boost Converter, Tony Armstrong: regulated output with input above, below, or equal to VOUT, enabling continual operation even with wide input fluctuations (automotive example).
- Discontinuous conduction mode: determine D, D1, and D2 for the buck and boost; summary table of IL, IL(max), and DDCM for buck, boost, and buck/boost.
- EE462L, Fall 2011, DC−DC Buck Converter: objective is to efficiently reduce a DC voltage; an inefficient DC−DC converter example, then lossless conversion of 39 Vdc to an average 13 Vdc; C's and L's operating in periodic steady state.
- Power device selection and a comparative study of transistor technologies for a zero-voltage-switching buck converter: new ultra-thin-wafer technology makes it possible to build non-punch-through (NPT) IGBTs in the 600 V range with a lightly doped collector.
- Conversion-chain study (12.5 A): 380V-12V-1V and 380V-48V-1V steps with bus converters, synchronous buck converters, and factorized power regulators with sine amplitude converters; efficiency, power density, and annual electrical running cost comparisons.
- Zero-current-switching (ZCS) quasi-resonant switch cells: switch conversion ratio µ; characteristics of the half-wave and full-wave ZCS cells in buck and boost converters; averaged switch modeling, assuming large converter filter elements.
- Introduction to Switching Regulators: how a buck converter operates, how to calculate power loss, how to select external components, and why a switcher is needed.
- PV applications: buck devices such as those from Azuray Technologies and Tigo Energy are "most effective in PV systems where shade or mismatch occurs only on a few PV panels"; the buck converter is installed only on the shaded panels.
- Buck Converter Circuit (translated from Indonesian): the buck converter consists of two transistors, an inductor, and a capacitor; the circuit analysis gives the average current and the inductance; in continuous conduction mode (CCM), Imin is always positive; CCM duty ratio.
- Input capacitor: the input current to a buck converter is discontinuous, with short rise and fall times, so an input capacitor is mandatory to provide a low-impedance supply for the converter and to reduce noise on the input supply.
- University of Illinois ECE445 Senior Design Laboratory: DC-DC converter for an EWB wind turbine project; Group 3: Jeong-Ah Lee, Chris Livesay, Qing Janet Wang.
- The current-mode buck converter: design issues include pole-zero cancellation, on-chip inductor current sensing, subharmonic oscillations, and the pulsewidth generator.
- Sliding Mode Control for a Half-Wave Zero-Current-Switching Quasi-Resonant Buck Converter.
- Isolated buck converters: full-bridge and half-bridge topologies; forward converter with transformer equivalent circuit; operation of the secondary-side diodes; volt-second balance on the output filter inductor.
- Small-signal converter modeling and frequency-dependent behavior in controller synthesis, Dr. Carsten Nesgaard: small-signal approximation; voltage-mode controlled buck; converter transfer functions; dynamics of switching networks; voltage-mode controller design.
- Buck converter in continuous conduction: solve for the output voltage via inductor volt-second balance, with the switch closed for time DT and open for time (1−D)T.
- Augmented buck converter: to approximate an ideal voltage source (fixed voltage, any current), add energy paths that can supply or sink energy during a change in the load (not in steady state).
- Ripple regulator in a buck converter: output-voltage ripple magnitude and frequency can be varied by choice of components, often around 40 mV.
- Buck converter operating modes: continuous vs. discontinuous conduction; efficiency improvement; small-signal model via the PWM-switch model, then perturb and linearize the equations.
- Preliminary power-supply topologies: isolated (push-pull with half/full bridge, flyback) and non-isolated (buck, boost, buck-boost); boost converter operating modes.
- Chapter 3, DC to DC Converters outline: 3.1.1 buck converter (step-down), 3.1.2 boost converter (step-up).
- Introduction to Push-Pull and Cascaded Power Converter Topologies, Bob Bell: buck regulator family lines; push-pull controller; cascaded push-pull and cascaded half-bridge topologies.
- POL project (Claire Mc Kenna, continued): compare the industry-standard DC-DC voltage regulator module (the interleaved buck converter) with a conventional power supply.
- ELC4345, Fall 2013, DC−DC Buck/Boost Converter.
- Switching regulators summary: the buck steps voltage down only, the boost steps up only, and the buck-boost steps up or down; design considerations.
- 48 V to 1.2 V buck converter example: a high step-down ratio requires a small duty cycle and hence a small on-time; the goal is to examine how a buck converter with GaN FETs performs in this regime.
- International Rectifier IR2110(S)/IR2113(S) high- and low-side driver: typical connection for a half-bridge inverter; application in a buck converter.
- Wind-power system block diagram: Ginlong generator and turbine, input circuit, buck converter, battery-charging circuit, sensor, control, processor.
- Discontinuous conduction: ILB is the critical current below which the inductor current becomes discontinuous; in steady state with discontinuous inductor current (it goes to zero for a time), the output voltage depends on the load current.
- Microchip digital power kits: support dual-buck and single-boost stage conversion; example software for a digital dual-synchronous buck converter and a boost converter; daughter board for the 16-bit 28-pin starter board ($79.99) and the modular Explorer 16 board ($129.99); Digital Power Block Set examples with 2p2z and PID compensators for the buck converter.
- Discontinuous-mode behavior: when the inductor has discharged all its energy, its current falls to zero and tends to reverse, but the diode blocks conduction in the reverse direction; in this state both the diode and the transistor are off.
- A method for fast electrothermal transient analysis of a buck converter, Krzysztof Górecki and Janusz Zarębski, Department of Marine Electronics.
- Battery system: the buck circuit takes the voltage generated by the boost stage down to the cells, with a negative feedback loop; the buck converter supplies all necessary voltages and currents; battery management system communication; USB-SPI Processing GUI; integration test.
- Project objective: develop a bi-directional buck/boost DC-DC converter between the regenerative energy storage system (RESS) and the DC link (traction drive inverter).
- Detector powering: buck converter with r = 1/4 close to the module vs. charge pump with r = 1/2 per module; seems NOT compatible with some pT-modules (technology, space for passives); a possible useful application is provision of Udig for the CBC (conversion ratio 4:3, current ~20 mA).
- Design approach: convert AC to DC using a bridge rectifier, then use a buck converter to vary the voltage to the desired output, Vo = Vs·D; for the flyback converter, Vo = Vs·(D/(1−D)), with turns ratio N2...
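The steady-state CCM relations quoted repeatedly in these listings (Vo = Vs·D from volt-second balance, the inductor ripple, and the critical current at the continuous/discontinuous boundary) can be sketched numerically. The input and output voltages follow the 39 Vdc → 13 Vdc example above; the switching frequency, inductance, and load current are hypothetical:

```python
# Ideal CCM buck converter, steady state.
Vs = 39.0        # input voltage (V) -- the 39 Vdc -> 13 Vdc example
Vo = 13.0        # output voltage (V)
fs = 100e3       # switching frequency (Hz), hypothetical
L = 100e-6       # filter inductance (H), hypothetical
Io = 2.0         # load current (A), hypothetical

D = Vo / Vs                         # duty cycle from Vo = Vs * D
Ts = 1 / fs                         # switching period
delta_iL = (Vs - Vo) * D * Ts / L   # peak-to-peak inductor ripple current
Io_crit = delta_iL / 2              # below this load current, conduction
                                    # becomes discontinuous (ILB above)

print(round(D, 4), round(delta_iL, 3), round(Io_crit, 3))
```

With these values D ≈ 1/3 and the ripple is well under the 2 A load, so the converter stays in CCM; dropping the load below Io_crit is exactly the transition the discontinuous-mode entries above describe, where Vo starts to depend on load current.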
What are the limit points of this set?

January 11th 2010, 05:24 AM, #1 (Junior Member, joined Aug 2009)
What are the limit points of the set containing x^n, where n ranges over the positive natural numbers, for each x that is a member of the interval [-1, 1]?

January 11th 2010, 06:38 AM, #2 (joined Oct 2009)
$x^n = 1 \Longleftrightarrow x = 1$, or $x = -1$ and $n$ even; $x^n = -1$ if $x = -1$ and $n$ odd; and $x^n \rightarrow 0$ if $|x| < 1$. Now prove the above and you get your answer.

January 11th 2010, 06:55 AM, #3 (Junior Member, joined Aug 2009)
I have already done this; it is more the concept of limit points I don't understand. I believe 0 to be one for every value not equal to -1 or 1, but I can't see what the others would be.
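A quick numerical check of the claims in the thread (illustrative only): for |x| < 1 the powers x^n shrink toward 0, so 0 is a limit point of {x^n}, while for x = ±1 the set of powers is finite and has no limit point.

```python
# For each base x, list the powers x**1 .. x**n_max and inspect the tail.
def powers(x, n_max):
    return [x ** n for n in range(1, n_max + 1)]

assert abs(powers(0.5, 60)[-1]) < 1e-12       # 0.5**60 is essentially 0
assert abs(powers(-0.9, 500)[-1]) < 1e-12     # oscillates in sign, shrinks to 0
assert set(powers(1.0, 10)) == {1.0}          # the set {x**n} is just {1}
assert set(powers(-1.0, 10)) == {1.0, -1.0}   # the set is {-1, 1}, finite
```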
Created by Rohayah Ramli. Why quadrilaterals? This is an interesting topic in geometry because you can identify each type of quadrilateral just by looking at its diagram. No memorization is involved if you can remember what a rectangle, square, parallelogram, rhombus, and trapezoid look like. Have fun!
Ron Point: Probability on the GMAT

Areas such as calculus and linear algebra are usually only covered in college, putting students at varying levels of exposure depending on which programs they chose to study. Therefore, topics covered on the GMAT tend to come from high school level math, in order to give everyone a fair starting point. However, the problem with high school is that a lot of the learning is based on memorization and repetition. While these two procedures are important in the learning process, they are building blocks, not end points. To that point, much of math in high school is based on memorizing and applying formulae (“formulas” is also an acceptable plural for formula, but I prefer the more apposite formulae). Mechanically applying the correct formula usually yields the correct answer; for example, the probability of seeing at least one heads on two consecutive coin flips is P(Heads on first) + P(Heads on second) – P(Heads on both). Identifying the correct probabilities for each occurrence will yield the correct answer, but understanding why the chosen formula works is more crucial to getting these questions right on the GMAT. Automatically applying formulae without understanding what is being asked is a typical GMAT trap given that the exam is about logic and reasoning. Consider the following probability question:

There is a 50% chance Jen will visit Chile this year, while there is a 25% chance that she will visit Madagascar this year. What is the probability that Jen will visit either Chile or Madagascar this year, but NOT both?

A. 25%
B. 50%
C. 62.5%
D. 63.5%
E. 75%

Putting aside her peculiar travel schedule for a moment (visiting western South America and southeast Africa without stopping by the Falklands?), what is the probability that Jen travels to one of these locales but not the other?
Dutifully applying our memorized probability formula: the probability of visiting Chile is 0.5, and the probability of visiting Madagascar is 0.25, so we get 0.5 + 0.25 – (0.5*0.25) = 0.75 – 0.125 = 0.625, or answer choice C. (As Admiral Ackbar tried to warn us in 1983: It’s a Trap!)

But why doesn’t the formula yield the correct choice? The answer lies in the question stem. This particular question asks us the probability of visiting Chile or Madagascar, but not both. The formula gives us the probability of visiting either, implicitly allowing the choice of visiting both. The probability of visiting either will indeed be 62.5% (or 5/8), but this is the probability of visiting Chile, Madagascar, or both. Verifying the converse, the probability of visiting neither is P(¬Chile) * P(¬Madagascar), or 0.5 * 0.75 = 0.375 (or 3/8), confirming that our 0.625 is merely the probability of not staying home this year.

The obvious question now is: Why doesn’t the formula work? Didn’t the formula already account for the possibility of both? How do I solve this question correctly? (Admittedly, these are three questions and not one.) The key to answering all of them is the same, though. Let’s go through the logic of the formula P(C) + P(M) – P(C&M):

1. The first argument allows for all possibilities of visiting Chile, regardless of what happens with Madagascar.
2. The second argument allows for all possibilities of visiting Madagascar, regardless of what happens with Chile (or the Falklands).
3. The third argument is the possibility of both occurring.

The formula works because P(C) accounts for the both choice, and P(M) accounts for the both choice as well, indicating that this option has been double counted. In order to count it only once, we need to remove one instance of it. This is why the formula works and is popular; it addresses the inherent problem in probability, double counting certain situations.
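A quick numerical check of that logic (a hypothetical Python sketch, not from the original article), treating the two trips as independent events, confirms that the "exactly one" probability really is 50%, not 62.5%:

```python
import random

pC, pM = 0.50, 0.25

# exact computation: "exactly one" = only Chile + only Madagascar
exact = pC * (1 - pM) + pM * (1 - pC)     # 0.375 + 0.125
assert exact == 0.5

# Monte Carlo confirmation: simulate many independent years and count
# the years in which exactly one of the two trips happens
random.seed(4)
n = 100_000
hits = sum((random.random() < pC) != (random.random() < pM) for _ in range(n))
print(hits / n)   # close to 0.5
```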
The same logic applies to Venn diagrams and other similar question types where counting the same argument twice (or thrice) can occur. In practice, this question could then be solved using the default formula of P(C) + P(M) – P(C&M) and then subtracting P(C&M) again… or simply P(C) + P(M) – 2*P(C&M). Simply put, the first two arguments in the formula account for the “both” possibility twice, so we must remove it twice to answer the question at hand.

A somewhat similar yet more straightforward alternative is to go bottom up instead of top down. The probability of going only to Chile is P(C) * P(¬M) = 0.5 * 0.75 = 0.375. The probability of going only to Madagascar is P(M) * P(¬C) = 0.25 * 0.5 = 0.125. Adding those two probabilities together yields the correct answer of 0.5 or 50%. The revised formula also yields this result (0.5 + 0.25 – 2*(0.125) = 0.5), so we can feel confident in our answer choice and not any of the other tempting answer choices. The answer is (B).

On the GMAT, knowing the formula is often a useful shortcut to get to the correct answer more efficiently than trying to write out all the possibilities or estimating values. However, not knowing the formula is not a guarantee of an incorrect answer, and knowing the formula is not necessarily a guarantee of a correct answer either. It is always important to grasp the underlying logic before blindly applying formulae to a problem. The GMAT is much more often an exam of how you think than what you know.

Ron Awad is a GMAT instructor for Veritas Prep based in Montreal, bringing you occasional tips and tricks for success on your exam. After graduating from McGill and receiving his MBA from Concordia, Ron started teaching GMAT prep and his Veritas Prep students have given him rave reviews ever since.
{"url":"http://www.veritasprep.com/blog/2013/04/ron-point-probability-on-the-gmat/","timestamp":"2014-04-16T21:54:18Z","content_type":null,"content_length":"50438","record_id":"<urn:uuid:3984873b-7be4-4b3d-9502-7e5b5e558cfa>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Hydrogen atoms under the magnifying glass

Posted: May 27, 2013

(Nanowerk News) To describe the microscopic properties of matter and its interaction with the external world, quantum mechanics uses wave functions, whose structure and time dependence is governed by the Schrödinger equation. In atoms, electronic wave functions describe - among other things - charge distributions existing on length-scales that are many orders of magnitude removed from our daily experience. In physics laboratories, experimental observations of charge distributions are usually precluded by the fact that the process of taking a measurement changes a wave function and selects one of its many possible realizations. For this reason, physicists usually know the shape of charge distributions through calculations that are shown in textbooks. That is to say, until now. An international team coordinated by researchers from the Max-Born Institute has succeeded in building a microscope that allows magnifying the wave function of excited electronic states of the hydrogen atom by a factor of more than twenty-thousand, leading to a situation where the nodal structure of these electronic states can be visualized on a two-dimensional detector. The results were published in Physical Review Letters ("Hydrogen Atoms under Magnification: Direct Observation of the Nodal Structure of Stark States") and provide the realization of an idea proposed approximately three decades ago. Figure: (left) two-dimensional projection of electrons resulting from excitation of hydrogen atoms to four electronic states labeled with a set of quantum numbers (n1,n2,m) and having (from top to bottom) 0, 1, 2 and 3 nodes in the wave function for the ξ
= r+z parabolic coordinate; (right) comparison of the experimentally measured radial distributions (solid lines) with results from quantum mechanical calculations (dashed lines), illustrating that the experiment has measured the nodal structure of the quantum mechanical wave function. The development of quantum mechanics in the early part of the last century has had a profound influence on the way that scientists understand the world. Quantum mechanics extended the existing worldview based on classical, Newtonian mechanics by providing an alternative description of the micro-scale world, containing numerous elements that cannot be classically intuited, such as wave-particle duality, the importance of interference and entanglement, the Heisenberg uncertainty principle and the Pauli exclusion principle. Central to quantum mechanics is the concept of a wave function that satisfies the time-dependent Schrödinger equation. According to the Copenhagen interpretation, the wave function describes the probability of observing the outcome of measurements that are performed on a quantum mechanical system, such as measurements of the energy of the system or the position or momenta of its constituents. This allows reconciling the occurrence of non-classical phenomena on the micro-scale with manifestations and observations made on the macro-scale, which correspond to viewing one or more of countless realizations allowed for by the wave function. Despite the overwhelming impact on modern electronics and photonics, grasping quantum mechanics and the many possibilities that it describes continues to be intellectually challenging, and has over the years motivated numerous experiments illustrating the intriguing predictions contained in the theory. 
For example, the 2012 Nobel Prize in Physics was awarded to Haroche and Wineland for their work on the measurement and control of individual quantum systems in quantum non-demolition experiments, paving the way to more accurate optical clocks and, potentially, the future realization of quantum computers. Using short laser pulses, experiments have been performed illustrating how coherent superpositions of quantum mechanical stationary states describe electrons that move on periodic orbits around nuclei. The wave function of each of these electronic stationary states is a standing wave, with a nodal pattern that reflects the quantum numbers of the state. The observation of such nodal patterns has included the use of scanning tunneling methods on surfaces and recent laser ionization experiments, where electrons were pulled out of and driven back towards their parent atoms and molecules by using an intense laser field, leading to the production of light in the extreme ultra-violet wavelength region that encoded the initial wave function of the atom or molecule at rest. About thirty years ago, Russian theoreticians proposed an alternative experimental method for measuring properties of wave functions. They suggested that experiments ought to be performed studying laser ionization of atomic hydrogen in a static electric field. They predicted that projecting the electrons onto a two-dimensional detector placed perpendicularly to the static electric field would allow the experimental measurement of interference patterns directly reflecting the nodal structure of the electronic wave function. The fact that this is so, is due to the special status of hydrogen as nature´s only single-electron atom. 
Due to this circumstance, the hydrogen wave functions can be written as the product of two wave functions that describe how the wave function changes as a function of two, so-called “parabolic coordinates”, which are linear combinations of the distance of the electron from the H+ nucleus “r”, and the displacement of the electron along the electric field axis “z”. Importantly, the shape of the two parabolic wave functions is independent of the strength of the static electric field, and therefore stays the same as the electron travels (over a distance of about half a meter, in this experimental realization) from the place where the ionization takes place to the two-dimensional detector. To turn this appealing idea into experimental reality was by no means simple. Since hydrogen atoms do not exist as a chemically stable species, they first had to be produced by laser dissociation of a suitable precursor molecule (hydrogen di-sulfide). Next, the hydrogen atoms had to be optically excited to the electronic states of interest, requiring another two, precisely tunable laser sources. Finally, once this optical excitation had launched the electrons, a delicate electrostatic lens was needed to magnify the physical dimensions of the wave function to millimeter-scale dimensions where they could be observed with the naked eye on a two-dimensional image intensifier and recorded with a camera system. The main result is shown in the figure below. This figure shows raw camera data for four measurements, where the hydrogen atoms were excited to states with 0, 1, 2 and 3 nodes in the wave function for the ξ = r+z parabolic coordinate. As the experimentally measured projections on the two-dimensional detector show, the nodes can be easily recognized in the measurement. At this point, the experimental arrangement served as a microscope, allowing us to look deep inside the hydrogen atom, with a magnification of approximately a factor twenty-thousand.
Besides validating an idea that was theoretically proposed more than 30 years ago, our experiment provides a beautiful demonstration of the intricacies of quantum mechanics, as well as a fruitful playground for further research, where fundamental implications of quantum mechanics can be further explored, including for example situations where the hydrogen atoms are exposed at the same time to both electric and magnetic fields. The simplest atom in nature still has a lot of exciting physics to offer.
{"url":"http://www.nanowerk.com/news2/newsid=30665.php","timestamp":"2014-04-19T12:00:54Z","content_type":null,"content_length":"46079","record_id":"<urn:uuid:52cd5f12-cc1b-46d4-ab38-70b393b2e865>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptanalysis of MultiSwap

It's not hard to see that, for 32-bit numbers this occurs precisely when bits 15 and 31 of z are 0. (Note: the LSB of z is bit 0). This occurs with probability 1/4. So the probability that a pair of inputs w and 2w produce outputs y and 2y is the probability that the differential is preserved at every swapping step. Since this happens with probability 1/4 at each swap, we expect this to occur with probability (1/4)^4 = 1/256.

So suppose the input pair (w,2w) produces output pair (v,2v). We call the pair (w,2w) a right pair. Then with high probability bits 15 and 31 of k6*w are 0. This is a two-bit condition on k6*w that one can use to filter the set of potential values of k6; 1/4 of all k6 values will pass this test. One can repeat this test for 16 right input pairs (w1,2*w1)...(w16,2*w16) chosen uniformly at random, and the probability of a given k6 value surviving all 16 tests is roughly (1/4)^16 = 2^-32, so we expect about one value of k6 to survive.

We now show that, having determined k6, an attacker can determine k7 using very few additional queries. Note that if (w,2w) is a right pair, then

Once k7 has been determined, the right pairs can be applied to determining k8, and so on. Continuing in this way, we see that k6,...,k10 can all be determined with high probability using fewer than 8192 chosen-plaintexts. An attacker can then apply the same trick to k0,...,k4. Thus the whole cipher can be broken with about 2^14 chosen-plaintexts. This is surprisingly small considering the large key size.

We should now mention that the work factor of breaking the cipher is quite low, as well. Suppose an attacker has right pairs (w1,2*w1),...,(w16,2*w16) which determine k6. By definition of being right, bits 15 and 31 of k6*wi are 0 for all i. These constraints can be translated into nonlinear equations on the bits of k6. Unfortunately, the degree of the equations is as large as 31, so solving them directly is impossible.
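The 1/4-per-swap claim is easy to check empirically. The sketch below (illustrative only, using a plain 16-bit half-swap as the swapping step) verifies that the ×2 differential survives the swap exactly when bits 15 and 31 of the input word are clear, which happens about a quarter of the time:

```python
import random

MASK = 0xFFFFFFFF

def swap16(z):
    # exchange the two 16-bit halves of a 32-bit word
    return ((z << 16) | (z >> 16)) & MASK

random.seed(1)
trials, hits = 100_000, 0
for _ in range(trials):
    z = random.getrandbits(32)
    preserved = swap16((2 * z) & MASK) == (2 * swap16(z)) & MASK
    bits_clear = (z & 0x8000) == 0 and (z & 0x80000000) == 0
    assert preserved == bits_clear   # the "precisely when" claim
    hits += preserved

print(hits / trials)   # close to 0.25, i.e. 1/4 per swap
```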
One could iterate over all possible values of k6, throwing out the ones that don't satisfy the equations, but this will require testing 2^32 keys. Observe that bit 15 of k6*wi is independent of bits 16,...,31 of k6, though. Thus an attacker can try all possible values for the low 16 bits of k6, checking whether they satisfy this equation. After discovering the lower 16 bits, he can then do the same thing for the upper 16 bits. Since we have to test each half of a key against each right pair, the total number of tests performed is 2*2^16*2^4 = 2^21. Repeating for k7,...,k10, and then again for k0,...,k4, yields that the whole cipher can be broken with 10*2^21 ≈ 2^25 tests.

But how expensive is each test? Testing the lower or upper 16 bits of a key against a wi involves multiplying by wi, masking bit 15 (or 31), and testing for 0. This is about 1/8th as expensive as the MultiSwap encryption, which requires 10 multiplies, 10 swaps, and 6 adds. So the work factor is about (2^25)/8 = 2^22 encryptions.

Converting to known-plaintext attack

Recall there are two stages to the attack: recover k5 and k11, and recover the rest of the key. The attack on k5 and k11 can be converted to a known-plaintext attack as follows. Referring to Figures 1 and 2, observe that w = c1-c0+x1. With probability 2^-32, this value is 0, and that situation can be detected. When this happens, c0 = k11. So a set of 2^32 known-plaintexts should suffice to recover k11. Similarly, s0' = c1-c0-s1. Out of a set of 2^32 known-plaintexts, on average one plaintext should satisfy x0+s0 = 0, in which case s0' = k5. Since these two events are roughly independent, an attacker should be able to recover k5 and k11 with 2^32 known-plaintexts.

One can also convert the second stage of the attack to use known-plaintexts. We first have to see that the inputs and outputs of the two halves of the cipher can be isolated. So suppose an attacker knows k5 and k11. First observe that c1 = c0 + s0'.
So the input to Figure 2 can be computed as w = c1 - c0 + x1. Since he knows k11, an attacker can also obviously compute the output, v, of the fragment in Figure 2. For the first half of the cipher, the input is x0 (or x0 + s0, which is known, if used in a chaining mode). The output (immediately after multiplication by k4) is

All that's left is to figure out the number of messages one needs to capture before expecting to have 8192 pairs (w,2w) for the second round and 8192 pairs (w,2w) for the first round. With 2^22.5 known-plaintexts, we get 2^44 pairs, and the probability that any one of these pairs is of the form (w,2w) is 1/2^31. Hence we expect to have 2^13 such pairs. Experiments confirm this estimate. Thus with 2^22.5 known plaintexts, we expect that the 2^22.5 inputs to the second round will contain about 2^13 pairs, enough to recover k6,...,k10. But these same messages yield 2^22.5 inputs to the first round, which should also contain 2^13 pairs. Since these events are independent, one should be able to break the system with 2^22.5 known plaintexts. Detecting the pairs in a set of known plaintexts is easy if the pairs are stored in a hash-table, so the work factor is just 2^25, as above.

A better known-plaintext attack

The known-plaintext attack described above requires 2^32 texts, which seems like a waste since those texts are only required to recover k5 and k11. By performing both halves of the attack simultaneously, one can get by with just 2^22.5 known-plaintexts. Recall that, even without knowledge of k5 and k11, we can derive the input to the cipher fragment in Figure 2 via w = c1-c0+x1. If we extend our differential through the additional swap immediately preceding the addition of k11, we get a differential (w,2w) -> (v,2v) -> (v+k11,2v+k11) with probability 1/2^10. Given such a right pair with outputs c0 and c0', we can compute k11 = c0'-2*c0. So collect 2^22.5 known-plaintexts, and collect from them 2^13 pairs whose input to Figure 2 is (w,2w).
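A toy version of this pipeline can make the hash-table detection and the k11 voting concrete. The sketch below is illustrative only (all sizes and the synthetic traffic are made up, and the real relation involves the actual cipher): it plants some right pairs among random records, detects the (w, 2w) inputs with a hash lookup, and lets each detected pair vote for k11 = c0' - 2*c0.

```python
import random
from collections import Counter

MASK = 0xFFFFFFFF
random.seed(2)
K11 = random.getrandbits(32)              # secret subkey to recover

# synthetic traffic: mostly noise records, plus 40 planted right pairs
# (w, 2w) whose outputs obey c0' = 2*c0 + k11 (mod 2^32)
records = {}
for _ in range(5000):                     # unrelated traffic
    records[random.getrandbits(32)] = random.getrandbits(32)
for _ in range(40):                       # planted right pairs
    w = random.getrandbits(31)            # keep 2w below 2^32 for simplicity
    c0 = random.getrandbits(32)
    records[w] = c0
    records[2 * w] = (2 * c0 + K11) & MASK

# detect (w, 2w) pairs with a hash lookup, then let each pair vote
votes = Counter()
for w, c0 in records.items():
    w2 = (2 * w) & MASK
    if w2 in records:
        votes[(records[w2] - 2 * c0) & MASK] += 1

recovered = votes.most_common(1)[0][0]
assert recovered == K11                   # the right subkey wins the vote
```

Wrong pairs vote for essentially random 32-bit values, so with dozens of right pairs the true k11 stands out immediately, mirroring the "suggested 8 times versus once or twice" argument above.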
Each such pair suggests a candidate for k11 = c0'-2*c0. The right value of k11 will be suggested once for each right pair, or 2^13/2^10 = 8 times. Wrong pairs will suggest a random value for k11, and so no other value for k11 should be suggested more than once or twice. With k11, one can use the previously described attack to recover k6,...,k10. This attack can then be repeated for k5, and then for k0,...,k4. The total work factor is about the same as for the previous attacks. The storage is also quite small, since we don't have to keep a counter for every possible value of k11, only the ones suggested by a pair. Since we use only about 2^13 pairs, the storage requirement is about 2^16 bytes.

We have seen that MultiSwap can be broken with a 2^14 chosen-plaintext attack or a 2^22.5 known-plaintext attack, requiring 2^25 work. We believe this shows that MultiSwap is not safe for any use.
{"url":"http://www.cs.sunysb.edu/~rob/multiswap/","timestamp":"2014-04-25T02:50:50Z","content_type":null,"content_length":"13783","record_id":"<urn:uuid:5add056f-46a5-4ca5-833f-8b6d553864e6>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
Haverford Statistics Tutor Find a Haverford Statistics Tutor ...I PROVE my expertise by showing you my perfect 800 score on the SAT Subject Test, mathematics level 2. I'm not too shabby at reading and writing, either. Unlike the one-size-fits-all test-prep courses, and the overly-structured national tutoring companies, I always customize my methods and presentation for the student at hand. 23 Subjects: including statistics, English, calculus, algebra 1 ...Our son earned an A in a 7th grade advanced math class. At the end of the school year, my son asked if he could continue to work with Jonathan over the summer to continue to advance his knowledge of math. I think this speaks volumes about Jonathan’s ability to make a tough subject both fun and interesting. 22 Subjects: including statistics, calculus, writing, geometry ...The problems seem more like puzzles to me. Hopefully, I can turn this "nightmare" into a pleasant learning experience. SWIMMING: As a former competitive swimmer and avid recreational swimmer, I am a total water bug. 20 Subjects: including statistics, reading, algebra 2, biology ...I spent a semester abroad in Guatemala and El Salvador where I taught English and Spanish at the European Academy in San Salvador, and I worked with several non-profits throughout the region (COPREDEH / Paz Joven). I was trained by professional teachers from Georgetown University and other noted ... 26 Subjects: including statistics, Spanish, English, writing ...I first became interested in this area as a monitor for another nursing eligibility exam. Many of the local colleges that have nursing programs use the TEAS exam to determine eligibility for their nursing candidates. The English, Math and Science portions of this exam are similar to many other college entrance exams such as the SAT,and ACT exams. 51 Subjects: including statistics, English, reading, geometry
{"url":"http://www.purplemath.com/Haverford_statistics_tutors.php","timestamp":"2014-04-16T07:34:37Z","content_type":null,"content_length":"24108","record_id":"<urn:uuid:826fdd5c-e747-4d92-af02-aaa7f76d6296>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
harmonic form

A differential form $\omega \in \Omega^n(X)$ on a Riemannian manifold $(X,g)$ is called a harmonic form if it is in the kernel of the Laplace operator $\Delta_g$ of $X$, in that $\Delta \omega = (d + d^\dagger)^2 \omega = 0$.

The basic properties of harmonic forms are described by Hodge theory. See there for details.

• Springer Online Dictionary, Harmonic form (web)

Revised on February 1, 2011 09:25:37 by Urs Schreiber
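One standard consequence worth recording next to the definition (this is classical Hodge theory on a closed Riemannian manifold, not something asserted by the entry above): harmonicity is equivalent to being simultaneously closed and coclosed, by integration by parts:

```latex
\langle \Delta \omega, \omega \rangle
  = \langle (d d^\dagger + d^\dagger d)\,\omega, \omega \rangle
  = \| d^\dagger \omega \|^2 + \| d \omega \|^2,
\qquad \text{hence} \qquad
\Delta \omega = 0 \iff d\omega = 0 \ \text{and} \ d^\dagger \omega = 0.
```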
{"url":"http://www.ncatlab.org/nlab/show/harmonic+form","timestamp":"2014-04-18T18:56:03Z","content_type":null,"content_length":"30370","record_id":"<urn:uuid:c84fcef9-6695-4e25-81c3-4f6a99954108>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/calyne/asked/1","timestamp":"2014-04-16T17:22:11Z","content_type":null,"content_length":"101849","record_id":"<urn:uuid:6e59a2df-91b3-49c5-8a16-58b7a686a355>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
The second illustration of the new functionality in the DifferentialGeometry package for Maple 17 also involves the exceptional Lie algebra G2, but now in the context of infinitesimal holonomy. Let M be an n-dimensional Riemannian manifold with metric g. Let x in M be a fixed point. Then the holonomy group Hol(x) of the Riemannian manifold is the group of linear transformations defined by the parallel transport of vectors around closed loops starting at x. The Lie algebra of the holonomy group is called the infinitesimal holonomy hol(x). According to the Ambrose-Singer Theorem, the infinitesimal holonomy can be computed in terms of the curvature tensor of the metric and the covariant derivatives of the curvature tensor. For many years it was an open problem in differential geometry as to whether or not there was a metric whose infinitesimal holonomy was the exceptional Lie algebra G2. This question was answered in the affirmative by R. Bryant in 1987. In this example, use the new command InfinitesimalHolonomy to verify an example of a metric with G2 infinitesimal holonomy (due to S. Salamon).

First, define coordinates for a 7-dimensional manifold M:

The computations for this example are much more easily performed in an anholonomic frame. Set:

Calculate the structure equations for this co-frame and initialize:

The metric you will study is:

This metric is Ricci flat. The infinitesimal holonomy is 14-dimensional. The holonomy algebra is easier to display if you first use the command CanonicalBasis to transform the output to standard form and then make the change of variables u = t^3. You can now proceed as in Example 1 to show that this Lie algebra is also the exceptional Lie algebra G2.

Finally, you can use CovariantlyConstantTensors and the new command InvariantTensorsAtAPoint to show that the metric above admits a covariantly constant (parallel) 3-form. Any covariantly constant 3-form must be pointwise invariant with respect to the infinitesimal holonomy algebra. First find these invariant 3-forms.
In terms of the original coordinate this becomes:

Every covariantly constant 3-form must be a multiple of this form, so use this as the ansatz which you pass to the command CovariantlyConstantTensors.
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=updates/Maple17/DifferentialGeometry","timestamp":"2014-04-18T00:28:03Z","content_type":null,"content_length":"278898","record_id":"<urn:uuid:2a61b5c4-8caa-41cb-beea-ca8dadd51df0>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Spring Valley, TX Algebra 2 Tutor Find a Spring Valley, TX Algebra 2 Tutor ...I am a loving and patient Christian mom of three children. I have 20 years of experience teaching Algebra and Chinese in elementary and middle school in Taiwan and USA. I also have two years experience teaching Chinese phonics at Evergreen Chinese school. 12 Subjects: including algebra 2, reading, geometry, algebra 1 ...I look forward to helping you (or your child) meet your goals.I've studied the Bible for over 20 years, both independently and in group studies. I spent a year under rigorous tutelage with three experienced preachers in Lufkin, Tx. Afterwards, I spent five years preaching full time in Gladewater, TX and conducting missionary work overseas. 41 Subjects: including algebra 2, chemistry, English, calculus ...I try as much as possible to work in the comfort of your own home at a schedule convenient to you. I operate my business with the highest ethical standards and have consented to a background check if you would like one.I have taught flute and clarinet lessons since the mid-'80s with many student... 35 Subjects: including algebra 2, chemistry, physics, calculus ...These results are immediately fed back to both student and parents so that a preliminary attack plan is known before starting. I have had high school students from Carnegie Vanguard in HISD, Westchester Academy in SBISD, Hightower in FBISD, St. Johns in River Oaks, Awty, Cypress, Jersey Village, Kline, Spring. 36 Subjects: including algebra 2, English, physics, chemistry Most math problems can be solved in 3-5 steps! As a certified math teacher in Cy-Fair and as a tutor, I believe that anyone can learn and understand math. Those problems that appear complicated are just a series of simple concepts woven together. 
16 Subjects: including algebra 2, reading, calculus, GRE Related Spring Valley, TX Tutors Spring Valley, TX Accounting Tutors Spring Valley, TX ACT Tutors Spring Valley, TX Algebra Tutors Spring Valley, TX Algebra 2 Tutors Spring Valley, TX Calculus Tutors Spring Valley, TX Geometry Tutors Spring Valley, TX Math Tutors Spring Valley, TX Prealgebra Tutors Spring Valley, TX Precalculus Tutors Spring Valley, TX SAT Tutors Spring Valley, TX SAT Math Tutors Spring Valley, TX Science Tutors Spring Valley, TX Statistics Tutors Spring Valley, TX Trigonometry Tutors Nearby Cities With algebra 2 Tutor Bellaire, TX algebra 2 Tutors Bunker Hill Village, TX algebra 2 Tutors Cypress, TX algebra 2 Tutors El Lago, TX algebra 2 Tutors Hedwig Village, TX algebra 2 Tutors Highlands, TX algebra 2 Tutors Hilshire Village, TX algebra 2 Tutors Hunters Creek Village, TX algebra 2 Tutors Iowa Colony, TX algebra 2 Tutors Meadows Place, TX algebra 2 Tutors North Houston algebra 2 Tutors Oak Forest, TX algebra 2 Tutors Piney Point Village, TX algebra 2 Tutors Southside Place, TX algebra 2 Tutors West University Place, TX algebra 2 Tutors
{"url":"http://www.purplemath.com/Spring_Valley_TX_Algebra_2_tutors.php","timestamp":"2014-04-21T15:07:03Z","content_type":null,"content_length":"24567","record_id":"<urn:uuid:a794ff99-f729-4d73-b2bc-306f8fbefe9b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Beam offsets

When defining a beam cast integrally with a slab, I had initially thought the best way to do it would be to offset the beam down to correctly represent the true stiffness of the system. However, upon seeing the results, I was puzzled as to the "saw tooth" shape of the bending moment diagram in the beams, and a lot of membrane force in the slab. After reading the following post http://forums.autodesk.com/t5/Autodesk-Robot-Structural/T-slab/m-p/3185418#M915 I thought I would try to follow the solution offered, i.e. no beam offset. The bending moment in the beam now looks like I expect.

I created a trial model: a cantilever beam and slab with 2 back spans, all equal length. The only load case is self weight. See pictures for comparison; left is no offset, right is offset to match slab level. Note the saw-tooth bending diagram on the right. Membrane force in local x, and panel cut of bending in x direction, overlaid on bending map.

So in the "no offset" model (left), I get peak beam hogging over the first support of 2320kNm, versus only 1485kNm in the centre beam on the right model with offsets. My question is, which method of modelling produces results which are closer to "reality"? I'm running the same system as a monolithic 3D volumetric element in the background as a comparison. I'll post stress maps when it finishes running.
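For intuition on why the offset changes the picture so much, here is a rough sketch (hypothetical dimensions, plain parallel-axis-theorem arithmetic, no claim about Robot's internals) of how much stiffer a monolithic T-section is than the beam rectangle alone:

```python
# hypothetical T-beam: 300x600 web under a 200 thick slab with a
# 1500 effective flange width (all mm); parallel axis theorem only
bw, h = 300.0, 600.0            # beam (web) width and depth
bf, hf = 1500.0, 200.0          # effective flange width, slab thickness

I_beam = bw * h**3 / 12         # beam rectangle alone (no offset model)

# composite T-section about its own centroid (what the offset represents)
A1, y1 = bw * h, h / 2          # beam area, centroid height from soffit
A2, y2 = bf * hf, h + hf / 2    # slab sitting on top of the beam
ybar = (A1 * y1 + A2 * y2) / (A1 + A2)
I_T = (bw * h**3 / 12 + A1 * (ybar - y1)**2
       + bf * hf**3 / 12 + A2 * (ybar - y2)**2)

print(I_T / I_beam)             # roughly 4.5x stiffer acting compositely
```

A section several times stiffer attracts a correspondingly larger share of the moment, which is one reason the two models distribute forces between beam and slab so differently.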
Every Syzygy is a linear combination of pair-wise Syzygies I'm working on understanding Gröbner bases. I've understood how to show existence and uniqueness (of reduced Gröbner bases). To understand how to actually compute them, I need to understand syzygies in free modules. The theorem reads thus: in a ring of multivariate polynomials over a field, if S = (s_1, s_2, s_3, ..., s_n) is a syzygy of (m_1, m_2, m_3, ..., m_n), where every m_i is a monomial, then S is a linear combination of the canonical pair-wise syzygies. I've been trying to get some headway on this proof for a week now, with little success. Any comments or hints appreciated! Thank you!
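For experimenting, the canonical pair-wise syzygy of two monomials m_i, m_j puts lcm(m_i, m_j)/m_i in slot i and -lcm(m_i, m_j)/m_j in slot j, and it is easy to check mechanically that this annihilates (m_i, m_j). A small sketch (my own illustration, representing monomials as exponent tuples rather than using a CAS):

```python
# Monomials in x, y, z as exponent tuples: (2, 1, 0) stands for x^2 y.
def lcm_mono(m, n):
    """lcm of two monomials: componentwise max of exponents."""
    return tuple(max(a, b) for a, b in zip(m, n))

def div_mono(m, n):
    """Exact division of monomials (assumes n divides m)."""
    return tuple(a - b for a, b in zip(m, n))

def pairwise_syzygy(m_i, m_j):
    """Canonical syzygy sigma_ij: returns (lcm/m_i, lcm/m_j); the second
    coefficient carries a minus sign in the actual syzygy."""
    L = lcm_mono(m_i, m_j)
    return div_mono(L, m_i), div_mono(L, m_j)

m1, m2 = (2, 1, 0), (0, 3, 1)       # x^2 y  and  y^3 z
a1, a2 = pairwise_syzygy(m1, m2)
# Both products equal the lcm, so (lcm/m_i) m_i - (lcm/m_j) m_j = 0:
prod1 = tuple(a + b for a, b in zip(a1, m1))
prod2 = tuple(a + b for a, b in zip(a2, m2))
assert prod1 == prod2 == lcm_mono(m1, m2)
```

The theorem then says any syzygy of a tuple of monomials is a polynomial linear combination of these sigma_ij; a standard proof idea is to cancel the leading slot of S against a suitable sigma_ij and induct.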
[FOM] Eliminability of AC Bas Spitters b.spitters at cs.ru.nl Tue Feb 26 03:29:54 EST 2008 On Monday 25 February 2008 23:45:26 Larry Stout wrote: > Two major theorems in undergraduate mathematics need some form of > choice: > Every vector space has a basis. > Tychonoff's theorem: the product of compact topological spaces is > compact. > Will they do for examples from "ordinary mathematics"? These are standard examples, but they do not motivate the use of the axiom of choice very well. * When using the first lemma to obtain a theorem, the next result one wants to prove is that the result does not depend on the choice of the basis. In general, the use of a basis for a vector space should be avoided. Concretely, results depending on the basis of a Hilbert space do not tend to generalize to operator algebras. * The use of the axiom of choice in Tychonoff's theorem is only needed when working with topological spaces instead of working with locales (pointfree topology). It may be argued that the category of locales is more pleasant than the category of topological spaces. We discussed these issues recently:
In Pursuit of the Traveling Salesman 25 Dec It's a rare pleasure to get a good book ahead of its planned publishing date. In Pursuit of the Traveling Salesman by William Cook, expected at the beginning of 2012, was delivered yesterday right to my door. My first impression is that this is the sort of book that is read in one sitting, from cover to cover. The book is clearly intended for the "general audience" and is sweetly written. The author has an obvious writer's streak. The book narrates the history of the so-called "Traveling Salesman Problem", TSP for short. Occasionally (and rather suitably for Christmas time) it is also known as the Santa Claus Problem. In its TSP incarnation, the problem is to find the shortest route that passes through a given number of cities. For Santa Claus it may be important to make the planned visits in the shortest time possible. When it comes to drilling points of contact on a printed board or a computer chip, it's crucial - to make the chips cheaper - that the robot arm that drills the holes moves along the shortest possible path. Here's a portion of such an itinerary on a board. The whole picture is available elsewhere. In fact, the TSP has numerous practical applications, of which helping campaigning politicians is the most trifling. TSP is NP-complete, which makes it of fundamental importance in Computer Science. It is natural that finding an optimal route between 10 cities takes less time than finding an optimal route between 100 cities. As a rule, as the input for an algorithm grows in size, so do the memory and time requirements needed to execute the algorithm. An algorithm is thought to be good if that dependency is expressed as a polynomial in the size of the problem and bad if the dependency is exponential. "P" in "NP-complete" stands for "Polynomial"; "N" stands for "Non-deterministic". How so? It is one thing to solve a problem and another to check whether or not a submitted solution is correct.
Problems whose solutions can be checked in polynomial time are said to belong to the NP-class. Those whose solutions can be found in polynomial time belong to the P-class. Are the two classes the same? If you still do not know the answer, the Clay Mathematics Institute has announced a $1,000,000 prize for answering this question. The fact that TSP is NP-complete means that the existence of a polynomial-time algorithm for its solution would imply P = NP. Everyone is welcome to try his or her hand. The book In Pursuit of the Traveling Salesman is a first-hand and first-class introduction to the evolution of the TSP, with chapters devoted to related mathematics and algorithmic topics. TSP is really at the heart of much of the research and development of modern computer science, so the author leads the reader through the past and emerging landscape of relevant research up to the very end of the mapped territory. Reading the book looks like an exciting adventure, with the itinerary mapped for the reader by a master story-teller whose work squarely places him in the forefront of TSP research. In the words of W. Cook: "I plan to take the reader on a path that goes well beyond basic familiarity with the TSP, moving right up to current theory and state-of-the-art solution machinery." For more information on TSP, check a dedicated site. Bill Cook made available a free iPhone application, Concorde TSP Solver, that solves the 24-year-old 2392-city example in under 7 minutes on an iPhone 4. Back then, a supercomputer set a world record by solving the same problem in 23 hours.
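For tiny instances the problem can of course be solved by checking every tour, which makes the exponential growth concrete. A minimal sketch (this is exhaustive search, not the cutting-plane and branch-and-bound machinery that Concorde actually uses):

```python
from itertools import permutations
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(points):
    """Optimal tour by exhaustive search: fixing city 0 as the start leaves
    (n-1)! candidate tours for n cities."""
    n = len(points)
    best = min(((0,) + p for p in permutations(range(1, n))),
               key=lambda t: tour_length(points, t))
    return best, tour_length(points, best)

# Four corners of a unit square: the optimal tour is the square itself.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour, length = brute_force_tsp(cities)
print(tour, length)  # optimal length is 4.0
```

Even after fixing the starting city, 100 cities would mean 99! tours, which is why the linear-programming machinery the book describes matters.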
Glenn Heights, TX Statistics Tutor Find a Glenn Heights, TX Statistics Tutor ...I was required to teach math essentials, reading essentials, SAT/ACT prep, as well as homework support for students in grades K-12. I also had a handful of college students and adults who would come to the center for homework support. At Sylvan Learning Center I acquired the necessary skills t... 23 Subjects: including statistics, reading, chemistry, physics ...I graduated from New Tribes Bible Institute with the equivalent of an associate's degree in Biblical studies. My greatest joy is to share what I have learned from the Bible with others. I still study the Bible on a daily basis. 40 Subjects: including statistics, chemistry, reading, elementary math ...I graduated from the University of Texas at Austin in 2010 and double-majored in both History and English with a minor in Business Foundations. I graduated with a 3.4 GPA and completed undergrad coursework in Spanish, English, Linguistics, American, Latin-American, European and Asian History, Ac... 40 Subjects: including statistics, Spanish, English, chemistry ...I started using UNIX on an ATT 3B2 computer in 1984 while a physics professor at SMU. In 1990 I started work as a scientist at the Superconducting Super Collider. All of our research computers were UNIX and my desktop was a SUN SPARC workstation. 25 Subjects: including statistics, chemistry, calculus, physics ...I WILL NOT tutor graduate level statistics whatsoever. Taught and tutored all concepts related to Algebra I: linear equations, quadratic equations, systems of equations, scatterplots, inequalities, TAKS preparation, multi-step equations, exponential properties and equations, applied algebra (word p...
40 Subjects: including statistics, reading, algebra 1, English
On the use of evidence in neural networks
Results 1-10 of 20:
1. (cited by 67, 1 self) I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants and noise levels. In the 'evidence framework' the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters. The optimized ...
2. IEEE Trans. Image Processing, 1999 (cited by 65, 26 self). In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence and an iterative approach resulting from the set theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally the proposed algorithms are tested experimentally.
3. (cited by 40, 7 self) This paper discusses the intimate relationships between the supervised learning frameworks mentioned in the title. In particular, it shows how all those frameworks can be viewed as particular instances of a single overarching formalism. In doing this many commonly misunderstood aspects of those frameworks are explored. In addition the strengths and weaknesses of those frameworks are compared, and some novel frameworks are suggested (resulting, for example, in a "correction" to the familiar bias-plus-variance formula).
4. IEEE Transactions on Neural Networks, 1995 (cited by 36). MacKay's Bayesian framework for backpropagation is a practical and powerful means to improve the generalisation ability of neural networks. It is based on a Gaussian approximation to the posterior weight distribution. The framework is extended, reviewed and demonstrated in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and a detailed and explicit procedure for adjusting several weight decay parameters is given. Bayesian backprop is applied in the prediction of fat content in minced meat from near infrared spectra. It outperforms "early stopping" as well as quadratic regression. The evidence of a committee of differently trained networks is computed, and the corresponding improved generalisation is verified. The error bars on the predictions of the fat content are computed. There are three contributors: the random noise, the uncertainty in the weights, and the deviation among the committee members. The Bayesian framework is ...
5. Neural Computation, 1994 (cited by 17). Standard techniques for improved generalisation from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy indicates a Laplace rather than a Gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error, (2) those failing to achieve this sensitivity and which therefore vanish. Since the critical value is determined adaptively during training, pruning---in the sense of setting weights to exact zeros---becomes a consequence of regularisation alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a Gaussian regulariser. Neural networks designed for regression or classification need to be trained using some form of stabilisation or re...
6. The Danish Meat Research Institute, Maglegaardsvej 2, DK-4000, 1993 (cited by 14). MacKay's Bayesian framework for backpropagation is a practical and powerful means of improving the generalisation ability of neural networks. The framework is reviewed and extended in a pedagogical way. The notation is simplified using the ordinary weight decay parameter, and the noise parameter β is shown to be nothing more than an overall scale. A detailed and explicit procedure for adjusting several weight decay parameters is given. Pruning is incorporated into the Bayesian framework. Appropriate symmetry factors on sparse architectures are deduced. Bayesian weight decay is demonstrated using artificial data generated by a sparsely connected network. Pruning yields computational advantages: by removing unimportant weights the posterior weight distribution becomes Gaussian, and pruning removes zero-modes of the Hessian and redundant hidden units. In addition, pruning improves generalisation. The Bayesian evidence is used as a stop criterion for pruning. Bayesian backprop is applied ...
7. 1995 (cited by 13). ... this document. Before these are discussed however, perhaps we should have a tutorial on Bayesian probability theory and its application to model comparison problems. 2 Probability theory and Occam's razor ...
8. Advances in Neural Information Processing Systems 6, 1994 (cited by 13, 5 self). In the conventional Bayesian view of backpropagation (BP) (Buntine and Weigend, 1991; Nowlan and Hinton, 1994; MacKay, 1992; Wolpert, 1993), one starts with the "likelihood" conditional distribution P(training set = t | weight vector w) and the "prior" distribution P(w). As an example, in regression one might have a "Gaussian likelihood", P(t | w) ∝ exp[-χ²(w, t)] ≡ ∏_i exp[-{net(w, t_X(i)) - t_Y(i)}² / 2σ²] for some constant σ. (t_X(i) and t_Y(i) are the successive input and output values in the training set respectively, and net(w, .) is the function, induced by w, taking input neuron values to output neuron values.) As another example, the "weight decay" (Gaussian) prior is P(w) ∝ exp(-α w²) for some constant α. Bayes' theorem tells us that P(w | t) ∝ P(t | w) P(w). Accordingly, the most probable weight given the data - the "maximum a posteriori" (MAP) w - is the mode over w of P(t | w) P(w), which equals the mode over w of the "cost function" ...
9. Maximum Entropy and Bayesian Methods Conference, 1994 (cited by 7, 1 self). The "evidence" procedure for setting hyperparameters is essentially the same as the techniques of ML-II and generalized maximum likelihood. Unlike those older techniques however, the evidence procedure has been justified (and used) as an approximation to the hierarchical Bayesian calculation. We use several examples to explore the validity of this justification. Then we derive upper and (often large) lower bounds on the difference between the evidence procedure's answer and the hierarchical Bayesian answer, for many different quantities. We also touch on subjects like the close relationship between the evidence procedure and maximum likelihood, and the self-consistency of deriving priors by "first-principles" arguments that don't set the values of hyperparameters. "Any inference must be based on strict adherence to the laws of probability theory, because any deviation automatically leads to inconsistency." - S. Gull, in [5]. "(Some have) estimated alpha from the data and then procee...
10. 1999 (cited by 4). Covariance matrices are important in many areas of neural modelling. In Hopfield networks they are used to form the weight matrix which controls the autoassociative properties of the network. In Gaussian processes, which have been shown to be the infinite neuron limit of many regularised feedforward neural networks, covariance matrices control the form of the Bayesian prior distribution over function space. This thesis examines interesting modifications to the standard covariance matrix methods to increase the functionality or efficiency of these neural techniques. Firstly the problem of adapting Gaussian process priors to perform regression on switching regimes is tackled. This involves the use of block covariance matrices and Gibbs sampling methods. Then the use of Toeplitz methods is proposed for Gaussian process regression where sampling positions can be chosen. A comparison is made between Hopfield weight matrices and sample covariances. This allows work on sample covariances to be used ...
Wilcox, CA Math Tutor Find a Wilcox, CA Math Tutor ...Air Force Academy. I love working with students and have experience teaching a wide range of classes from pre-Algebra to Advanced Engineering Mathematics. I am currently at Fuller Seminary in Pasadena preparing for a career in community work as a pastor. 9 Subjects: including geometry, ACT Math, algebra 1, algebra 2 ...I have helped over 100 students who have been behind in reading and writing, due to dyslexia, improve significantly. The method I use most often, which has been very effective, is Orton-Gillingham. I have been working with it for the past eight years, and it is the right method for 95% of my students. 16 Subjects: including prealgebra, algebra 1, reading, writing ...Different strategies work for different people, so I will always take the time to find the perfect method of instruction. Patience is one of my stronger virtues, so you can be sure that I will never give up on a student. Anybody can learn, and I will take as much time as needed to help everyone... 24 Subjects: including SAT math, algebra 1, algebra 2, biology ...Furthermore, I have experience with kids specifically, because I worked in a daycare for about two years with kids ranging from babies to fifth graders. Also, I've worked as a writing counselor at drama camps and taught improv acting. I love getting people excited about learning, and I love making things fun. 25 Subjects: including prealgebra, SAT math, algebra 1, algebra 2 ...It has been proven that people learn something more quickly when they enjoy what they're learning. I promise to help you do just that - find joy in any given subject. Much like a handrail on a staircase, I do my best to guide a student, keeping them balanced as they go through each lesson.
13 Subjects: including algebra 2, algebra 1, prealgebra, English
Lessons: Tax - Partnership Taxation
This lesson is best used after studying the material in class. Unlike many of CALI's other lessons, this lesson takes a more problem-oriented approach to learning the material. The lesson provides students with additional problem sets to work through, allowing students to refine their ability to apply the Code and Regs. to a variety of situations. 25 minutes
The same description is repeated verbatim for every other lesson in the series; only the durations differ: 15, 15, 20, 15, 20, 15, 20, 25, 30, 20, 15, 15, 25, 25, 30, 15, 15, 15, 20, 20, 35, 25, 20, 30, 15, 15, 10, 20, 20, 20, 15 minutes.
The lesson provides students with additional problem sets to work through, allowing students to refine their ability to apply the Code and Regs. to a variety of situations. 25 minutes This lesson is best used after studying the material in class. Unlike many of CALI's other lessons, this lesson takes a more problem oriented approach to learning the material. The lesson provides students with additional problem sets to work through, allowing students to refine their ability to apply the Code and Regs. to a variety of situations. 20 minutes This lesson is best used after studying the material in class. Unlike many of CALI's other lessons, this lesson takes a more problem oriented approach to learning the material. The lesson provides students with additional problem sets to work through, allowing students to refine their ability to apply the Code and Regs. to a variety of situations. 20 minutes This lesson is best used after studying the material in class. Unlike many of CALI's other lessons, this lesson takes a more problem oriented approach to learning the material. The lesson provides students with additional problem sets to work through, allowing students to refine their ability to apply the Code and Regs. to a variety of situations. 20 minutes
Control.Monad.Trans.Resource

Allocate resources which are guaranteed to be released. For more information, see http://www.yesodweb.com/blog/2011/12/resourcet.

One point to note: all registered cleanup actions live in the base monad, not the main monad. This allows both more efficient code, and for monads to be transformed.

Data types

data ResourceT m a

The Resource transformer. This transformer keeps track of all registered actions, and calls them upon exit (via runResourceT). Actions may be registered via register, or resources may be allocated atomically via with or withIO. The with functions correspond closely to bracket.

Releasing may be performed before exit via the release function. This is a highly recommended optimization, as it will ensure that scarce resources are freed early. Note that calling release will deregister the action, so that a release action will only ever be called once.

Instances:

    MonadTrans ResourceT
    MonadBase b m => MonadBase b (ResourceT m)
    MonadBaseControl b m => MonadBaseControl b (ResourceT m)
    Monad m => Monad (ResourceT m)
    Monad m => Functor (ResourceT m)
    Typeable1 m => Typeable1 (ResourceT m)
    Monad m => Applicative (ResourceT m)
    MonadIO m => MonadIO (ResourceT m)

runResourceT :: Resource m => ResourceT m a -> m a

Unwrap a ResourceT transformer, and call all registered release actions. Note that there is some reference counting involved due to resourceForkIO. If multiple threads are sharing the same collection of resources, only the last call to runResourceT will deallocate the resources.

Resource allocation

with :: Resource m
     => Base m a               -- allocate
     -> (a -> Base m ())       -- free resource
     -> ResourceT m (ReleaseKey, a)

Perform some allocation, and automatically register a cleanup action. If you are performing an IO action, it will likely be easier to use the withIO function, which handles types more cleanly.

withIO :: ResourceIO m
       => IO a                 -- allocate
       -> (a -> IO ())         -- free resource
       -> ResourceT m (ReleaseKey, a)

Same as with, but explicitly uses IO as a base.

Use references

modifyRef :: Resource m => Ref (Base m) a -> (a -> (a, b)) -> ResourceT m b

Modify a value in a reference. Note that, in the case of IO stacks, this is an atomic action.

Special actions

resourceForkIO :: ResourceIO m => ResourceT m () -> ResourceT m ThreadId

Introduce a reference-counting scheme to allow a resource context to be shared by multiple threads. Once the last thread exits, all remaining resources will be released. Note that abuse of this function will greatly delay the deallocation of registered resources. This function should be used with care. A general guideline: if you are allocating a resource that should be shared by multiple threads, and will be held for a long time, you should allocate it at the beginning of a new ResourceT block and then call resourceForkIO from there.

Monad transformation

transResourceT :: Base m ~ Base n => (m a -> n a) -> ResourceT m a -> ResourceT n a

Transform the monad a ResourceT lives in. This is most often used to strip or add new transformers to a stack, e.g. to run a ReaderT. Note that the original and new monad must both have the same Base.

A specific Exception transformer

newtype ExceptionT m a

The express purpose of this transformer is to allow the ST monad to catch exceptions via the ResourceThrow typeclass.

Instances:

    MonadTrans ExceptionT
    MonadTransControl ExceptionT
    MonadBase b m => MonadBase b (ExceptionT m)
    MonadBaseControl b m => MonadBaseControl b (ExceptionT m)
    Monad m => Monad (ExceptionT m)
    Monad m => Functor (ExceptionT m)
    Monad m => Applicative (ExceptionT m)
    (Resource m, MonadBaseControl (Base m) m) => ResourceThrow (ExceptionT m)

Type class/associated types

class (HasRef (Base m), Monad m) => Resource m where

A Monad with a base that has mutable references, and allows some way to run base actions and clean up properly.

type Base m :: * -> *

The base monad for the current monad stack. This will usually be IO or ST.

resourceLiftBase :: Base m a -> m a

Run some action in the Base monad. This function corresponds to liftBase, but due to various type issues, we need to have our own version here.

    :: Base m ()    -- init
    -> Base m ()    -- cleanup
    -> m c          -- body
    -> m c

Guarantee that some initialization and cleanup code is called before and after some action. Note that the initialization and cleanup lives in the base monad, while the body is in the top monad.

Instances:

    Resource IO
    (MonadTransControl t, Resource m, Monad (t m)) => Resource (t m)
    Resource (ST s)
    Resource (ST s)

class Resource m => ResourceUnsafeIO m where

A Resource based on some monad which allows running of some IO actions, via unsafe calls. This applies to IO and ST, for instance.

Instances:

    ResourceUnsafeIO IO
    (MonadTransControl t, ResourceUnsafeIO m, Monad (t m)) => ResourceUnsafeIO (t m)
    ResourceUnsafeIO (ST s)
    ResourceUnsafeIO (ST s)

class (ResourceBaseIO (Base m), ResourceUnsafeIO m, ResourceThrow m, MonadIO m, MonadBaseControl IO m) => ResourceIO m

Instances:

    ResourceIO IO
    (MonadTransControl t, ResourceIO m, Monad (t m), ResourceThrow (t m), MonadBaseControl IO (t m), MonadIO (t m)) => ResourceIO (t m)

class Resource m => ResourceThrow m where

A Resource which can throw exceptions. Note that this does not work in a vanilla ST monad. Instead, you should use the ExceptionT transformer on top of ST.

Instances:

    ResourceThrow IO
    ResourceThrow m => ResourceThrow (MaybeT m)
    ResourceThrow m => ResourceThrow (ListT m)
    ResourceThrow m => ResourceThrow (IdentityT m)
    (Resource m, MonadBaseControl (Base m) m) => ResourceThrow (ExceptionT m)
    (Monoid w, ResourceThrow m) => ResourceThrow (WriterT w m)
    (Monoid w, ResourceThrow m) => ResourceThrow (WriterT w m)
    ResourceThrow m => ResourceThrow (StateT s m)
    ResourceThrow m => ResourceThrow (StateT s m)
    ResourceThrow m => ResourceThrow (ReaderT r m)
    (Error e, ResourceThrow m) => ResourceThrow (ErrorT e m)
    (Monoid w, ResourceThrow m) => ResourceThrow (RWST r w s m)
    (Monoid w, ResourceThrow m) => ResourceThrow (RWST r w s m)

class Monad m => HasRef m where

A base monad which provides mutable references and some exception-safe way of interacting with them. For monads which cannot handle exceptions (e.g., ST), exceptions may be ignored. However, in such cases, scarce resources should not be allocated in those monads, as exceptions may cause the cleanup functions to not run. The instance for IO, however, is fully exception-safe. Minimal complete definition: Ref, newRef', readRef' and writeRef'.

Instances:

    HasRef IO
    HasRef (ST s)
    HasRef (ST s)
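The register-then-release-on-exit discipline described above is not Haskell-specific. As a rough analogue (an illustration only, not part of this library), Python's contextlib.ExitStack keeps the same kind of cleanup registry that runResourceT maintains: cleanup callbacks are registered as resources are acquired, and all of them run when the block exits, even on an exception.

```python
from contextlib import ExitStack

log = []

def acquire(name):
    log.append("acquire " + name)
    return name

def release(name):
    log.append("release " + name)

# ExitStack plays the role of runResourceT's bookkeeping: callbacks
# registered while the block runs are all invoked when the block
# exits, in reverse order of registration.
with ExitStack() as stack:
    a = acquire("a")
    stack.callback(release, a)
    b = acquire("b")
    stack.callback(release, b)

print(log)
```

Note one difference in spirit: release in this library lets you free a scarce resource early and deregisters the action; with ExitStack the closest analogue is popping callbacks off the stack yourself before exit.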
Do all subtraction-free identities tropicalize?

If you take a subtraction-free rational identity like $(xxx+yyy)/(x+y)+xy=xx+yy$ and replace $\times$, $/$, $+$, $1$ by $+$, $-$, min, $0$, do you always get a valid min,plus,minus identity like min(min($x+x+x,y+y+y$)$-$min($x,y$),$\:x+y$)$\ =\ $min($x+x,y+y$)?

Comment: Colin and Will's answers are both convincing; thanks! One thing more I'd appreciate is a literature reference that I can cite if I publish something that makes use of this transfer principle. Given how straightforward the proof is (at least in hindsight), it's likely that some version of this result appears in some existing book or article. – James Propp, Apr 10 '13

First answer (accepted):

It suffices to show that whenever $F$ is a function $\mathbb R_{\geq 0}^k\to\mathbb R_{\geq 0}$ defined using $\times,/,+,1$, and $f$ is the corresponding tropicalization $\mathbb R^k\to\mathbb R$, then for all real $x_1,\dots,x_k$ we have $$F(\exp(-\beta x_1),\dots,\exp(-\beta x_k))^{1/\beta}\to \exp(-f(x_1,\dots,x_k))$$ as $\beta\to+\infty$. This follows by structural induction on the formula defining $f$, using the following lemma.
Lemma: Let $F$ be one of the operations $\times,/,+$, let $f$ be the corresponding tropical operation, and let $G_1$ and $G_2$ be functions $\mathbb R_{> 0}\to\mathbb R_{> 0}$ such that $G_1(\beta)^{1/\beta}$ tends to some limit $e^{-x_1}$ as $\beta\to\infty$, and likewise $G_2(\beta)^{1/\beta}\to e^{-x_2}$. Then $$F(G_1(\beta),G_2(\beta))^{1/\beta}\to \exp(-f(x_1,x_2))$$ as $\beta\to+\infty$.

Proof: The operations $\times$ and $/$ are easy; the only non-trivial step is $$(G_1(\beta)+G_2(\beta))^{1/\beta}\to e^{-\min(x_1,x_2)}.$$

Second answer:

Yes. Replace $x$ with $e^{Na}$, $y$ with $e^{Nb}$, etc. Then take the log, then divide by $N$. One gets a new identity where $\times$ is replaced by $+$, $/$ by $-$, $1$ by $0$, and $u+v$ by $\ln (e^{N u} + e^{N v})/N = \max(u,v) + \ln\left( 1+ e^{-N |u-v|}\right)/N = \max(u,v) + O(1/N)$. Then take the limit as $N$ goes to $\infty$. You now have a tropical identity (in the max-plus convention; substituting $e^{-Na}$ instead yields the min-plus form used in the question).

This fits with the idea of tropical geometry as the limit of classical algebraic geometry as variables get very large.
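Both the tropicalized identity from the question and the dequantization limit can be checked numerically. The sketch below is illustrative only (not from the thread); it tests the min-plus identity at random points and then the limit $(e^{-\beta x}+e^{-\beta y})^{1/\beta}\to e^{-\min(x,y)}$ at a large fixed $\beta$.

```python
import math
import random

def tropical_lhs(x, y):
    # min-plus version of (x^3 + y^3)/(x + y) + x*y
    return min(min(3 * x, 3 * y) - min(x, y), x + y)

def tropical_rhs(x, y):
    # min-plus version of x^2 + y^2
    return min(2 * x, 2 * y)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(tropical_lhs(x, y), tropical_rhs(x, y),
                        rel_tol=1e-9, abs_tol=1e-12)

# Dequantization limit from the accepted answer:
# (e^{-bx} + e^{-by})^{1/b} -> e^{-min(x, y)} as b grows.
x, y, b = 1.3, 0.7, 200.0
approx = (math.exp(-b * x) + math.exp(-b * y)) ** (1 / b)
assert math.isclose(approx, math.exp(-min(x, y)), rel_tol=1e-3)
print("identity and limit both check out")
```

Of course a numeric check is evidence, not a proof; the structural induction in the accepted answer is what actually establishes the transfer principle.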
Can one calculate Ext's between microlocalized perverse sheaves/D-modules using topology?

So, I know one really good technique for calculating Ext's between perverse sheaves/D-modules using topology: the convolution algebra formalism, worked out in great detail in the book of Chriss and Ginzburg. This method has some great successes in geometric representation theory: the most popular is probably Springer theory and character sheaves. The rough idea of this technique is that just as the Ext algebra of the constant sheaf on a topological space with itself is the cohomology of the space (and the Yoneda product is cup product), Ext's between pushforwards of constant sheaves can be calculated using the Borel-Moore homology of fiber products, and Yoneda product will again have a realization as convolution product.

Now, I'm interested in pushing this method a bit further to work in the microlocal world. Microlocal perverse sheaves/D-modules are a new geometric category, where one forgets about some closed subset of the cotangent bundle, and declares any map which is an isomorphism on vanishing cycles (which are microlocal stalks) away from this locus to be an isomorphism.

My question: If I have a constant sheaf (in D-module language, the D-module of functions) on a smooth variety, or maybe a pushforward of one, is there some way of calculating the Ext's in the microlocal category topologically as well, hopefully using the topology of the characteristic variety?
Possible Answer

The Distance Formula is a variant of the Pythagorean Theorem that you used back in geometry.

The distance formula can be obtained by creating a triangle and using the Pythagorean Theorem to find the length of the hypotenuse.
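A short sketch of that derivation in code (the function and names are mine, for illustration): the horizontal and vertical differences between the two points are the triangle's legs, and the distance is the hypotenuse.

```python
import math

def distance(p, q):
    # Pythagorean theorem: d = sqrt((x2 - x1)^2 + (y2 - y1)^2)
    (x1, y1), (x2, y2) = p, q
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance((0, 0), (3, 4)))   # the 3-4-5 right triangle gives 5.0
```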
KnittingHelp.com Forum - View Single Post - yarn, weight/yardage If you have the original ball band it will tell you how many ozs and how many yds are in the ball. Then just divide the number of yds from the ball band by the number of ozs from the ball band to get yds per oz. Now multiply that number times the number of ozs you have (e.g. 5.8 oz) and that’s approximately the number of yds you’ve got.
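The same arithmetic as a tiny script. The ball-band numbers here (215 yd in a 4 oz ball) are hypothetical, made up for the example; only the 5.8 oz figure comes from the post.

```python
def estimate_yards(band_yards, band_ounces, ounces_on_hand):
    # yards-per-ounce from the ball band, times the ounces you have
    return band_yards / band_ounces * ounces_on_hand

# Hypothetical band: 215 yd in a 4 oz ball, with 5.8 oz of yarn on hand.
print(estimate_yards(215, 4, 5.8))   # roughly 311.75 yd
```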
Rockland, MA Prealgebra Tutor Find a Rockland, MA Prealgebra Tutor ...I'm able to help students organize writing projects, develop effective defensible theses, find their voice in their writing and improve writing through revision and editing. I have strong verbal and written communication skills with experience as writer, copy editor, and proofreader that transla... 18 Subjects: including prealgebra, reading, English, writing ...I am a certified math teacher with many years teaching experience who will help your child catch up and become proficient at fulfilling the requirements of elementary school math. I am very hands-on, and give students lots of problems to solve for the practice they need to really master the topi... 9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...This translates to the way I have always approached teaching situations, which is to guide students to the answer, but allow them to arrive at it themselves. In this way, they are able to fully grasp the concepts behind the problem and gains a deeper understanding of the subject. Feel free to shoot me an email if you’re interested in studying with me, or if you have any questions. 38 Subjects: including prealgebra, English, chemistry, reading ...Here too I stress active learning, and my goal is usually to get my students to study by challenging themselves with problems rather than simply reading over their text, notes, and previous work. If you are in need of assistance for a student struggling in Physics or Math, I am the man for you. ... 9 Subjects: including prealgebra, calculus, physics, geometry I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College. 
13 Subjects: including prealgebra, chemistry, calculus, geometry
Help Needed with this Trig Equation!

#1, February 3rd 2011, 08:16 PM
Hi, could someone show me how this one's done? Find all solutions for X, satisfying 0 < X < (two pi): 2cotX = tanX. Thanks, your help would be much appreciated!

#2, February 3rd 2011, 08:30 PM
Supposing that $\tan x \neq 0$, you can divide both terms by $\tan x$ and after some steps you arrive at the equation $\tan x = \pm \sqrt{2}$, which has four solutions, one for each quadrant... Kind regards

#3, February 3rd 2011, 08:44 PM
Thanks very much!

#4, February 4th 2011, 04:58 PM
$2-\tan^2x=0\Rightarrow\ (\sqrt{2}+\tan x)(\sqrt{2}-\tan x)=0$
$\tan x$ gives the slope of a line passing through the origin with an angle x, hence the one with positive slope traverses quadrants 1 and 3, while the other traverses quadrants 2 and 4.

#5, February 6th 2011, 06:02 PM
Thanks very much, that's very helpful.
Geometrically connected curve Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. What is the definition of a geometrically connected curve? Seriously, the first hit on google gives you the answer... google.fr/search?q=geometrically+connected Lierre Nov 9 '12 at 13:10 add comment For a variety over a non-algebraically closed field, "geometrically connected" means connected over the algebraic closure. up vote 2 down As an example where this fails, note that the curve $x^2+1=0$ in $\mathbb{A}^2$ is connected over $\mathbb{Q}$, but not over $\mathbb{Q}[i]$, where is becomes $(x+i)(x-i)=0$, which is vote a union of two lines. Hence this curve is connected but not geometrically connected. You can also use the same adjective for many other properties, so that you can talk about something being geoemtrically integral, geometrically rational, etc... I think your example doesn't work, at least in the projective plane, where any two lines meet in a point (in this case in the point $[0:0:1]\in\mathbb{P}^1(\mathbb{Q}[i])$). Qfwfq Nov 9 '12 at 13:01 Projective or not, the two lines in the example meet at $x=y=0$. For an example of a connected, non geometrically connected curve, better consider the affine curve over $\mathbb{Q}$ with affine ring $\mathbb{Q}(\sqrt{2})[x]$. Matthieu Romagny Nov 9 '12 at 13:32 Yes I realise now that I do not properly think through my example as I wrote it in a rush... I have edited it accordingly. Daniel Loughran Nov 9 '12 at 14:10 add comment
{"url":"http://mathoverflow.net/questions/111890/geometrically-connected-curve/111892","timestamp":"2014-04-21T04:54:14Z","content_type":null,"content_length":"53655","record_id":"<urn:uuid:5a0029cc-4c9e-4028-82ee-f7c322472191>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Order of Operations Ppt Presentation PowerPoint Presentation: Ms. Dulavitch SOL 6.8 Math 6 Order of Operations PowerPoint Presentation: Using Order of Operations helps us decide which part of a problem to solve first. For example, in this problem, do you add 9 + 5 first or multiply 5 x 2 first? 9 + 5 x 2 Do you get the same answer either way? Order of Operations PowerPoint Presentation: 9 + 5 x 2 If you add 9 + 5 first, then multiply by 2, your final answer is 28. If you multiply 5 x 2 first, then add 9, your final answer is 19. Which answer is correct? Order of Operations PowerPoint Presentation: 9 + 5 x 2 Answer: 19 How do you know to multiply first? Does it work that way every time? Yes! If you use Order of Operations, you will follow the same steps for every problem and arrive at the correct answer. Order of Operations PowerPoint Presentation: We will use PEMDAS to help us remember the Order of Operations. P arentheses E xponents M ultiplication D ivision A ddition S ubtraction PEMDAS or or PowerPoint Presentation: First, solve every operation in P arentheses. Second, solve every E xponent. Third, solve M ultiplication and D ivision depending on which comes first left to right . Ex 1) 3 x 6 / 2 Multiply first, then divide. Ex 2) 10 / 2 x 3 Divide first, then multiply. PEMDAS PowerPoint Presentation: Fourth, solve A ddition and S ubtraction depending on which comes first left to right . Ex 1) 5 + 6 - 2 Add first, then subtract. Ex 2) 12 - 1 + 3 Subtract first, then add. Let’s try one together! PowerPoint Presentation: Keep PEMDAS in mind! First, solve everything in P arentheses. Our new problem says: Example 1 PowerPoint Presentation: Example 1 There are no E xponents in the problem, so we can skip that step. Next, we M ultiply or D ivide. Last, we A dd or S ubtract. PowerPoint Presentation: Let’s try another one! We can skip the P arentheses step. First, solve the E xponent. Example 2 PowerPoint Presentation: We skip M ultiplication and D ivision. 
Next, we S ubtract, since subtraction comes before addition in the problem. Last, we A dd. Example 2 PowerPoint Presentation: Now it’s your turn! Don’t forget to use PEMDAS! Check your work: Answer: O Practice 1 PowerPoint Presentation: Let’s try one more practice problem! Check your work: Answer: 12 Practice 2 PowerPoint Presentation: If you want some more practice before your quiz, check out the Math is Fun website. You can also watch this Order of Operations rap to help you learn the PEMDAS steps! More PEMDAS PowerPoint Presentation: For the following problem, which operation would you do first? A. Multiplication B. Division C. Addition D. Subtraction Quiz Question 1 PowerPoint Presentation: For the following problem, which operation would you do last? A. Multiplication B. Division C. Addition D. Subtraction Quiz Question 2 PowerPoint Presentation: Solve the problem using Order of Operations: A. 9 B. 13 C. 7 D. - 3 Quiz Question 3 PowerPoint Presentation: Solve the problem using Order of Operations: A. 24 B. 9 C. 4 D. 1 Quiz Question 4 PowerPoint Presentation: Why do you multiply before you divide in the following problem? A. Multiply comes before divide in PEMDAS. B. Multiplication comes first reading left to right. C. The Multiplication problem has an exponent with it. D. Actually, it doesn’t matter whether you multiply or divide first. You get the same answer. Quiz Question 5 PowerPoint Presentation: PowerPoint Presentation: PowerPoint Presentation: Microsoft Office Clipart- http://office.microsoft.com/en-us/images/ Creative Commons- http://creativecommons.org/ Order of Operations Math Rap [Video File] Retrieved from http://www.youtube.com/ watch?v=d0xutl2sUt0 MathIsFun (2012). Order of Operations. Retrieved from http://www.mathsisfun.com/operation-order-pemdas.html Credits
{"url":"http://www.authorstream.com/Presentation/katiedulavitch-1475626-order-operations/","timestamp":"2014-04-19T19:53:29Z","content_type":null,"content_length":"131979","record_id":"<urn:uuid:4ee1572d-3aaa-446b-8367-656a694b6398>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Can we get a definitive verdict in what concerns the Jarque-Bera Test?
Replies: 2. Last Post: Sep 15, 2012 9:55 PM

Posted by Luis A. Afonso (From: Lisbon; Registered: 2/16/05; Posts: 4,518) on Sep 8, 2012 7:20 PM:

The main question: can we assert that a sample could be normal simply because the test-value falls in the acceptance region?

I do not agree... I would say instead: what we can state is that the sum of the Skewness Coefficient and that of the Excess Kurtosis, each one reduced by its sample standard deviation, is in conformity with what normal samples show. NOTE: this statement is about sums, never about each quantity *PER SE*; one of them (not both, of course) could be insupportably high (one-tail test) under the tested hypothesis of normality.

Luis A. Afonso

Thread:
9/8/12 Can we get a definitive verdict in what concerns the Jarque-Bera Test? (Luis A. Afonso)
9/11/12 Re: Can we get a definitive verdict in what concerns the Jarque-Bera Test? (Luis A. Afonso)
9/15/12 Re: Can we get a definitive verdict in what concerns the Jarque-Bera Test? (Luis A. Afonso)
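For reference, the Jarque-Bera statistic combines exactly the two quantities the post discusses, sample skewness S and excess kurtosis K, in the usual form JB = n/6 * (S^2 + K^2/4). The stdlib-only sketch below (my own illustration, not from the thread) makes the post's caveat concrete: because S and K enter only through squares inside a sum, JB alone cannot tell which of the two, or which sign, drove a large value.

```python
import math
import random

def jarque_bera(xs):
    """JB = n/6 * (S^2 + K^2/4), using population central moments."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    s = m3 / m2 ** 1.5        # sample skewness
    k = m4 / m2 ** 2 - 3      # sample excess kurtosis
    return n / 6 * (s ** 2 + k ** 2 / 4)

random.seed(1)
normal = [random.gauss(0, 1) for _ in range(5000)]
skewed = [random.expovariate(1) for _ in range(5000)]
print(jarque_bera(normal), jarque_bera(skewed))  # small vs. very large
```

Under normality JB is asymptotically chi-squared with 2 degrees of freedom, so the normal sample's value should be small, while the exponential sample (skewness 2, excess kurtosis 6 in theory) produces a huge one.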
Volts, Amps

What are Volts, Amps & Watts etc? This page details the most common terms used in electronics and often physics subjects. After reading this page you should be familiar with many of the aspects of electricity and electronics, which should help you to understand many of the other pages on this site.

Electricity is a term used to describe a physical process involving subatomic particles in materials. A battery does not hold electricity, but it is a store of energy that is used to make electrical processes work. Electricity involves the charged particles (electrons and protons) that make up ordinary matter. What we call 'static electricity' is a buildup or lack of electrons, which are negatively charged particles. If there is an excess of electrons on an object then it will have an overall negative charge; if electrons are removed then the object will have an overall positive charge due to the remaining protons. Notice that it is the electrons that are doing the moving around and that the protons stay where they are, fixed in the nucleus of atoms. When the electrons are moving from one place to another we call this an electric current. In a metal the outer electrons of the atoms are 'free' to move around, therefore if it is connected to a battery or another source of e.m.f. the electrons will be repelled from the negative terminal and attracted to the positive terminal, constituting an electric current.

e.m.f. (ElectroMotive Force)

This is the force applied to charged particles such as electrons that will cause them to move. Sources of e.m.f. include chemical reactions (like in batteries), where energy is released allowing electrons to be moved around, or generators, where mechanical energy is converted by using magnetic fields to influence the movement of electrons in metal wires. E.m.f. is measured in Volts (V), which is a measure of the amount of energy per unit of charge (Joules per Coulomb).
Voltage - Volts (V)

Voltage is potential energy per unit of charge, and is measured in Volts. A voltage measurement is taken between two points separated by a dielectric or partly conductive material. A voltage can be measured even when no electrons are moving between the points, as it is a measure of potential energy which will not be released until the electrons begin to flow. Because the measurement is taken between two points, the voltage is also known as a potential difference (p.d.): it is the difference in potential energy between the two points.

For example: a measurement with a voltmeter is taken between the two terminals of a battery and shows a reading of 12V. This just means that one terminal is 12V higher than the other. It could be that one terminal is 0V and the other is +12V, or it could be that one terminal is -6V and the other is +6V. For this situation it does not matter which it is, as either case has a voltage of 12V between the two battery terminals. This is often useful when designing a circuit where we need a +V and a -V. Two batteries connected in series can be considered to have three terminals: one end is +V, the middle where they join is 0V, and the other end is -V. You can also use a combination of resistors with a single battery to do the same thing, which is known as a voltage divider.

Current - Amps (I)

A current is the flow of charged particles (usually electrons) which is normally produced when a source of e.m.f. is applied to a conductor. The current corresponds to the number of electrons flowing in the same direction each second. The amount of current flowing in a conductor is proportional to the applied voltage and inversely proportional to the resistance of the material. For example, if a light bulb is connected to a source of e.m.f. such as a battery, the current flowing through it would be calculated using I = V / R, where I is the current, V is the voltage of the battery, and R is the resistance of the light bulb.
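As a quick sketch of I = V / R, using the 12 V battery and 100 ohm bulb figures from the worked example on this page:

```python
def current(voltage_v, resistance_ohms):
    """Ohm's law: I = V / R, giving current in amps."""
    return voltage_v / resistance_ohms

# 12 V battery driving a 100 ohm bulb:
print(current(12, 100))  # 0.12 A, i.e. 120 mA
```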
This relationship is known as Ohm's Law. You can see from this example that to double the current flowing in the bulb you would need to double the voltage applied to it. In a conductor the electrons flow from the negative terminal towards the positive but, just to confuse things, when talking about electrical currents we say that the current flows from positive to negative.

Direct Current (DC)

This is where the current is flowing in one direction at a constant rate. A battery causes DC to flow in circuits.

Alternating Current (AC)

This is where the current is changing with time, oscillating back and forth. A typical mains outlet causes AC to flow in a circuit. In the UK the frequency at which this current oscillates is 50Hz.

Resistance - Ohms (R)

Resistance is a measure of the restriction of the flow of current through a material. All materials except superconductors have a resistance above zero, and the value is measured in Ohms (Ω). Metals have lots of 'free' electrons, therefore they have a low resistance. In an electrical circuit it is important to use cables with a low enough resistance so that they can adequately carry the necessary current for the application. In high power applications thick wires are used because thicker wires have lower resistance.

Ohm's Law and Watt's Law

Ohm's Law describes the relationship between Voltage (V), Current (I), and Resistance (R). The simple formula can be used to determine one unknown variable if the other two are known. Related to this is Watt's Law, which adds calculations for power (energy per second). For example, if a 12V battery were connected to a 100 ohm load such as a light bulb, the current flowing in the circuit could be calculated using Ohm's Law: I = V/R = 12/100 = 0.12 A (120mA). Another example would be to calculate the power drawn in the circuit.
P = V x I = 12 x 0.12 = 1.44 W
P = V^2/R = 12^2/100 = 1.44 W

Power - Watts (P)

Power is a measure of the overall amount of work being done in a system in relation to time (energy used per second). In an electrical system power can be calculated using the formula P = V x I. From this you can see how the voltage and current in a system relate to the overall amount of power used. The Watt (W) is equivalent to joules per second; one Watt is equal to one joule per second.

Energy - Joules (E)

Energy is a fundamental quantity that every physical system possesses. The quantity of energy available allows us to predict how much work a system could be made to do, or how much heat it can produce or absorb. Any sort of physical change involves energy; the change can be anything such as temperature, movement, voltage, etc.

Reactance - Ohms (X)

Reactance is the measure of opposition to alternating current flow in a circuit. The opposition is caused by the effect of an inductor or capacitor. In a coil of wire (inductor) in an AC circuit, the changing magnetic field produced by the current has the effect of inducing a voltage of opposite polarity to the polarity at that moment in time. This is known as back e.m.f.

Impedance - Ohms (Z)

Impedance is similar to resistance but it is used to describe the total amount of opposition a circuit offers to the flow of alternating current. It is simply a combination of the resistance and the reactance of a circuit. With inductors (wire coils) the impedance is higher at higher frequencies, whereas with capacitors the impedance is lower at higher frequencies.

Inductance - Henries (L)

Inductance is the measure of how well a coil (inductor) or conductor is able to produce a magnetic field from a given current. Inductance is equivalent to the magnetic flux divided by the electrical current. A coil of wire having a high value of inductance would typically be made from a large number of turns of wire.
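The page doesn't give a formula for inductance, but a standard textbook approximation for a long air-cored solenoid is L = μ0 N^2 A / l; the formula and the example coil below are illustrative assumptions, not taken from this page:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, henries per metre

def solenoid_inductance(turns, area_m2, length_m):
    """L = mu_0 * N^2 * A / l for a long air-cored solenoid
    (approximation valid when the length is much larger than the radius)."""
    return MU_0 * turns**2 * area_m2 / length_m

# Illustrative coil (made-up numbers): 500 turns, 1 cm radius, 10 cm long
L = solenoid_inductance(500, math.pi * 0.01**2, 0.10)
print(f"{L * 1000:.2f} mH")  # about 0.99 mH
```

Note how L grows with the square of the number of turns, which is why high-inductance coils have many turns.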
The magnetic field produced by the inductor has the effect of creating a reactance to any change in the current flowing in the coil. If a DC current is passing through a coil there will be a stable magnetic field around it. If the source of e.m.f. is suddenly removed, the magnetic field will collapse, inducing a current back into the coil. In an AC circuit this has the effect of altering the phase relationship between the voltage and current.

Self Inductance is simply the property of a single coil or inductor. It's called self inductance because each turn of wire in the same solenoid will induce a current in itself (and the nearby turns) as the field around it changes.

Mutual Inductance is the total inductance produced by the interaction of two or more inductors. As the field around one coil or inductor changes, it affects other nearby inductors. The strength of the effect the coils have on each other is known as coupling. The formulae below are used to calculate the mutual inductance produced between two solenoids that are magnetically coupled.

M = μ0 N1 N2 π r^2 / l = (L1 L2)^1/2
M = k (L1 L2)^1/2

Induction is a term used to describe how electromagnetic effects are copied from one object to another. When a charge or current in an object is changed, we say that it induces a charge or current in an object to which it is somehow coupled. Although the words 'inductor' and 'inductance' are mostly used when describing magnetic fields and solenoids, the words 'induction' and 'induce(d)' are used to describe both electric and magnetic effects.

For example: Magnetic induction - The output (secondary coil) of a transformer is connected to a light bulb or LED, while the input (primary coil) is repeatedly connected and disconnected from a battery. As the battery is connected or disconnected, there is an abrupt change in the magnetic field created by the current in the primary coil.
This field also surrounds (is coupled with) the secondary coil, and therefore when the primary current changes a current is induced into the secondary coil, causing the LED to briefly light up.

Electric Induction - A metallic ball is fixed centrally between two metal plates. All three objects are separated by several cm of air. If a voltage is applied to the plates so that one plate becomes more negative than the other, there will be an electric field between the plates. The metallic ball has 'free' electrons which can be moved around quite easily (this is why metals conduct). Electrons are negatively charged and will therefore be pushed away from the negative plate (opposite charges attract, like charges repel). This causes there to be more electrons (therefore more negative charge) on one side of the ball than the other. Even though nothing touched the ball it now has a positive and a negative side, and we call this a dipole. More specifically here, the ball has an induced dipole.

Capacitance - Farads (C)

This is the measure of how well a conductor-insulator barrier is able to store energy. It is widely stated that a capacitor stores charge, but this is simply not true; the total charge inside a capacitor is always the same. The capacitor stores energy by keeping separate regions at different levels of charge. The attractive force between the areas of opposite charge in a capacitor is used as the energy source to cause a current to flow when its terminals are attached to an external circuit.

Charge - Coulombs (Q)

Charge is a property possessed by particles and physical objects. In most objects the overall charge is zero, but an object can be made positively or negatively charged by an amount measured in coulombs. A single electron carries a negative charge of about 1.6 × 10^-19 coulombs. An object is considered charged when it has either an excess or a deficit of electrons, giving the object an overall positive or negative charge.
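A quick consequence of the electron charge figure above: you can work out how many electrons make up a given charge.

```python
ELECTRON_CHARGE = 1.602e-19  # magnitude of the electron's charge, in coulombs

def electrons_in_charge(coulombs):
    """How many electron charges make up a given charge in coulombs."""
    return coulombs / ELECTRON_CHARGE

print(f"{electrons_in_charge(1):.3e}")  # about 6.242e+18 electrons per coulomb
```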
A capacitor is considered charged when the charge on its plates is separated within the device, but the overall charge remains constant.

Coupling (k)

This term is used to describe the amount of electromagnetic linkage between two conductors or coils. As the field around one object is changed, it will cause a similar change in other nearby objects. The strength of this change is determined by the coupling coefficient. Moving two objects closer together has the effect of increasing the effect they have on each other, and therefore increasing the coupling coefficient (k). The value 'k' is between 0 and 1, and is simply used as a multiplier to represent the amount of coupling between the objects. In high speed circuits it is important to pay attention to coupling between various tracks on a PCB, as unwanted coupling will interfere with signals, causing data to become corrupt.

Capacitive coupling refers to the coupling between capacitors, or conductors (usually metal) separated by a dielectric (insulator). The coupling occurs because as the electric field around one object changes, it induces a relative change in nearby conductors. The plates in a capacitor are tightly coupled, which is why they can pass alternating currents even though there is a dielectric barrier in the circuit.

Magnetic coupling is used to describe the linkage between two current-carrying conductors such as solenoids or transformer windings. A changing current in one coil of wire will cause a changing current in a nearby coil. The strength of the change is determined by the coupling coefficient. In a transformer the two separate coils (primary and secondary) are tightly coupled by wrapping them around the same ferrous core.

Frequency - Hertz (F)

Frequency is the measure of rate of change and is given in Hertz (Hz). One Hz is equivalent to one full cycle per second. The time taken to complete one full cycle is called the period.
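Frequency and period are reciprocals, T = 1 / f. A one-line sketch using the 50 Hz UK mains figure mentioned on this page:

```python
def period(frequency_hz):
    """Period in seconds of one full cycle: T = 1 / f."""
    return 1.0 / frequency_hz

print(period(50))  # UK mains at 50 Hz: 0.02 s (20 ms) per cycle
```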
Phase - Radians (Θ)

Phase is the measure of the difference in position of two waves or cycles and is measured by the equivalent angle of a wave. The angle is usually given in radians, but it is shown as degrees in the diagram for simplicity. A full cycle of a wave is equivalent to a rotation of 360 degrees or 2π radians.

Pulse Width Modulation (PWM)

See the DIY Power Pulse Controller page.

Anode and Cathode

These are terms used to describe electrodes or electrical terminals in a device. The anode represents the positive terminal whilst the cathode represents the negative. The terminals of diodes are often named anode and cathode.

Dielectric

This is another name for an insulating or non-conductive material. The material between the plates of a capacitor is a dielectric, and the 'dielectric constant' of this material determines the effectiveness of the capacitor.

Ionization

Ionization is a process which occurs when atoms or molecules are highly energised. When an atom has a non-neutral overall charge it is called an ion. This ion can be either positively or negatively charged. During electrolysis, ions are formed which move between the electrodes in a solution and then stop at the electrodes when they are neutralised. Also, high voltage electricity is used to ionize gases so that they will become a plasma. When exposed to a high enough voltage, some electrons can be removed from the atoms so that the atom is positively charged. When the electrons are being ripped away and then pulled back into other atoms, light is released. We make use of this phenomenon in neon lights.

Polarity & Dipoles

Polarity is used to describe the orientation of electric or magnetic fields. The word 'dipole' is used to describe objects which have two ends with opposite charges or magnetic fields. A typical bar magnet is a dipole because it has a North and a South end. If this magnet is turned upside down so that the N and S have swapped places, it is now oriented with the opposite polarity.
This works in exactly the same way for a battery. If the wires from a circuit to a battery are swapped over, then the polarity has been reversed.

Ferrous

This term is used to describe materials with iron (Fe) like properties. Specifically it is the magnetic properties of iron that are being referred to. Ferrous materials can be magnetized and are attracted to magnets. Ferrous materials also exhibit hysteresis.

Hysteresis

This term is used to describe an effect seen in ferrous materials, and is also used to describe a sort of memory effect in analogue circuits. In ferrous materials hysteresis is the 'magnetic memory' of the material. If a solenoid is energized around a piece of iron, a magnetic field will be induced into the metal, increasing the overall field strength. If the polarity of the solenoid is reversed, it must overcome the magnetic field of the iron, which still rests partially in the previous polarity. In a transformer this is an unwanted property, as energy is wasted by having to repeatedly overcome the magnetization of the material. The typical way of representing this property of a material is to draw a graph known as a BH curve. In an electronic circuit we can add an electronic form of hysteresis to help stabilize an unwanted or spurious oscillation. This is commonly used in a thermostat circuit so that a device isn't randomly switched on and off when the temperature is at the threshold level. It is basically a way of separating the 'on temperature' and the 'off temperature'. E.g. a heater comes on when the temperature is below 15 Celsius, and goes off again at 20 Celsius.

Series & Parallel Circuits

When components are connected in series they are connected end to end in sequence. This causes the voltage to be shared between the components whilst the current through each is the same. In a parallel circuit the components are connected with all their similar terminals connected together (e.g.
all positive terminals together and all negative terminals together). Each component will have the same voltage across its terminals.

Analogue Electronics

This term is used to describe circuits which involve mostly varying currents and voltages, like you would find in a radio or amplifier. Analogue systems have a 'fuzzy' design as component tolerances can vary; there is often a lot of tuning and adjusting after a circuit has been made. An analogue circuit can be anything from a simple amplifier to an old TV or radio.

Digital Electronics

These circuits are usually based on transistors and logic devices. Digital circuits can be designed to produce exact and repeatable results. They often use computer chips or microcontrollers for performing calculations. Memory can also be used to allow a sequence of events to be recorded or triggered.

Next Page: Electronic Components
Previous Page: Electronics Menu

Comments and questions for Electronics Terms

The information provided here can not be guaranteed as accurate or correct. Always check with an alternate source before following any suggestions made here.

I would appreciate it if you could explain how 'I' has been used to represent current. Does it stand for someone's name?

'I' is just the letter used to represent current when writing formulae. It does not represent anyone's name. The name associated with current is 'Ampere'. This is a person's surname, but it is usually shortened to 'amps', which is the unit of current (coulombs per second).

Question: why are resistors given a "watt" value when they are measured in ohms of resistance?

When a current flows through a resistor there is a voltage drop across it. This voltage drop causes energy to be dissipated in the resistor as heat. The power rating of a resistor refers to how much heat it can safely dissipate without breaking down or melting.
The power dissipated in a resistor is proportional to the square of the current flowing through it and is calculated using the following formula: P = I^2 x R, i.e. Power (Watts) = Current (Amps) squared times Resistance (Ohms).

I'm in need of information regarding the conversion of watts to voltage, if that is even possible. I purchased a safe sync for my camera that will protect it up to 400 volts. The strobe lights have a 2400 watt capacity. I'm lost and I don't want to fry my camera. Please help.

There is no direct conversion; you need more variables to complete a calculation. See the section on Ohm's / Watt's Law for the formulae. You would be best asking at your local camera dealer, or posting a message in a photography forum.

In a parallel plate capacitor, if I add some more charge to one plate of the capacitor, what would happen to the capacitance?

The capacitance will not change. Adding more charge to one plate will push the same amount of charge out of the other plate (assuming it is in a circuit and the charge has somewhere to go). This will increase the overall amount of energy stored in the capacitor, but the total amount of charge in the whole capacitor will remain the same.

The "I" in Ohm's Law stands for the word INTENSITY. The intensity of current flow is measured in Amps.

Is there a way that I can find the energy (in joules) just from knowing the voltage and amperage?

You must also consider time. Power is calculated as volts x amps; this gives you the energy per second. For example, if you connect a 12 ohm resistor between the terminals of a 12 Volt battery for 1 second, 12 joules of energy will be taken from the battery and dissipated as heat in the resistor. So for 1 amp at 12V for 1 second, the energy is 12 joules.

Earlier, I had asked you how much power my OBIT put out. You said it was about 765 watts, but after recalculating a few times, I kept getting a result of 787.5 watts. Was I using the wrong formula?
Show me how you work it out.

I multiplied V x I (V = 17,500, I = 0.045) and got 787.5 watts.

That's fine.

Did I just estimate 765? Can you give me the post number?

It was in posts 2443 and 2447 (2443 was my question).

In your original post you stated the voltage as 17kV. Here you are using 17.5kV.

Oh! Sorry! Yes, it is really 17.5kV. My mistake.

Is there a way to put two capacitors together to increase the voltage rating, but keep the capacitance the same as with one? Example: if I have my 6.8nF capacitor, but it's only rated for 15kV, could I put two of them together to get the 30,000 volts I need and keep the 6.8nF? It doesn't seem to make sense that that would work, but I thought I'd ask anyway.

Connecting two identical capacitors in series will double the voltage rating, but halve the capacitance. If they are placed in parallel the voltage rating remains the same but the capacitance is doubled. This page gives more info about connecting capacitors together.

Thanks! That helps a lot!!! What happens if I put 3 capacitors in parallel? Earlier, you said that putting 2 of them in parallel would double the capacitance, but what if a third one is added on? Is the capacitance tripled instead of doubled? I'm just a little bit confused. Thank you again for all your help! Der Strom

It would be tripled. You just add the values together when they are connected in parallel.

If I understand correctly, joules is the energy potential in a system whereas watts is the power used by the energy to perform the work of the system. Correct? It is stated that one joule is equal to one watt, so if I have equipment rated at 200 Watts is it also rated at 200 Joules?

Almost. Power measured in watts is equivalent to the energy in joules delivered each second. If a device is rated for 200W it means it can supply 200 joules every second. Power = Energy / Time.

I would like to ask why wire is round and not rectangular, for example.

Wire is often made using extrusion or stretching techniques.
This naturally forms round or cylindrical strands. It is just the ease of manufacturing that makes most wires this way. Other shapes of wire are available for those who have a specific need for them.

To the person who asked why "I" is used for current: I recall it stands for Intensity. E stands for Electromotive force, R for resistance, P for power.

How can we calculate the resistance required if the input voltage is given and the required voltage is given? E.g. input voltage is 240V and required voltage is 1.5V, without using a stepdown transformer.

I'm trying to compute the amps my standby generator (used) can safely support. It generates 15kW, and as I understand it, that means I can draw 136 amps if it's 110V, or 68 if it's 220V. Have I got it right?

Say I connect 4 resistors together so that they are 2 parallel lines of 2 series resistors. If they are all of equal value, can I run 4x (or nearly) the wattage that I could through just 1, with the same resistance as 1?

You also need to know the current that the device is rated for, or the maximum current you want to flow. With that, you can then use Ohm's law.

Yes, those values are correct.

I have a stereo question. I have 2 500k capacitors with two 12V batteries; they usually read between 14.06V and 11.08V when I have my stereo playing at different volumes. I have a total of 3760 watts of power with all my amps at peak. What I don't understand is how I should find the total "draw" of power from the batteries to the capacitors, how they store power, and get the power to the amplifiers. What does the difference in voltage mean and how does it relate to power drawn? How can I increase charge related to wire size?

I don't really understand what you are asking. The diameter of wire you need is related to the current that you want to flow through it. Since your application is audio, the current will vary with time. You need to calculate an RMS or average value.

Suppose I have a parallel plate capacitor.
If I have a constant voltage across it and slowly bring the plates closer together, shouldn't I see a slight increase in voltage until the discharge? If so, what possible explanations are there for me not seeing that voltage change?

Moving the plates together would decrease the voltage. The laws of conservation of energy must apply: the total energy stored in the capacitor should be the same before and after you move the plates. The capacitance increases as you bring the plates together. The energy is calculated as E = (1/2)CV^2, therefore if the energy is constant, increasing C (capacitance) must cause V (voltage) to decrease.

I am a bit confused by your last comment. Did you mean that since the energy is constant, increasing C must cause V to decrease? Based on the equation you gave, that would make sense.

Yes, what you suggest is correct. Sorry, typo now corrected.

Got another one for you. How would you calculate the voltage rating for a parallel plate capacitor?

It is not a property of the capacitor, but one of the dielectric. You just need to find the breakdown voltage per mm thickness of your dielectric material. These figures are obtained experimentally, so you should just google for the value.

What is the purpose of an isolation resistor?

A resistor does not provide isolation, so I think it would depend on your circuit. What circuit is it for?

Using P = V^2/R results in an answer about 10 times what I expected. Is this formula valid for AC? What formula or formulas would be used to find the value of resistor R1 in order to supply the proper voltage to this circuit?

Yes, if you use RMS voltage. It gives you an average power because AC is constantly changing. What circuit?

Excellent idea using the wheel for your Ohm's law calcs. I wrote a simple program for calculating the tricky ones, e.g. square root and squared.

What in the construction of a battery causes its potential difference so that we can measure it in voltage?
Chemical reactions provide the energy to force electrons around a circuit.

I have a question dealing with hertz and amps. My refrigerator is 115V / 60Hz. Is this equal to or less than 1.5 amps? Thank you.

You can't work it out from the information you provided. You also need to know either the power (watts) or the impedance/resistance (ohms).

I was wondering how many watts, volts, amps and hertz are in a computer?

The power flowing through the computer varies between different computers and will also vary depending on what the computer is doing. Mains input in the UK is 220V @ 50Hz. A typical PC would use about 350W when running, which would mean it is pulling about 1.6A.
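That last answer rearranges Watt's Law to I = P / V. As a quick sketch using the same figures:

```python
def current_draw(power_w, voltage_v):
    """Watt's law rearranged: I = P / V, giving current in amps."""
    return power_w / voltage_v

print(round(current_draw(350, 220), 2))  # 1.59 A for a 350 W PC on 220 V mains
```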
app that counts probabilities in c++

Yeah, I found an interesting site called "Math is Fun", and I understood it, but how can I count probabilities with that?

For the example in your original post: You cast the die six times, so there are ##6^6## equally likely sequences of results. Because they are equally likely, the probability of getting any given sequence is one in ##6^6##. How many of these sequences include exactly three fives?
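The counting suggested in that reply can be checked directly (sketched in Python for brevity; the thread asks about C++, but the arithmetic is identical):

```python
from math import comb

# Six rolls of a fair die: 6**6 equally likely sequences.
# Sequences with exactly three fives: choose which 3 of the 6 rolls
# show a five, and let each of the remaining 3 rolls take any of the
# 5 other values.
total = 6**6                    # 46656
favourable = comb(6, 3) * 5**3  # 20 * 125 = 2500
print(favourable / total)       # about 0.0536
```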
Probabilities on the circle

At my department's tea yesterday, the following problem came up: Given three random points on a circle, what is the probability that they lie in the same semicircle? (Incidentally, why do we insist on calling it "tea" when most of the people there drink coffee? I suspect British influence at some point.)

First, by "random" I mean "uniformly at random, with respect to angle or arc length"; a classic problem in this sort of thing is to determine the probability that the length of a random chord of a circle is larger than the length of a side of an equilateral triangle inscribed in the circle. This is Bertrand's paradox. A chord is uniquely determined by its midpoint, and the result is different if you choose a chord by picking its midpoint uniformly at random from the interior of the circle than by choosing its endpoints uniformly at random from the boundary. I feel this is "more ambiguous" in the two-point case than in the three-point case, because there's a nice way to associate a single point in the interior with two points on the boundary.

The answer to my problem is 3/4. Consider the circle as the interval [0, 1] with the endpoints identified. Then the problem is one about choosing triples of real numbers (x_1, x_2, x_3) in that interval, where these numbers are chosen uniformly at random and independently. Replacing these with (0, y_1, y_2) = (0, x_2 - x_1, x_3 - x_1) corresponds to rotating the circle by x_1 and therefore doesn't change the probability of our event. Finally, if either of y_1 and y_2 is between 1/2 and 1, subtract 1 from it; now we're just viewing the circle as the interval [-1/2, 1/2] (centered on the first point) instead of [0, 1]. So now our event is just that |y_1 - y_2| ≤ 1/2. If y_1 and y_2 have the same sign, this is obviously true, and we can take the semicircle [0, 1/2] or [-1/2, 0]. If y_1 and y_2 have different signs, but |y_1 - y_2| ≤ 1/2, then some interval of size 1/2 contains 0, y_1, and y_2.
But if |y_1 - y_2| > 1/2 -- for example, if y_1 = 0.3, y_2 = -0.3 -- then we lose. Any interval of width one-half containing both of those -- say the union of [0.25, 0.5] and [-0.5, -0.25], recalling that 0.5 and -0.5 are identified -- won't contain zero. And the probability that two random points in a unit interval are within distance 1/2 of each other is just 3/4.

Alternatively, if we're trying to find this interval, one of the points is at the "counterclockwise" end of the interval. So fix which point is most counterclockwise in the interval; there are three ways to do that. Then the other two points must be within 180 degrees of that point; the probability of that is 1/4. The "most counterclockwise" point is unique, so we can just add up the three probabilities to get 3/4. In general, if "semicircle" is replaced with some section of the circle making up a proportion c of the whole, where c ≤ 1/2, the probability that three randomly chosen points lie in an interval of that size is 3c^2.

9 comments:

I think the easiest way to see this is with some simple diagrams and some conditional probability. Since all three points are placed independently, let's imagine that two points are placed first. The distance between these points is uniformly drawn from [0, .5] circumference lengths. Ignore the endpoints for a moment and consider the (0, .5) cases: draw radii from the endpoints to the center. The first two points and the radii through them form a wedge, varying in size from 0 to half the circle. Now extend those radii to be diameters of the circle. It's easy to see that the points that are not within a semicircle of the original two points are the points that fall into the symmetrically opposite wedge. Anything within the larger portion of arc bounded by the diameters drawn from the first two points is within a semicircle of the first two points. The portion excluded is an arc that's symmetrically opposite, and of the same size as the distance between the first two points.
This arc size will vary uniformly between 0 and 1/2 circumferences. The conditional probability that the third point will fall in that arc therefore varies uniformly between 0 and 1/2. The expectation of this conditional probability is therefore 1/4. Since the placement of the three points is independent, this conditional probability that the third point does not fall into the same semicircle as the first two, based on the distance between the first two, must average to the actual probability that the three points will not be in the same semicircle. We are interested in 1 minus this probability, i.e. the probability that all three points are within the same semicircle, which is, of course, 3/4, just as you said. Finally, at the two endpoints of the distance between the first two points (which occur with probability zero), the portion of the circle excluded is either zero or 1/2, which averages to 1/4, just as the interior points do.

I see your explanations -- but somehow I am confused. Please tell me if this logic is sensible:
ans = prob(angle subtended by two points = x) * prob(third point is inside x) = x/pi * x/2pi
integral of above from 0 to pi = pi/6

Anonymous, close but wrong. The answer = 1 - (expectation of prob(third point is inside x)). The probability that the angle = x is the same for every value of x: 0. The probability for the angle between the first two points is uniform. You gave the probability that the angle is less than or equal to x, which is x/pi. The expectation of the probability of the third point being in that arc is simply the integral of (1/pi) * (x/2pi) from 0 to pi. The integral is (1/pi) * (x^2/4pi). Evaluated at the endpoints this gives 1/4. One minus this expectation = 3/4.

Please tell me what's wrong with this argument: The first and second points are free to be anywhere on the circle. Whether or not the three points lie in a semicircle depends on the placement of the third point.
The proportion of the circumference where this will happen will depend on the acute angle (theta) between the first two points. For example, if the 1st point was at 0 degrees and the second at 180, then the three points will all lie in a semicircle with certainty. If both points lie at 0 degrees, again they all lie in a semicircle with certainty. If the 1st point lies at 0 degrees and the second at 90, then the three points will lie in the same semicircle with probability 3/4. As theta changes from 0 to 90, the probability decreases linearly from 1 to 3/4. As theta increases from 90 to 180, the probability increases from 3/4 back to 1. The probability is therefore 7/8. (Much easier to explain with diagrams.)

Scrap that last argument. I've just realised what I was doing wrong. Thanks anyway.

Was thinking about dropping n points on the circle, and it occurred to me that for any successful outcome, there exists exactly one of the points where a clockwise (WLOG) semicircle contains all n. We can then solve the trivial problem of finding the probability that the next n-1 points all fall in the clockwise semicircle determined by the first point: 1/2^(n-1) is this answer. Essentially we're realizing that the order of dropping doesn't matter, so we're just looking at the mass of the set of all successful outcomes, and realizing that we've counted 1/n of them. So, the total probability is just n/2^(n-1). This avoids conditional probability. We instead find a clever way to identify the set of outcomes in question, such that computing the probabilistic measure of each is trivial.
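The closed forms above (3/4 for three points, n/2^(n-1) in general, and 3c^2 for an arc of proportion c ≤ 1/2) are easy to sanity-check numerically. A minimal Monte Carlo sketch, parameterizing the circle as [0, 1) with unit circumference; the function names and sample sizes are my choices, not from the post:

```python
import random

def all_in_arc(points, c):
    # Some arc of length c (on a circle of circumference 1) contains every
    # point iff, for some point p, every other point lies within c
    # clockwise of p. (p itself trivially satisfies (p - p) % 1 = 0 <= c.)
    return any(all((q - p) % 1.0 <= c for q in points) for p in points)

def estimate(n, c=0.5, trials=200_000, seed=0):
    # Fraction of trials in which n uniform points fit in an arc of length c.
    rng = random.Random(seed)
    hits = sum(
        all_in_arc([rng.random() for _ in range(n)], c)
        for _ in range(trials)
    )
    return hits / trials

# Closed forms from the post: n / 2**(n-1) for a semicircle,
# and n * c**(n-1) for an arc of proportion c <= 1/2.
```

With 200,000 trials the estimates land within about 0.003 of the closed forms, e.g. `estimate(3)` near 0.75 and `estimate(4)` near 0.5.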
A simple ordinary differential equation

Consider an entire function $f : \mathbb{C} \rightarrow \mathbb{C}$. We seek the function $$ g: (a,b) \rightarrow \mathbb{C},$$ which solves the following equation locally: $g'(t)=f(g(t))$ and $g(t_0)=x_0$. I can compute the inverse $G$ of $g$, if $f(x_0) \neq 0$, i.e. $$ G(y) = \int\limits_{x_0}^y \frac{d s}{f(s)}.$$ I also know how to compute the Taylor expansion recursively, whose radius of convergence is positive (see below). Also we can give suitable approximations of the solution in terms of Picard iterations. I am not interested in such a solution! Is there an alternative to this integral expression?

differential-equations dg.differential-geometry complex-analysis

Though f may be entire, the solution won't necessarily be so; take $f(u)=1+u^2$ for instance. – J. M. Nov 15 '10 at 8:54

If $f$ is real valued on the reals and $x_0$ is real your equation reduces to a standard autonomous equation $g'=f(g)$ on $\mathbb{R}$. Why do you expect anything better than the standard representation of the solution, which is given precisely by your last formula? – Piero D'Ancona Nov 15 '10 at 10:13

@J.M.: That is definitely true; I am not hoping for something entire. This example related to the tangent is a good illustration. @Piero D'Ancona: I hope that $f$ entire does allow for a better description of the solutions. – plusepsilon.de Nov 15 '10 at 10:25

3 Answers

It is hard to guess what you are looking for. Take the apparently simpler case where $f$ is a polynomial, say of degree $d$. If $d = 1$ you have an explicit solution in terms of the exponential function (because your $G$ is logarithmic). If $d = 2$ the solution can be written in terms of trigonometric functions. If $d = 3$ you need elliptic functions to express the solution explicitly.
As soon as $d$ is greater than $3$, I don't know of any standard naming for the functions you get or any interesting theory of these functions.

I don't think you need elliptic functions yet for $d=3$; you may have been thinking of the DE for Weierstrass: ${y^{\prime}}^2=4y^3-a y-b$, where the derivative is squared. On the other hand, $y^{\prime}=4y^3-a y-b$ requires the solution of a nasty-looking transcendental equation involving sums of logarithms. – J. M. Nov 15 '10 at 16:20

Thanks, you're right, J.M., I did forget the square! :-( – Dick Palais Nov 15 '10 at 16:56

I choose this as the correct answer since it comes closest to what I wanted. In the case of an elliptic equation, we get Jacobi's theory of elliptic functions. This theory is well understood. If there is a nicer expression, I will probably find it here. The function $\sqrt{x}$ is "almost entire", hence I accept this argument as an indication! Thanks for these illustrating examples. – plusepsilon.de Nov 17 '10 at 11:10

You don't need the Cauchy-Kovalevskaya theorem. Just the analytic inverse function theorem.

The power series for $g$ has a positive radius of convergence; this is a consequence of the Cauchy-Kovalevskaya theorem (which is a statement about PDEs, but an ODE is just a PDE with one variable).

@Florian and Michael Renardy: The problem with the radius of convergence is now solved. Since power series are hard to analyse, I am searching for another expression. My problem is not as well posed as I had thought. – plusepsilon.de Nov 15 '10 at 14:37
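J. M.'s example $f(u) = 1+u^2$ is easy to check numerically: taking $x_0 = 0$, the inverse is $G(y) = \int_0^y ds/(1+s^2) = \arctan y$, so the local solution is $g = \tan$, which blows up at $\pm\pi/2$ even though $f$ is entire. A quick finite-difference sanity check (a sketch of my own, not from the thread):

```python
import math

f = lambda u: 1 + u * u   # entire right-hand side
g = math.tan              # local solution with g(0) = 0
G = math.atan             # its inverse, G(y) = integral of ds / f(s)

h = 1e-6
for t in [0.1, 0.5, 1.0]:
    # g'(t) = f(g(t)), checked via a central difference
    deriv = (g(t + h) - g(t - h)) / (2 * h)
    assert abs(deriv - f(g(t))) < 1e-4
    # G really inverts g on (-pi/2, pi/2)
    assert abs(G(g(t)) - t) < 1e-12
```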
The TIMSS 1995 Videotape Classroom Study

In 1995, the Third International Mathematics and Science Study (TIMSS) included a Videotape Classroom Study. This video study was an international videotape survey of eighth-grade mathematics lessons in Germany, Japan, and the United States. Funded by the National Center for Education Statistics (NCES) and the National Science Foundation, the 1995 video study was the first attempt to collect videotaped records of classroom instruction from nationally representative samples of teachers. The study was conducted in a total of 231 classrooms in Germany, Japan, and the United States and used multimedia database technology to manage and analyze the videos.

The Videotape Classroom Study had four goals:

• To provide a rich source of information regarding what goes on inside eighth-grade mathematics classes in the three countries;
• To develop objective observational measures of classroom instruction to serve as quantitative indicators, at a national level, of teaching practices in the three countries;
• To compare actual mathematics teaching methods in the United States and the other countries with those recommended in current reform documents and with teachers' perceptions of those recommendations;
• To assess the feasibility of applying videotape methodology in future wider-scale national and international surveys of classroom instructional practices.

For the report on the methods and findings of the Videotape Classroom Study, click here.

Example lessons from the TIMSS 1995 Video Study were made available in the form of video vignettes of six eighth-grade lessons, two each from Germany, Japan, and the United States. These example lessons were taught by teachers who volunteered to be videotaped for the project. The video vignettes were originally made available on a CD-ROM: Video Examples from the TIMSS Videotape Classroom Study: Eighth Grade Mathematics in Germany, Japan, and the United States (NCES 98092).
Now they are all available for viewing through the links below.

• U.S. Lesson 1: Complex Algebraic Expressions

U.S. Lesson 1: Complex Algebraic Expressions

After some warm-up problems, the teacher presents the problem 1/(x - 7) + 1/(x^2 - 49) and asks students to find the least common denominator. After explaining the correct way to solve the problem the teacher assigns multiple tasks for seatwork, and students work on their own for the rest of the lesson.

Part 1: Presenting and Checking Warm-Up Problems [Begin: 02:12]
The lesson begins with the teacher asking the students to solve "warm-up" problems displayed on the overhead projector. The problems include finding the largest integer n for which 2^n > n! and finding the number of cubic inches in the volume of a rectangular solid if the side, front, and bottom faces have areas of 12 in., 8 in., and 6 in. respectively. Students work on their own, during which time the teacher moves around the classroom helping individual students. After about thirteen minutes the teacher reconvenes the class to share the solutions. The teacher asks students for the answers, which she records on the transparency. For the last problem, she asks, "How did you get it?" and the student describes the process.

Part 2: Presenting and Discussing Problems [Begin: 17:18]
The teacher presents the problem 1/(x - 7) + 1/(x^2 - 49) on the overhead projector and says to the students, "Yesterday we worked on least common denominators. Try this problem." While the teacher passes out the homework worksheets, the students work on this problem individually. After about one minute, the teacher reconvenes the class to ask for the solution. Some students have difficulty, so the teacher explains each step. She then continues the lesson by presenting a second problem, [5/(x + 6)] - [(2 - x)/(x + 6)], and warns students that "this one looks easier but there is a trick to it." Students work individually on the problem for about one and a half minutes. During this time, the teacher moves from desk to desk, checking students' work. When the teacher announces the answer, some students ask for an explanation. The teacher provides a brief explanation by asking students to fill in several steps leading to the answer.

Part 3: Assigning Multiple Tasks for Seatwork [Begin: 23:37]
The teacher says, "For the remainder of the period there are about five things that I would like you to work on in the following order." These include finishing a test, correcting the previous day's homework, and finishing a worksheet for which a graphing calculator is needed. When these tasks are completed, students are to work on the next day's homework. The homework requires students to find the least common denominator of rational expressions. Exercises include finding the LCD of 4x and 8x; 3x - 6 and 12x - 24; and 12, 18 and 30. Students work on these assignments individually as the teacher circulates to assist them. The seatwork activity lasts about twelve minutes. The lesson ends with this activity.
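The numeric LCD exercise in Part 3 reduces to a least common multiple, and for the monomial pairs the LCD is the LCM of the coefficients times the shared factor. A quick check with the Python standard library (`math.lcm` with multiple arguments requires Python 3.9+; the worked values are my own, not from the study materials):

```python
import math

# LCD of the numeric denominators 12, 18 and 30 from the Part 3 homework.
assert math.lcm(12, 18, 30) == 180

# For 4x and 8x the LCD is lcm(4, 8) * x = 8x; for 3x - 6 = 3(x - 2) and
# 12x - 24 = 12(x - 2), it is lcm(3, 12) * (x - 2) = 12(x - 2).
assert math.lcm(4, 8) == 8
assert math.lcm(3, 12) == 12
```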
What is the best way to prove this conjecture?

August 31st 2010, 04:08 PM
What is the best way to prove this conjecture?
Okay, first post in the forums for me(Nod) This is my first year in high school geometry (I'm a freshman) and I'm loving it! I've run into a spot of trouble though... My geometry teacher challenged us with a problem, and if I get it right, he will give me an A for the semester. I don't have the exact conjecture written down, but basically it is a formula to find the nth term in a sequence. For example, this formula could tell you what the 65th term is in the sequence 4, 7, 13, 22... (1st difference is 3, 6, 9; 2nd difference is 3, 3, 3). I have yet to learn proofs, which is why this is a challenge. From the research I have done, I figure that mathematical induction would be the best method. I am guessing that if I can prove it works for a difference of 1, I can prove it for 2, 3, 4, 5, etc. Am I correct in my thinking? Is there a better method? Also, any advice on the process; this is my first time proving a theorem (and he said it IS a theorem, as he has already proven it). Thanks in advance. I'll edit this tomorrow when I write down the formula, sorry about its absence right now. If any more information is needed, just ask. I look forward to helping out in the algebra section, and spending some time on these forums!

September 1st 2010, 01:26 AM
If the second difference column is constant then the function is a quadratic. That is, $A_n = an^2 + bn + c$. Use the first three numbers to solve for $a, b, c.$

September 1st 2010, 10:35 AM
"...and if I get it right, he will give me an A for the semester." Due to the philosophy of this website, I'll pass on this one.

September 1st 2010, 12:47 PM
I expected this to come up, and I suppose I wasn't clear enough. I fully intend to work out the proof myself, once I know how to, that is. My teacher said that we could do outside research (as we haven't learned proofs yet) and that is what I am doing.
I don't want anyone to solve the problem for me, just explain the process of proving a conjecture using mathematical induction. Hope that clears some stuff up.

September 2nd 2010, 04:47 PM
Okay, here is all of the information, with a better explanation of what I need help on. The theorem $A_k=[(k-b)*c]+a$ is used to solve for the nth term in an arithmetic sequence. The variables are as such:
$k=$ the number of the term you are looking for (e.g. the 54th term or the 78th term)
$b=$ the number of terms shown (e.g. in the sequence 2, 4, 6, 8... this value would be 4)
$c=$ the 1st difference
$a=$ the last term shown (e.g. 8 in the above example)
A quick example in case I did not explain it thoroughly: Given the sequence 3, 6, 9, 12, 15, 18, 21... determine the 86th term. Here $k=86$, $b=7$, $c=3$ and $a=21$, so $A_{86}=[(86-7)*3]+21=258$. The 86th term is therefore 258. I understand all of the above. The challenge is to prove this as a theorem, and that is what I am confused about, as I have not yet learned proofs. It is my understanding that if I can prove it to be true when $c=0$, I can then prove it to be true for $n+1$. I don't quite understand how to do that, but I can figure it out with time. What I don't get is how to prove this for negative integers. Can anybody offer an explanation at this point? Thanks.

September 2nd 2010, 09:39 PM
Your general idea about induction is correct. So, you prove the formula is correct for k = 1. Then you prove, under the assumption that the formula for $A_k$ is correct, that the formula for $A_{k+1}$ is correct. One addition: you may have to do this for variables other than k. (Or maybe not. You should do something to deserve your A, don't you think?)

September 2nd 2010, 10:29 PM
I leave my comment purposely obscure because of the whole A incentive. If it were me I would replace a with something else.

September 3rd 2010, 03:33 AM
Ah okay, so my thinking is correct. That's all I needed, really, to get started. Thanks again for all the help here.
Quick question: can I mark this thread as 'solved', even if it truly isn't, as I'm not asking for any more help?

September 3rd 2010, 06:38 AM
I see no problem with marking the thread solved; your main problem was "how to get started" and that has been solved. (Happy)
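The thread's formula $A_k=[(k-b)*c]+a$ agrees with the standard arithmetic-sequence form $a_1 + (k-1)c$, which is easy to confirm with a few lines (the function name is mine, not from the thread):

```python
def nth_term(k, b, c, a):
    # Thread's formula: k-th term computed from the last shown term `a`,
    # given `b` terms shown and common difference `c`.
    return (k - b) * c + a

# Worked example from the thread: 3, 6, 9, ..., 21 shown
# (b = 7 terms, a = 21, c = 3); the 86th term should be 258.
assert nth_term(86, 7, 3, 21) == 258

# Agrees with the textbook form a1 + (k - 1) * c for every k.
a1 = 3
assert all(nth_term(k, 7, 3, 21) == a1 + (k - 1) * 3 for k in range(1, 200))
```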
Here's the question you clicked on:

From point A, 10 cm from a plane P, a perpendicular line AC is drawn passing through a circle with center C and radius of 8 cm in the plane. At any point B on this circle, a tangent BD is drawn 18 cm in length. Find the distance from A to D. ----> I need the illustration.
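One way to sanity-check this classic solid-geometry problem: since AC ⊥ P, we get AB² = AC² + CB², and since the tangent BD is perpendicular to both the radius CB and to AC, BD ⊥ plane ACB, so AD² = AB² + BD² = 10² + 8² + 18² = 488 and AD = √488 ≈ 22.09 cm. A coordinate sketch of my own (the specific placement of B is a free choice; any point on the circle works by symmetry):

```python
import math

C = (0.0, 0.0, 0.0)    # center of the circle, in plane P (z = 0)
A = (0.0, 0.0, 10.0)   # AC perpendicular to P, AC = 10
B = (8.0, 0.0, 0.0)    # a point on the circle, CB = 8
D = (8.0, 18.0, 0.0)   # tangent at B: BD perpendicular to CB, BD = 18

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# BD really is tangent: its direction is orthogonal to the radius CB.
assert (D[0] - B[0]) * (B[0] - C[0]) + (D[1] - B[1]) * (B[1] - C[1]) == 0

ad = dist(A, D)
assert abs(ad - math.sqrt(488)) < 1e-12   # AD = sqrt(488), about 22.09 cm
```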
Piedmont, CA Trigonometry Tutor Find a Piedmont, CA Trigonometry Tutor ...I am very effective in helping students to not just get a better grade, but to really understand the subject matter and the reasons why things work the way they do. I do this in a way that is positive, supportive, and also fun. I explain difficult math and science concepts in simple English, and continue working with the students until they understand the concepts really well. 14 Subjects: including trigonometry, calculus, statistics, geometry ...I tutored pre-calculus as an on-call tutor for Diablo Valley Junior College for three years. I taught pre-calculus sections as a TA at UC Santa Cruz for two years. I have taken classes in teaching literacy at Mills College. 15 Subjects: including trigonometry, reading, calculus, writing ...Of course, I do not impose one approach or the other on the student. I adapt to each student's needs. I want to learn from my students as much as they want to learn from me. 24 Subjects: including trigonometry, chemistry, calculus, physics ...I have several recent Geometry students (HS or middle school) using the Holt, Glencoe or Jurgensen textbook. I can help with the basic problems thru drills whether it is proof-intensive or Geometry with Coordinate Geometry. I will also emphasize the technique of concentrating on special parts ... 15 Subjects: including trigonometry, calculus, GRE, algebra 1 ...Using these tools, you can find missing lengths and angles of other figures. It also includes the concepts of functions, and different properties (identities) of each function. This subject is very handy for craftsmen, such as woodworkers, etc.
23 Subjects: including trigonometry, reading, geometry, ASVAB
Commutative Noetherian Domains of Krull Dimension One

k is an algebraically closed field and A is a commutative k-algebra. We also know that A is a Noetherian domain and its Krull dimension is one. Are there any necessary and sufficient conditions on A under which A becomes a finitely generated module over a polynomial algebra k[c] for some c in A? Does anybody know any papers or books that discuss this? Thanks guys.

The answer seems to be that it is necessary and sufficient for $A$ to be finitely generated as a $k$-algebra. Necessity: finitely generated as a module implies finitely generated as an algebra, and finitely generated over finitely generated is finitely generated. Sufficiency: apply Noether normalization as suggested in the responses below (e.g. Eisenbud, Thm. 13.3). This has an evident generalization to any finite Krull dimension. – Pete L. Clark Mar 10 '10 at 13:16

+1 for catching my error, Prof. Clark. – Harry Gindi Mar 10 '10 at 13:29

3 Answers

Dear Amitsur, It might help you to think geometrically. For example, $k[x,x^{-1}]$ is the ring of functions on a hyperbola $xy = 1$, and the projection from this hyperbola to the line $x = y$ is a finite projection. This corresponds to the fact that $k[x,x^{-1}]$ is finitely generated as a module over $k[x + x^{-1}].$ (If we write $f = x + x^{-1}$, then $x^2 - fx + 1 = 0$ and $x^{-2} - fx^{-1} + 1 = 0$.)

This is always the case, by Noether normalization. For a proof, see for example Eisenbud - "Commutative algebra with a view towards algebraic geometry" - Theorem 13.3. Edit: this is false, one needs a finiteness assumption. See comments below.

It's clearly not always the case, because A could be L[T], a polynomial ring in one variable over a field L which is a huge algebraically closed field and a transcendental extension of k – Kevin Buzzard Mar 10 '10 at 13:03

You are right of course.
My mistake, I misread the question. – Liran Shaul Mar 10 '10 at 13:05

This is incorrect. If $A$ is a finitely generated module over $k[t]$, then in particular $A$ is a finitely generated $k$-algebra, hence a Hilbert-Jacobson ring. The power series ring $k[[t]]$ is a PID (and not a field) with only finitely many prime ideals, hence not Hilbert-Jacobson: see e.g. Section 8 of math.uga.edu/~pete/integral.pdf. – Pete L. Clark Mar 10 '10 at 13:06

This follows from a direct generalization of the Noether normalization lemma. It is covered in these notes from Mel Hochster. These notes prove it in a pretty general form (when the base ring is only an integral domain rather than a field). Edit: [retracted] A sufficient condition is that the algebra is finitely generated, but it is clearly not necessary. Edit 2: I misread the question. I thought he was asking if A is finitely generated over some polynomial algebra (including infinitely generated polynomial algebras).
{"url":"http://mathoverflow.net/questions/17713/commutative-noetherian-domains-of-krull-dimension-one","timestamp":"2014-04-19T04:59:53Z","content_type":null,"content_length":"67961","record_id":"<urn:uuid:bbe1364e-1f7e-448d-aacb-46780652b4c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
Self-Organization of Muscle Cell Structure and Function The organization of muscle is the product of functional adaptation over several length scales spanning from the sarcomere to the muscle bundle. One possible strategy for solving this multiscale coupling problem is to physically constrain the muscle cells in microenvironments that potentiate the organization of their intracellular space. We hypothesized that boundary conditions in the extracellular space potentiate the organization of cytoskeletal scaffolds for directed sarcomeregenesis. We developed a quantitative model of how the cytoskeleton of neonatal rat ventricular myocytes organizes with respect to geometric cues in the extracellular matrix. Numerical results and in vitro assays to control myocyte shape indicated that distinct cytoskeletal architectures arise from two temporally-ordered, organizational processes: the interaction between actin fibers, premyofibrils and focal adhesions, as well as cooperative alignment and parallel bundling of nascent myofibrils. Our results suggest that a hierarchy of mechanisms regulate the self-organization of the contractile cytoskeleton and that a positive feedback loop is responsible for initiating the break in symmetry, potentiated by extracellular boundary conditions, is required to polarize the contractile cytoskeleton. Author Summary How muscle is organized impacts its function. However, understanding how muscle organizes is challenging, as the process occurs over several length scales. We approach this multiscale coupling problem by constraining the overall shapes of muscle cells to indirectly control the organization of their intracellular space. We hypothesized the cellular boundary conditions direct the organization of cytoskeletal scaffolds. We developed a model of how the cytoskeleton of cardiomyocytes organizes with respect to boundary cues. 
Our computational and experimental results to control myocyte shape indicated that distinct muscle architectures arise from two main organizational mechanisms: the interaction between actin fibers, premyofibrils and focal adhesions, as well as cooperative alignment and parallel bundling of more mature myofibrils. We show that a hierarchy of processes regulate the self-organization of cardiomyocytes. Our results suggest that a symmetry break, due to the boundary conditions imposed on the cell, is responsible for polarization of the contractile cytoskeletal organization. Citation: Grosberg A, Kuo P-L, Guo C-L, Geisse NA, Bray M-A, et al. (2011) Self-Organization of Muscle Cell Structure and Function. PLoS Comput Biol 7(2): e1001088. doi:10.1371/journal.pcbi.1001088 Editor: Edmund J. Crampin, University of Auckland, New Zealand Received: November 24, 2009; Accepted: January 19, 2011; Published: February 24, 2011 Copyright: © 2011 Grosberg et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work has been supported by the Nanoscale Science and Engineering Center of the National Science Foundation under NSF award number PHY-0117795, the Harvard Materials Research Science and Engineering Center under NSF award number DMR-0213805, the DARPA Biomolecular Motors program, and NIH grant 1 R01 HL079126 (KKP). Mark-Anthony Bray acknowledges salary support from a UNCF-Merck Science Initiative postdoctoral fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. During biological development, evolving forms are marked by distinct functionalities. 
An interesting example is the organization of myofibrils in striated muscle cells. As the myocyte matures, the myofibrils are rearranged from an irregularly dispersed pattern into tightly organized bundles spanning the length, rather than the width, of the cell [1]. Although assembly of the myofibril from its molecular constituents has been extensively investigated [2], [3], [4], how myofibrils build this specialized architecture and its functional consequences remains unanswered. This is important because changes in muscle structure accompany not only morphogenesis, but also pathogenesis [5], [6]. Myofibrils mature in a force-dependent manner [7], [8], [9], suggesting that the contractility of a cell may play an important role in polarizing the myofibrillar network. This has been shown in nonmuscle cells where the cytoskeletal architecture within a geometrically-defined microcompartment becomes polarized with increasing tractional forces [10], [11]. Thus, we hypothesized that geometric cues in the extracellular matrix (ECM) can organize the intracellular architecture and potentiate directed myofibrillogenesis. Because of the difficulty in identifying de novo sarcomeres in primary harvest muscle cells in culture, one strategy for studying myofibrillogenesis is to coax the disassembly and reassembly of myofibrils by forcing myocytes to assume shapes that are not commonly observed in vivo using engineered substrates in vitro [10], [11]. To guide these experiments, we developed a computational model of myofibrillar patterning to show the sensitivity of the intracellular architecture to the extracellular space. With these tools, we sought to understand the critical events in the global assembly and organization of the contractile apparatus in cardiac myocytes. 
By comparing experimental results with our computational model, we were able to elucidate the role of maturing myofibrils, their parallel coupling, and their functional attachment to the focal adhesion assembly and how these processes are guided spatially by the boundary conditions imposed on the cell. After determining the roles of these parameters in myofibrillogenesis, we then expanded our model to test the functional implications of these architectures. We developed a novel method for micropatterning on soft substrates and were able to engineer myocyte shape on substrates that would allow us to measure the contractility of these artificial shapes and compare them with the model results. Together, these results suggest that the self-assembly and -organization of the contractile apparatus is facilitated by a symmetry-breaking event that is potentiated by either a geometric cue in the extracellular space or a random event in the intracellular space. Qualitative Description of the Model Our theoretical approach focuses on the interaction between the myofibril and the ECM, as well as adjacent myofibrils (Fig. 1). Inherent to our model are two key assumptions: 1) the force that the myofibrillar bundle exerts on the substrate is fiber length-dependent [12] and 2) adjacent myofibrils affect each other to facilitate lateral coupling, which is akin to them exerting torque on each other. We have modeled only the maturation of cytoskeletal structural elements responsible for contraction and integrin binding to the ECM. We define these components using coarse-grained variables that are experimentally observable. This eliminates the computational complexity required to model detailed molecular interactions and the effect of different protein isoforms. The nomenclature for the immature and mature versions of the myofibril vary with different qualitative models (reviewed by Sanger and colleagues [2]). 
Here we refer to the immature state as the premyofibril, and the quasi-mature state as the nascent myofibril [1], [13]. Figure 1. Schematic representation of myofibril reorganization in a 2D myocyte. (A) red: actin; blue: nucleus; green: FAs. The FAs can spread throughout the ECM island (outlined by solid black island). (B) Net force (F) exerted on bound integrins, as determined by the sum of all forces exerted by the anchoring premyofibril vectors, recruits free integrins and promotes growth of FAs. For the purposes of modeling the bound integrins connected to premyofibrils are labeled ρ[p]( r). (C) Continued recruitment of free integrins to the growing FA at the cellular corners is associated with enhanced bundling of the premyofibrils and subsequently increased traction. (D) Built upon the premyofibrillar network, the nascent myofibrils align in parallel and develop into a fully organized bundle, further amplifying local force to result in FA maturation. For the purposes of modeling the bound integrins connected to nascent myofibrils are labeled as ρ[n](r). (E) Bound integrins with zero net force cannot recruit free integrin and are disassociated from the membrane, leading to release of the attached fiber (F). Consequently, contractile fibers on shorter axes (G) are less bundled than that following the longest diagonal of the cell. (H) Qualitative schematic of model implementation algorithm. Our mathematical approach differs from others [14], [15] in that we incorporate focal adhesion (FA) kinetics, mutual alignment of adjacent contractile fibers, and the dependence of contractile forces on fiber length [16]. 
The variables used in our approach are: (1) the densities of bound and unbound integrin, with the bound integrins connected to premyofibrils and nascent myofibrils labeled ρ[p](r) and ρ[n](r), respectively; (2) the net force exerted on the bound integrins, F; (3) the local density, orientation, and orientational order parameter of the premyofibril network and the nascent myofibril network; and (4) the resultant 2D stress field exerted by the cell on the substrate, T. Previously, we reported [17] that when cardiac myocytes are constrained on 2D islands, their vertical dimension, orthogonal to the plane of the culture surface, is uncontrolled. In that study, we reported that myofibrils are predominantly located under the nucleus, in a plane parallel to the culture surface. However, as that study also showed, several layers of myofibrils may be present, and the nucleus and microtubule organizing center may represent an obstacle to a symmetrical array of myofibrils in the thicker regions of the cell. Our model and analysis are restricted to the 2D intracellular plane closest to the culture surface. Instead of solving for the steady state of all of the variables, we numerically simulated their spatiotemporal profiles. This allows us to trace the effect of local symmetry-breaking events, such as the mutual alignment of fibers, on myofibrillar patterning, which cannot be easily predicted by conventional steady-state analysis. The local symmetry-breaking event may result from a static cue or a transient perturbation. In our simulations, we began with randomly distributed densities of the unbound integrin, except when fitting parameters, in which case we examined several sets of initial conditions. The unbound integrin can initially become bound through a random process, with the rate proportional to its local concentration. The fraction of bound integrins connected to the fibrils is modeled as an adsorption process, and is calculated using the Langmuir isotherm.
The force exerted between FAs is assumed to be proportional to the product of fiber connections at each site [16]. The net force at a local FA is computed by integrating the tension contributed by all connected contractile elements (Fig. 1A). The net force governs the growth rate of local FAs, which in turn modulates the premyofibril network [18], [19]. The assembly of FAs and the bundling of their associated fibers are coupled by a positive feedback loop via forces exerted on the FA [16], [18]. As a consequence of the positive feedback, when the net force on a FA is not zero, both the FA and its associated fibers are structurally reinforced (Fig. 1B–D) [20]. If the net force is zero, the bound integrins will disassemble at each time step and release the attached fibers (Fig. 1E–G) [18], [21]. As time passes, the premyofibrils are converted to nascent myofibrils. The local orientation of the nascent myofibril is primarily determined by the antecedent premyofibril network, but can also be modulated by adjacent myofibrils due to their lateral coupling [1], [22]. In some cell shapes, polarization of the myofibrillar array can only be achieved by the lateral alignment of adjacent myofibrils, which occurs at a much slower time scale than that of fiber assembly [1], [22]. The effect of the lateral coupling is modeled as a biasing potential field that distributes the free integrins, such that the nascent myofibrils are moved towards each other through the course of normal integrin recycling. To visualize the amount of parallel, or lateral, coupling of the fibers, we define a variable, ψ, which varies from zero for no local coupling to unity for maximal local coupling. The model's calculations are ordered as depicted in Fig. 1H.
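The force-dependent feedback described above can be illustrated with a toy calculation. The sketch below is not the paper's MATLAB code; the function name, rate constants, and the way force is tied to fiber density are illustrative assumptions, chosen only to show how force-promoted binding reinforces a loaded adhesion while an unloaded one disassembles.

```python
# Toy sketch of the FA positive feedback loop: the growth rate of a focal
# adhesion scales with the net force exerted on it, while a zero net force
# leads to disassembly. All rate constants are illustrative assumptions.

def step_fa(rho_b, force, k_on=0.5, k_off=0.1, dt=0.01):
    """One Euler step of bound-integrin density under force-dependent binding."""
    if force > 0:
        drho = k_on * force * (1.0 - rho_b) - k_off * rho_b
    else:
        drho = -k_off * rho_b          # no force: the adhesion disassembles
    return rho_b + dt * drho

rho_loaded, rho_unloaded = 0.2, 0.2
for _ in range(2000):
    # in this toy, the force grows with the adhesion's own fiber density,
    # closing the positive feedback loop
    rho_loaded = step_fa(rho_loaded, force=rho_loaded)
    rho_unloaded = step_fa(rho_unloaded, force=0.0)

print(rho_loaded > rho_unloaded)   # → True: the loaded adhesion is reinforced
```

Under these made-up rates, the loaded adhesion grows towards a stable plateau while the unloaded one decays towards zero, mirroring the reinforcement/disassembly dichotomy of Fig. 1B–E.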
Model versus Experiment: The Architecture of a Stair-Shaped Myocyte To fit the parameters of the computational model, we chose an uncommon cell shape, a stair-shaped myocyte, that we could model in silico and reproduce repeatably in vitro with cell engineering techniques (Fig. S1). The parameters were fit on a variety of initial conditions (Fig. S2) such that the steady state results were the same for each. In Fig. 2A we show the temporal results for an initial condition with a random distribution of free integrins. Initially, there are no fibers in the cell, as no integrins are bound (Fig. 2A). The geometrical symmetry of the stair-shaped cell potentiates the initial appearance of fibers predominantly along the diagonal. As the fibers form, the fiber density is mostly uniform throughout the cell, as evident from the line segment thickness (Fig. 2A). When the nascent myofibrils form and begin to laterally couple, they are distributed diffusely within the cell (Fig. 2A). As time progresses, the positive feedback increases, i.e. a greater number of fibers produces a greater force, which drives the clustering of bound integrins and fibers. As a result, the myofibrils achieve a distribution very similar to the steady state (Fig. 2A). For the rest of the simulation the nascent myofibrils mutually align and exhibit greater degrees of parallel coupling (Fig. 2A). Myocytes were cultured on stair-step-shaped islands for three days and then stained for actin filaments (Fig. 2B). At equilibrium, most nascent myofibrils are coupled and aligned with the major diagonal, as shown experimentally in Fig. 2B and in simulation (Fig. 2A). The parallel coupling of the nascent myofibrils emerges later in the simulation, as suggested by previous reports [1], [21], [22], [23].
In summary, the simulated dynamics visualized for nascent myofibril bundling and realignment show that well-aligned myofibrils first occurred in the center of the cell, followed the longest diagonal, and recruited additional adjacent fibers to form a bundled, parallel arrangement. Figure 2. Simulated dynamics of myofibril organization and immunostaining of actin alignment. (A) Simulated results for the dynamic profile of myofibril organization in a stair-step-shaped myocyte. Red lines represent the myofibrils, with thicker lines representing regions of denser myofibrils. The grey color scale represents the amount of local parallel coupling of the nascent myofibrils; color values are in arbitrary units normalized to the highest values. As we start with a random distribution of free integrins, initially there were no fibers. The geometrical symmetry break in the stair cell is so strong that for random initial conditions the fibers generally align with the major diagonal as soon as they are formed. However, nascent myofibrils become laterally coupled throughout the cell early on, as evident from the diffuse grey shading. As time elapsed, the nascent myofibrils reorganized and oriented themselves along the longest cellular diagonal, and coupled to each other, greatly increasing parallel coupling. The steady state fiber organization matches the experimental results. (B) Immunostaining of the actin network from a myocyte with similar shape agrees with the numerical prediction; scale bar: 10 µm. Model versus Experiment: Heterogeneous and Homogeneous Boundary Conditions To test our hypothesis, we examined the sensitivity of myocytes and our model to various cellular boundary conditions. We reasoned that when myocytes are constrained by a heterogeneous boundary curvature, triangles (Fig. 3A) and squares (Fig.
3F), the distinct geometrical cues at the cell boundaries would potentiate unique cytoskeletal architectures, but when cells are constrained by a homogeneous boundary curvature, a circle (Fig. 3K), there is no external cue to break the symmetry of the isotropic network. Thus, we examined two cases of cells with heterogeneous curvature at the periphery: the square-shaped cell, where the longest axes are on the diagonals, and the equilateral-triangle-shaped cell, where the long axes are along the cell periphery. We also tested cells with homogeneous boundary curvature: the circular cell, in which no major axis is defined. To ensure that the observations resulted from geometric considerations alone, we used the same parameter values from the previous simulations. Figure 3. Experimental images and model depictions of organization of actin and FAs. First column: DIC images of micropatterned triangular (A), square (F), and circular (K) myocytes. Second column: Immunostained actin in triangular (B) and square (G) myocytes followed the longest cellular dimension, while actin fibers in the circular myocyte (L) primarily oriented on the 2 to 8 o'clock axis. Third column: Predicted myofibrillar pattern of triangular (C), square (H), and circular (M) myocytes agrees with experimental results. The steady state of the circular cell was reached more slowly than that of the triangular and square cells. The thickness of the lines is proportional to the myofibril density. The grey color scale represents myofibril bundling, i.e. the degree of parallel coupling. Fourth column: Immunostained vinculin of triangular (D) and square (I) myocytes was concentrated at cellular corners, while two opposing plaques of vinculin localized on the 2 to 8 o'clock axis in the circular (N) myocyte. Fifth column: Simulated FA density at steady state in triangular (E), square (J), and circular (O) cells was consistent with experimental results.
The FA distribution in a circular myocyte (O) was expected to break the symmetry. Color values in simulated results are in arbitrary units scaled from 0 to 1; scale bars are . Fluorescent staining of actin filaments in myocytes cultured on square and triangular ECM islands for 72 hrs revealed that polymerized actin fibers were densely arranged along the longest axes (Fig. 3). The fibers are regularly punctuated along their length, indicating the presence of sarcomeres (Fig. 3B, G). At steady state, modeled triangular and square cells displayed the same cytoskeletal arrangement as the in vitro results, with enhanced parallel bundling occurring along the longest axis of these cells (Fig. 3C, H). Fluorescent staining of vinculin revealed elongated FAs in the corners of the square and triangular cells that were oriented in parallel with their attached myofibrils (Fig. 3D, I). Numerical results revealed the same accumulation pattern of FAs, as indicated by the density of bound integrin located in the corners (Fig. 3E, J). The dynamics of the simulation results are depicted in Fig. 3C, E, H, and J and Videos S1, S2, S3, and S4. As previously observed in the simulation shown in Fig. 2, the predominant orientation of the premyofibrils occurs quickly, and the parallel bundling increases with time to further stabilize the myofibrillar architecture with respect to the geometric cues in the ECM. These data suggest that FAs localize and mature at the corners because the premyofibrils that align along the longest axes of the cell are the strongest by virtue of their greater propensity for parallel bundling and binding myosin motors [24], [25]. In contrast, myocytes cultured on circular ECM islands (Fig. 3K) for the same period of time have random myofibrillar architectures (Fig. 3L) [26], which is recapitulated in the model (Fig. 3M).
Without an external cue to break the geometric symmetry, computer simulations suggest that myofibrillar polarity will emerge after a longer period of time (almost five times as long as for the other shapes). Transient multi-pole patterns develop within cellular microcompartments (Video S5), and at equilibrium there is local bundling and nascent myofibril formation, but no overall cell organization (Fig. 3M, Video S5). In vitro, vinculin stains irregularly around the myocyte perimeter (Fig. 3N). In silico, after a similarly prolonged simulation, FAs appear as opposing bands along the cell periphery (Fig. 3O, Video S6). It is important to note that this patterning is due to a random, intracellular symmetry-breaking event and that, while the model will always converge, circular cells both in silico and in vitro, after 2–3 days in culture, often display irreproducible cytoskeletal structures. Together, the simulation and experimental results summarized in Fig. 3 suggest that the orientation of the premyofibrillar network is regulated by ECM cues. These cues promote stabilization of the network and FAs, facilitating parallel bundling of the nascent myofibrils. Furthermore, our model predicted that the polarized myofibrillar network preferentially aligns along the longest axis of cells. Model versus Experiment: Contractility Proper functioning of myocytes requires the correct myofibrillar configuration for coordinated contraction [5]. To correlate myofibrillar structure with contractile function, we investigated the spatial patterning of sarcomeric proteins and conducted traction force microscopy on the cultured myocytes. Fluorescent micrographs of myocytes immunostained against sarcomeric α-actinin revealed distinct myofibrillar patterning on ECM islands of heterogeneous boundary curvature (Fig. 4A, F). The sarcomeric Z-lines register in the internal angles of the corners of both the square and triangle and are perpendicular to the orientation of the actin fibers.
To measure myocyte contractile stresses, we engineered ECM islands on soft substrates. When freshly harvested myocytes are cultured on these substrates, they remodel to assume the shape of the island in the same manner as they do on rigid substrates (Fig. 4B, G). Unlike myocytes cultured on the rigid substrates, myocytes on soft substrates do not contract isometrically and can be observed to shorten as in traditional assays of single myocyte contractility (Fig. 4C, H, Videos S7 and S8). To visualize substrate deformation due to myocyte contraction, fluorescent beads were embedded in the substrate and bead movement was detected using high-speed fluorescence microscopy. The nominal stress field exerted on the substrate during systolic contraction, with the resting myocyte position defined as the reference state, was calculated from the substrate deformation using the known substrate mechanical properties and assuming that the substrate is linearly elastic. In the videos (Videos S9 and S10), the substrate displacement vectors, depicted by the white arrows, are directed inward during systole, indicating that the substrate is pulled towards the center of the myocyte by the shortening FA-anchored myofibrils. During diastole, they reverse direction as the elastic recoil of the myocyte pushes the substrate back to the rest position. The myocytes generate a unique contractile footprint that mimics the position of the FAs depicted in Fig. 3, with the highest systolic stresses exerted on the substrate at the corners of the myocyte (Fig. 4D, I). Note that even though the model does not differentiate between systolic and diastolic stresses, the experimental substrate stress field pattern matches the simulated results (Fig. 4E, J). Figure 4. Sarcomeric structure, traction force at peak systole, and model predictions. First column: Sarcomeric α-actinin immunofluorescence delineates the Z-lines in triangular (A), square (F) and circular (K) myocytes.
Z-line orientation indicated that the axis of contraction was parallel to the longest axis of the cell. In the circular myocyte, most of the Z-lines aligned on the 1 to 7 o'clock axis, with the dominant axis of contraction expected to follow the 4 to 10 o'clock direction. Second column: DIC images of micropatterned triangular (B), square (G), and circular (L) myocytes at full relaxation. Third column: DIC images at full contraction of the triangular (C), square (H), and circular (M) myocytes show the cells shortened about 24%, 18%, and 14% along the longest cell dimension, respectively. Fourth column: The contractile traction maps of the triangular (D) and square (I) myocytes displayed high traction stresses at the cellular corners. The contraction map of the circular myocyte (N) indicated that the cell broke radial symmetry, with the principal axis of contraction along the 3 to 9 o'clock axis. Fifth column: Numerical results of predicted traction (T) of triangular (E), square (J), and circular (O) myocytes replicated experimental results. In the fourth and fifth columns, the color scale and arrows represent the magnitude and direction of traction, respectively. Color values in simulated results are in arbitrary units; scale bars are . In myocytes of homogeneous boundary curvature, the myofibrillar patterns are not reproducible. However, structural coordination of the myofibrils on a preferential axis was observed, as evidenced by the well-demarcated Z-lines that continuously traversed the 1 to 7 o'clock axis in the circular myocyte shown in Fig. 4K. Similarly, the circular myocytes cultured on soft substrates appear to shorten concentrically during contraction (Fig. 4L, M, Video S11), where a principal axis of shortening is apparent at peak systole but does not occur with the same spatial regularity as in the square and triangular cells (Fig. 4N, Video S12), consistent with previous findings with nonmuscle cells [10].
Our model predicted a similar contractile signature (Fig. 4O), with the peak stresses coincident with the location of the widest FA bands observed in Fig. 3O. Thus, these data suggest that muscle cells depend on extracellular spatial cues to efficiently and functionally organize the myofibrils and their contraction. Hierarchy of Organizing Strategies: Force-Length Dependence vs. Mutual Alignment We hypothesized that a hierarchy of mechanisms may be responsible for myofibrillar organization. We reasoned that our model would allow us to determine which of the two model features, the fiber length-force dependence and the parallel coupling of fibers, was dominant in organizing the myofibrillar architecture. We also reasoned that the nature of the cell boundaries may determine the sensitivity of the cell to these two mechanisms. To test this hypothesis, we ran simulations where these two features were turned on or off within the cell. In the stair-shaped cell, we ran simulations where: 1) there is no mutual alignment mechanism but fiber contractility is fiber length-dependent (L = ON, τ = OFF; refer to Eq. (1) & (4)); 2) the fiber contractility is not myofibril length-dependent but there is mutual alignment of fibers (L = OFF, τ = ON); and 3) there is neither fiber force-length dependence nor any mutual alignment of fibers (L = OFF, τ = OFF). In simulations where the nascent myofibrils have fiber force-length dependence, fibers predominantly organize along the major diagonal (Fig. 5A), as shown experimentally (Fig. 2B); however, when there is no fiber force-length dependence, fiber bundles follow both the long and short diagonals (Fig. 5B). Figure 5. Testing model assumptions in silico. (A,B,D,E) Steady state results for different conditions tested in silico, where the red segments correspond to the direction of all fibers, and the thickness of the red segments is proportional to the density of the fibers.
The grey contour represents the degree of parallel coupling. Note that all the values were normalized by the maximum across all the conditions for ease of comparison between them. (C) Plot of the degree of parallel coupling averaged over the whole cell. Stair-shaped cell: fiber length-force independence, but mutual alignment – solid grey line; fiber length-force independence and no mutual alignment – dashed black line; fiber length-force dependence and mutual alignment – solid black line; no mutual alignment, but fiber length-force dependence – dash-dot grey line; the inset shows the difference in steady state values between the two latter cases. Triangular cell with both fiber length-force dependence and mutual alignment – grey line, triangular markers. Square cell with both fiber length-force dependence and mutual alignment – black line, square markers. Circular cell with fiber length-force dependence, with mutual alignment and no mutual alignment, is shown as a black line with circular markers and a grey line with empty circular markers, respectively. Comparing steady states for stair-shaped cells in (A) and (B) illustrates the necessity of fiber length-force dependence, while comparison of circular cells in (D) and (E) illustrates the necessity of mutual alignment of fibers. We compared the mean degree of parallel coupling as a function of time for all conditions (Fig. 5C). This analysis reveals that the force-fiber length dependence is an essential contributor to the emergence of an organized equilibrium in the myofibrillar network. In these simulations, the absence of the force-length dependence potentiated a less organized nascent myofibril network, whereas mutual alignment of nascent myofibrils enhanced parallel coupling. Eliminating the mutual alignment alone (grey dash-dot line) produces a minor effect in the stair cell as shown in the inset of Fig.
5C; however, previous reports suggest that the effect of mutual fiber alignment is seen at longer time scales [1], [21], [22], [23]. We asked how mutual fiber alignment would affect myofibrillar organization in the circular cell, whose homogeneous boundary curvature requires an internal, random symmetry break to achieve equilibrium. By eliminating the ability of fibers to cooperatively align in circular cells (grey empty-circle line, Fig. 5C), we show that the increase in parallel fiber coupling is solely dependent on the ability of the nascent myofibrils to mutually align. The importance of mutual alignment is illustrated by contrasting the steady state fiber organization in the circular cell with mutual alignment (Fig. 5D) and with no mutual alignment (Fig. 5E). In the case of no mutual fiber alignment, the fibers in the circular cell remain randomly organized, which contradicts the experimental results (Fig. 3L and Fig. 4N). In summary, our data suggest that the fiber length-force dependence is necessary to reproduce myofibrillogenesis in all cell shapes, while the importance of the mutual fiber alignment effect increases in cells with homogeneous boundary conditions. Muscle morphogenesis is a hierarchical, self-organizing process spanning from nanometer-scale conformational changes in proteins to bundled fibers sometimes a meter in length. We reasoned that boundary constraints are a physical signal that is conserved over all of these length scales and spatially organizes this broad range of coupled structures. Based on previous experimental evidence [17], [26], [27], [28], we hypothesized that geometric cues in the extracellular space help organize the assembly of the contractile apparatus in the cytoplasm and developed computational and experimental models to recapitulate these events.
We report that distinct cytoskeletal architectures arise from two temporally ordered organizational processes: the cooperative interaction between premyofibrils and focal adhesions, and the mutual alignment and parallel bundling of nascent myofibrils. Our model assumes that the assembly of FAs and the parallel bundling of actin-based fibers are coupled by a positive feedback loop and that the growing force on the FA potentiates its structural reinforcement, as suggested by previous experimental work [7], [8], [9]. By modeling the amount of bound and unbound integrin and by marking the maturation of the premyofibril to a nascent myofibril simply by increased contractility, we are able to predict the organization of the contractile apparatus in cardiac myocytes cultured on engineered substrates in a computationally efficient manner. To achieve this efficiency, we ignore the details of the molecular constituents of the assembly of myofibrils [2], [3], [4]. However, we were able to account for all the dominant factors in a coarse-grained manner, as indicated by the match between all our models and experiments. By experimenting with our assumptions in silico and comparing them to data from in vitro experiments, our results suggest that both assumptions, namely that the force the myofibrillar bundle exerts on the substrate is fiber length-dependent [10], [11], [12] and that adjacent myofibrils exert “torque” on one another to facilitate coupling [24], are necessary to describe how these myocytes build and organize their internal cytoskeleton relative to extracellular cues. Our computationally efficient model recapitulates the elegant protein choreography of sarcomere assembly, where an ensemble of proteins assembles repetitively along the length of the actin fiber template. Several models of cell cytoskeleton assembly and mechanics have been reported, and it is worthwhile to compare and contrast the efforts [14], [15], [29].
Our model is similar to the model by Novak and colleagues [16] in that we have used reaction kinetics to simulate the dynamic self-assembly and -organization of the cytoskeleton. These approaches differ from that of Deshpande et al. [14], [15], who report a solid mechanics model, and Paszek and colleagues [29], who use a mechanochemical model. All four of these models simulate the bound and free states of integrins in some form and also model the increasing stabilization, or maturation, of focal adhesions with increases in exerted force. The Deshpande and Paszek models offer detailed mechanical analysis of the cell-substrate interface, whereas our model, like the Novak model, does not. While the Paszek et al. model does not recapitulate stress fibers, our model, like the Novak and Deshpande models, does. Our model accounts for the specialized case of the maturing striated muscle cell by mimicking the transition of a premyofibril to the nascent myofibril, modeled by an increased ability to generate tension. The Paszek and Novak models omit the fiber length-force assumption that is critical to our model's ability to recapitulate our experimental data. Similarly, the Deshpande and Novak models explicitly do not account for mutual alignment of fibers, whereas ours does. Our model, like the Deshpande et al. and Paszek models, calculates the load exerted on the substrate by the contracting cell, where the Deshpande and Paszek models offer detailed descriptions of the solid mechanics at this interface. Both our model and that by Novak et al. are similar to larger scale models of myofibril adaptation in the left ventricle [30], in the assumption that there is a network of fibers where all integrins are connected to all other integrins. Each model, including the one reported herein, varies in approach, and further work is required to test all of these models against experimental data as we have attempted.
We were able to reproduce the results shown by Novak et al. [16], who predicted that with no fiber tension-length dependence and homogeneous boundary conditions the FAs would aggregate to the perimeter. However, as our in vitro work shows, a symmetry break occurs even with a homogeneous boundary condition, i.e. the circular cell; it is therefore necessary to introduce fiber tension-length dependence and mutual alignment of fibers in the in silico experiments. We can also utilize the model to explore the effect of cell boundary curvature, cell aspect ratios, and combinations of multiple cells on the myofibril distribution, as well as the relative importance of mutual fiber alignment in three dimensions. Additionally, it will be possible to integrate our model with adhesion dynamics models using the same methods as Paszek et al., to explore integrin clustering with contractile cells on substrates with different material properties [29]. This combination of a mechanical model with our myofibrillogenesis model could also allow for simulations of the rearrangement of the extracellular matrix by contractile cells. In summary, our study suggests that hierarchical organization of muscle requires localized cues that guide myofibrillogenesis. Specifically, a local symmetry break is required to potentiate the assembly and organization of the FA and actin complexes that are the template for myofibrillar organization. Such cytoskeletal symmetry-breaking has also been widely observed in other important biological behaviors such as cellular migration [11], cellular division [31], and formation of tissue sheets [32]. The symmetry-breaking can arise from a static, external cue, such as a geometric feature in the boundary conditions imposed on the cell, or from a dynamic internal cue, such as a local overlapping of long fibers.
The multiple time scales of these interacting events suggest a hierarchy of post-translational, self-organizational processes that are required for coupling cellular form and function. Materials and Methods Mathematical Description of the Model Model formulation. The model is based on the principles of reaction kinetics. This allows us to track densities (or concentrations) instead of individual molecular constructs. We assume that the focal adhesions are formed by the binding of integrins and that the integrins can exist in a free or a bound form. The bound integrins are connected to pre- or nascent myofibrils via an adsorption process (Eq. (5) & (6)). The myofibrils are force-bearing fibers and are approximated by a network which connects every bound integrin to every other bound integrin in the cell [16]. We model two types of myofibrils: pre-myofibrils and nascent myofibrils. Premyofibrils mature into more stable nascent myofibrils, which can produce more force [1], [13]. The integrins are represented by three fields: unbound integrins (Eq. (1)), bound integrins connected to pre-myofibrils (Eq. (2)), and bound integrins connected to nascent myofibrils (Eq. (3)). The total number of integrins is held constant throughout the simulation. Bound integrins form FAs, and the total density of bound integrins is defined as the sum of the two bound fields. In the unbound state, the integrins diffuse through the 2D cell. Diffusion is assumed to be faster than all other processes in the cell, and therefore it is approximated as instantaneous. The higher the force exerted on a FA, the more stable it is, i.e. at that point in space the rate of converting unbound integrin to bound integrin is increased [7], [8], [9]. Our hypothesis is that the force produced by each fiber is larger if the fiber is longer; however, the model includes the flexibility to test this hypothesis by making the force independent of fiber length in Eq. (4) (the L = OFF condition).
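The force-length toggle can be sketched as a toy function. The sketch below is illustrative, not the paper's code; the function name, the densities, and the relative lengths are assumptions chosen only to show how the L = ON and L = OFF conditions (cf. Eq. (4)) change the balance between long and short fibers.

```python
def fiber_tension(rho_i, rho_j, length, length_dependent=True):
    """Tension of a fiber joining two adhesion sites: proportional to the
    product of fiber connections and, when the toggle is on, to fiber length."""
    base = rho_i * rho_j               # product of fiber connections
    return base * (length if length_dependent else 1.0)

long_diag, short_axis = 1.41, 1.0      # relative fiber lengths (illustrative)

# L = ON: fibers along the longest diagonal pull hardest
print(fiber_tension(0.5, 0.5, long_diag) > fiber_tension(0.5, 0.5, short_axis))

# L = OFF: long and short fibers contribute equally
print(fiber_tension(0.5, 0.5, long_diag, length_dependent=False) ==
      fiber_tension(0.5, 0.5, short_axis, length_dependent=False))
```

With the toggle on, the long-diagonal fiber dominates; with it off, both comparisons collapse to the same tension, which is the in silico condition that produced bundles along both diagonals in Fig. 5B.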
The increase in force due to an increase in the number of fibers is bounded by the equilibrium of the adsorption process (Eq. (5) & (6)). We introduce a biasing potential field (Eq. (7)) acting on the free integrins, the net effect of which is to cluster focal adhesions together if each has fibers leading to the same distant point. This property can be turned off by setting the corresponding parameter in Eq. (1) to zero (the τ = OFF condition), or adjusted by varying the proximity of the effect. The net force on the integrins is translated to the substrate, and the traction stress vector on the substrate is therefore computed as described in [33]. The model is then expressed as a set of equations, two of which are ODEs, where all variables are defined in Table 1 (Eq. (1)–(7)). For convex cells, the integrations in Eq. (4) and (7) are performed over the whole cell cultured on an ECM island. For concave cells, the integration is performed only for pairs of points that are connected by fibers that are entirely contained within the ECM island. We can formally represent this concept by defining a 2D space of point pairs for each location (Eq. (8)). The system of model equations (Eq. (1)–(7)) is discretized and solved using MATLAB (Fig. 1H and S3). The details on discretizing the equations and the schematic representation of the MATLAB code can be found in the supplemental information (Text S1 and Fig. S3). Table 1. Model variables. Model output: Fibril distribution. To calculate the fiber distribution, we use the above assumption that the fibers are approximated by the network connecting all the integrins to each other.
To continue to operate with concentration fields instead of individual integrins, we calculate the total length of fiber passing through each small area in a specified direction: the pre-myofibril network (Eq. (9)), the nascent myofibril network (Eq. (10)), and the total myofibrillar network (Eq. (11)). The rest of the equations describing our method for calculating the properties of the fiber network are the same for all three types of fibers (pre-myofibril, nascent myofibril, and overall networks). Therefore, for brevity, we present them only once; a schematic representation of these values can be found in the supplemental information (Fig. S4). The fiber density at any point in the cell island is given by the length of fiber passing through the small area around the point of interest, normalized by the total length of fibers in the cell (Eq. (12)). Likewise, the density distribution of fibers in a small area around a given point going in a given direction is calculated by dividing the length of fiber in that direction by the total length of fiber in the small area around that point (Eq. (15)). In this model we assume that the network of fibers can be estimated by considering that all integrins are connected to all other integrins. In such a formulation, the fibers can be approximated as straight rods at any given lattice point. The orientational order parameter (OOP) characterizes the degree of order of a distribution of rods, and is zero for perfectly isotropic systems and one for completely aligned rods. We calculate the OOP and the director of the fiber distribution for each point in the cell [34]. The director is the main orientation of a distribution of rods. We perform these calculations by using the coefficients of the Fourier series of the fiber density distribution (Eq. (16)). In drawing the fiber distributions, we assume that at any given point the fibers approximately follow the main direction of the fiber distribution in the small area around that point (Eq. (18)).
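For a 2D distribution of rods, the OOP and director can be obtained from the first non-trivial Fourier coefficients of the angular density: rods are invariant under θ → θ + π, so the relevant harmonics are cos 2θ and sin 2θ. The sketch below is a generic illustration of this standard calculation, not the paper's MATLAB implementation.

```python
import numpy as np

def oop_and_director(theta, weights=None):
    """OOP and director from rod angles via the cos(2θ), sin(2θ) moments."""
    c = np.average(np.cos(2 * theta), weights=weights)
    s = np.average(np.sin(2 * theta), weights=weights)
    oop = np.hypot(c, s)               # 0: isotropic, 1: perfectly aligned
    director = 0.5 * np.arctan2(s, c)  # dominant fiber orientation (radians)
    return oop, director

# all rods along the diagonal: OOP is 1 and the director is pi/4
aligned = np.full(100, np.pi / 4)
oop_a, dir_a = oop_and_director(aligned)

# uniformly random rods: OOP is near 0
random = np.random.default_rng(1).uniform(0, np.pi, 100_000)
oop_r, _ = oop_and_director(random)

print(oop_a, dir_a, oop_r)
```

The optional `weights` argument allows the angular fiber-density distribution, rather than a raw list of angles, to supply the averaging, which matches the density-field formulation used in the model.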
We define the degree of local parallel coupling as the product of the normalized nascent myofibril density and their degree of order:(19) Model parameters. The parameters were fit using a stair-shaped myocyte (Fig. 2); a detailed description can be found in the supplemental information (Text S1). The parameter fit was validated using three other shapes: a square cell, a triangular cell, and a circular cell. Additionally, the hypotheses were tested by adjusting the appropriate parameters on the circular and stair-shaped cells. Parameter sensitivity studies are described in the supplemental information (Text S1). The model parameters were fitted using coarse-grained variables. The following parameters specify units in the simulations and the computational time step: . The total computational time for all shapes except the circle was , while the circle needed a longer time to achieve equilibrium with . Prior studies suggest that FA formation takes place on a time scale of seconds, followed by the assembly of the premyofibrils (~minutes) and the realignment of the nascent myofibrils (10–20 hours) [1], [21], [22], [23]. By construction, the rate constants in Eq. (2) are dictated by the formation time of the premyofibrils, while the rate constants in Eq. (3) are dictated by the formation time of the nascent myofibrils. The rest of the constants were fitted by matching the fiber distribution in the stair-shaped cell (Fig. 2): , . The following parameters were varied to test the hypotheses: or , or . Detailed Explanation of Equations Equation (1). This equation was originally written as:(20) However, it was simplified using the assumption that the diffusion of unbound integrin is much faster than the formation of bound integrin and the formation of pre-myofibrils and nascent myofibrils. To arrive at Eq. (1), we assume all the other terms are small compared to the diffusion term and that mass is conserved.
The mass conservation can be written as follows for each time step, where the total amount of integrin does not change:(21) Together we arrive at:(23) where is the total amount of all types of integrins in the cell, and is the ratio of the biased diffusion and free diffusion constants. Equations (2)–(3) and equation (20). The first term in Eq. (2) and in Eq. (20): at point , the rate of conversion of free integrins to premyofibril-connected bound integrins increases as the density of free integrins increases. The second term in Eq. (2) and in Eq. (20): a larger force promotes the conversion of unbound integrin to bound integrin connected to pre-myofibrils, or in other words makes the bound integrin more stable. The third term in Eq. (2) and Eq. (20): the more bound integrin there is at point , the higher the rate of its conversion to unbound integrin. The fourth term in Eq. (20) is the diffusion of the unbound integrin. In this term, is the biasing potential field that forces a distribution of free integrins that biases the fibers towards co-aligning with each other. The fourth term in Eq. (2) and the first term in Eq. (3): the more force on a focal adhesion, the higher the rate of conversion from pre-myofibrils to nascent myofibrils, and the bound integrins change from being connected to pre-myofibrils to being connected to nascent myofibrils. The fifth term in Eq. (2) and the second term in Eq. (3): the more nascent myofibrils there are, the higher the rate of conversion back to pre-myofibrils, i.e. the bound integrins change from being connected to nascent myofibrils to being connected to pre-myofibrils. Equations (4)–(6). In Eq. (4) the force was normalized such that has the same units as . Here we assume that the bound integrins that contribute to the force are the ones that are connected to the myofibrils. The fractions of bound integrins connected to the pre-myofibrils and nascent myofibrils are given by the Langmuir isotherms in Eq. (5) & (6), respectively.
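The fast-diffusion assumption means the unbound integrin field equilibrates in the biasing potential at every time step, with its overall amount fixed by the mass-conservation constraint. Since the exact symbols of Eq. (20)–(23) were lost in this reproduction, the sketch below only illustrates that consequence: a Boltzmann-like steady-state profile, with hypothetical names for the potential `phi` and the diffusion-constant ratio `gamma`.

```python
import numpy as np

def free_integrin_profile(phi, n_free_total, gamma):
    """Quasi-steady state of the biased diffusion term: the unbound
    integrin density relaxes to c(r) proportional to exp(-gamma * phi(r)),
    where phi is the biasing potential and gamma is the ratio of the
    biased and free diffusion constants. The profile is scaled so the
    total amount of free integrin (total minus bound, as in the
    mass-conservation relation Eq. (21)) is preserved."""
    w = np.exp(-gamma * np.asarray(phi, dtype=float))
    return n_free_total * w / w.sum()

# Sanity check: with a flat potential the free integrin is uniform,
# and the total amount is conserved.
c = free_integrin_profile(np.zeros(4), n_free_total=8.0, gamma=1.0)
```

With a non-flat `phi`, the same normalization shifts free integrin toward low-potential regions while keeping the total fixed, which is the clustering bias described in the text.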
The first integral term in Eq. (4) is the force contributed by the pre-myofibrils: is the relative strength of the pre-myofibrils and nascent myofibrils. The force at is calculated by a vector sum (integral) of the contributions from all other integrins. The force between and is given by the number of connections between those two points scaled by the distance between the points. The "number" of connections is basically . However, there is a limit on how many fibers can connect to any point ; this bounded quantity is given by the function . The second integral term of Eq. (4) is the same as the first, but it calculates the force contribution from the nascent myofibrils. In Eq. (5) and Eq. (6), is the inverse of the equilibrium constant of the "adsorption" process of bound integrins connecting to the fibers. The numerators of Eq. (5) and Eq. (6) are the pre-myofibril-bound integrin and nascent-myofibril-bound integrin, respectively. Also, note that . The speed at which the saturation value is reached depends on . In the limit, . Note that the density of bound integrins connected to the fibers would be given by , where is a constant specifying the total available connections between the bound integrin and fiber per unit area. This constant is not present in the equations as it is rolled into the dimensionalization of . Equation (7). The term in the curly braces in Eq. (7) is simply the distance between point and the line . The biasing potential on point is stronger if the fiber is "thicker," thus the amount of binding at each end-point is taken into account. The form of the biasing potential is such that it is low for points close to fiber and zero far away. The area around each fiber where the potential is non-zero is inversely proportional to . The total biasing potential on point is the sum of the contributions from each nascent myofibril. Equation (9)–(15).
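The saturation behavior described for Eq. (5) & (6) is the standard Langmuir-isotherm form. Because the symbols themselves were lost in this reproduction, the sketch below uses hypothetical names (`c_bound`, `k_inv`) and shows only the qualitative behavior the text describes: the fraction is zero with no bound integrin and saturates toward one.

```python
def langmuir_fraction(c_bound, k_inv):
    """Langmuir-isotherm saturation in the style of Eq. (5) & (6):
    the fraction of bound integrins connected to fibers grows with the
    bound-integrin density c_bound and saturates toward 1. k_inv, the
    inverse equilibrium constant of the "adsorption" process, sets how
    quickly saturation is reached."""
    return c_bound / (k_inv + c_bound)

f_empty = langmuir_fraction(0.0, 1.0)       # no bound integrin: fraction 0
f_half = langmuir_fraction(1.0, 1.0)        # c_bound equals k_inv: fraction 1/2
f_saturated = langmuir_fraction(1e6, 1.0)   # far above k_inv: near saturation
```

A smaller `k_inv` pushes the half-saturation point toward lower densities, which is why it bounds the force gain from adding fibers.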
While numerically it is easiest to operate with density fields, it is easier to understand this equation by first writing the expression for the number of fibers passing through an area about a point , in the direction . In the following equation, the area on one side of in the direction , up to the boundary of the cell, is , and in the other direction is . The number of integrins at each point is taken into account by assuming that the number of fibers passing between two points is the product of the number of fibers at each point. To get to Eq. (9)–(10), we must account for the length of the fiber passing through :(27) The rest of the equations are manipulations of the lengths of fibers in the space and in the whole cell Ω. Equation (16)–(18). To find the fiber directions and the degree of alignment, we consider the Fourier series of the fiber distribution :(28) where a and b are constants at each . The first term of the series is simply the average length of the fiber in the unit area, , and is given by:(29) The next two terms contain information about the main direction of the fibers and the degree of orientation, and are given in Eq. (16). The director is the direction such that rotating to that frame of reference leaves only the cosine term of the Fourier series. The orientational order parameter (OOP) is the Fourier coefficient in such a frame of reference:(30) We can use a trigonometric identity to show this and to solve for the coefficients. We then normalize the OOP to be between zero (isotropic) and one (perfect alignment). Note that physically , and therefore the outer square root in Eq. (18) can take either sign as long as we are consistent. Cardiac Myocyte Culture All experiments were conducted in accordance with the guidelines of the Institutional Animal Care and Use Committee of Harvard University.
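The OOP/director construction from the second-order Fourier coefficients can be sketched as follows (a minimal NumPy stand-in for the paper's MATLAB calculation; function names are hypothetical). Because rods at angles theta and theta + pi are equivalent, the angle is doubled before taking the Fourier coefficients:

```python
import numpy as np

def oop_and_director(theta, p):
    """Orientational order parameter and director of a rod distribution,
    from the second-order Fourier coefficients of the angular density
    p(theta) (the construction behind Eq. (16)-(18)). OOP = 0 for an
    isotropic distribution, OOP = 1 for perfectly aligned rods; the
    director is the main rod orientation."""
    theta = np.asarray(theta, dtype=float)
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                       # normalize the distribution
    c = np.sum(p * np.cos(2.0 * theta))   # cosine Fourier coefficient
    s = np.sum(p * np.sin(2.0 * theta))   # sine Fourier coefficient
    return np.hypot(c, s), 0.5 * np.arctan2(s, c)

# Perfectly aligned rods at 30 degrees: OOP = 1, director = pi/6.
oop_aligned, director = oop_and_director([np.pi / 6.0], [1.0])
# Uniform distribution over [0, pi): isotropic, OOP = 0.
oop_iso, _ = oop_and_director(np.linspace(0.0, np.pi, 90, endpoint=False),
                              np.ones(90))
```

Rotating into the director's frame of reference zeroes the sine coefficient, leaving only the cosine term, exactly as the text describes.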
Trypsinized ventricular tissue isolated from 2-day-old neonatal Sprague Dawley rats (Charles River Laboratories, Wilmington, MA) was serially dissociated into single cells by treating the ventricular tissue 4 times with a 0.1% solution of collagenase type II (Worthington Biochemical, Lakewood, NJ) for 2 minutes at 37°C. The myocyte fraction was purified by pre-plating the cells twice for 45 minutes each time. Purified myocytes were plated onto micropatterned substrates, prepared as described below, at a density of 100,000 cells per coverslip and kept in culture at 37°C with a 5% CO[2] atmosphere. The culture medium was M199 (Invitrogen, Carlsbad, CA) base supplemented with 10% heat-inactivated Fetal Bovine Serum, 10 mM HEPES, 20 mM glucose, 2 mM L-glutamine, 1.5 µM vitamin B-12, and 50 U/ml penicillin. The medium was changed 24 hours after plating to remove unattached and dead cells, and every 48 hours afterwards. After 72 hours in culture, most cardiac myocytes beat spontaneously and were used either for immunostaining or traction force measurements. Micropatterning Substrates Micropatterned substrates containing square, triangular, or circular adhesive islands were prepared for immunostaining and traction force microscopy, as follows. For immunostaining, the substrates were micropatterned using a microcontact printing procedure similar to that described by Tan et al. [35]. Micropatterned substrates for traction force experiments were created by adapting published techniques [10], [36]. Briefly, a thin layer of 10% by weight poly-N-isopropylacrylamide (PIPAAM) prepared in 1-butanol was spin-coated on a silicon wafer (Fig. S1a). A 50–75 µm layer of photoresist (SU-8, MicroChem Corp, Newton, MA) was spin-coated on top of the PIPAAM (Fig. S1b), treated with UV light through a photolithographic mask (Fig. S1c), and developed to obtain a complementary master that contained holes with the same size and shape as the desired adhesive islands (Fig. S1d).
The master was immersed in ice water to dissolve the PIPAAM, and the photoresist membrane was released from the wafer (Fig. S1e). Polyacrylamide gels (0.1% bis and 5% acrylamide; 90 µm thick) containing a 1:500 volume of carboxylate-modified fluorescent latex beads (0.2 µm Fluospheres, Molecular Probes, Eugene, OR) were fabricated on 25 mm coverslips. The Young's modulus of the gel was estimated to be ~3 kPa using atomic force microscopy as described previously [37]. The photoresist membrane was placed on the surface of the gel and 1 mM sulfo-SANPAH (sulfosuccinimidyl-6-(4′-azido-2′-nitrophenylamino)hexanoate; Pierce, Rockford, IL) in 50 mM HEPES was added through the holes in the photoresist membrane. The whole system was then placed under vacuum for 3 minutes to ensure that the sulfo-SANPAH reached the gel surface. The surface of the gel that contacted the sulfo-SANPAH was photoactivated by UV light exposure (Fig. S1f). After excess sulfo-SANPAH was removed, fibronectin (FN; 100 µg/mL) was added to the membrane and the gel was placed under vacuum for another 3 minutes to remove bubbles from the holes (Fig. S1g). The FN was allowed to react with the photoactivated gel for at least 4 hours at 37°C to create FN-coated adhesive islands. Excess FN was washed away with PBS. After removal of the photoresist membrane, the gel was immediately used for cell plating (Fig. S1h). Sarcomere Length Measurement In vitro studies show that the maturation of sarcomeres can be determined by measuring the distance between two adjacent α-actinin-rich spots that are supposed to be the precursors of the Z-band [13]. A 2–2.5 µm spacing between the sarcomeric α-actinin-rich spots indicates a matured sarcomere [2]. We used a fast Fourier transform (FFT) to calculate the spacing of sarcomeric α-actinin-rich spots in Fig. 4A, F, K. An intensity profile of the sarcomeric α-actinin stains was chosen along myofibrils spanning the long axis of the cells.
The profile was then detrended, weighted with a Hamming window and transformed into the spatial frequency domain by FFT. The spatial frequency at peak power of the first-order harmonic in the spatial frequency domain was identified and converted into the spatial domain to yield the sarcomere length. The results reveal that the sarcomere lengths are 2.4±0.1 µm, 2.2±0.1 µm, and 2.4±0.2 µm for the cell in Fig. 4A, F, K, respectively, indicating that they are mature sarcomeres. Traction Force Microscopy Data Measurement and Analysis Coverslips containing the beating myocytes were removed from the incubator, mounted onto a custom-made microscope stage containing a bath chamber, and continuously perfused with 37°C normal Tyrode's solution (1.192 g of HEPES, 0.901 g of glucose, 0.265 g of CaCl[2], 0.203 g of MgCl[2], 0.403 g of KCl, 7.889 g of NaCl and 0.040 g of NaH[2]PO[4] per liter of deionized water, reagents from Sigma, St. Louis, MO). Fluorescence images of gels containing fluorescent beads immediately beneath the contracting myocytes were taken at 28.1 Hz. The duration of image acquisition was long enough to include at least two complete cycles of contraction-relaxation of individual myocytes. Consecutive images were paired and the prior image was used as a reference to measure the change of the position of the fluorescence beads using the algorithm described previously [38]. This yielded the discretized displacement field between two consecutive frames. The calculated displacements were summed up for a whole systolic cycle to determine the overall 2D displacement field. The systolic traction field was calculated from the displacement field by adapting the algorithm previously developed [39], [40]. This algorithm solved the inverse of the Boussinesq solution from the displacement field on the surface of an elastic halfspace to obtain the traction field when the mechanical properties of the gel are known. 
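The spacing measurement just described (detrend, Hamming window, FFT, peak of the first-order harmonic) can be sketched as follows. This is a hedged stand-in, not the authors' analysis code, and it is validated here only on a synthetic cosine profile with a known 2.2 µm period:

```python
import numpy as np

def sarcomere_length(profile, pixel_size_um):
    """Estimate sarcomere length from an alpha-actinin intensity profile:
    remove the linear trend, weight with a Hamming window, transform to
    the spatial frequency domain, find the peak of the first-order
    harmonic (skipping the DC bin), and invert to a length in microns."""
    x = np.asarray(profile, dtype=float)
    i = np.arange(x.size)
    x = x - np.polyval(np.polyfit(i, x, 1), i)   # linear detrend
    x = x * np.hamming(x.size)                   # Hamming window
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=pixel_size_um)
    peak = np.argmax(power[1:]) + 1              # skip the DC bin
    return 1.0 / freqs[peak]

# Synthetic profile with 2.2 um periodicity, sampled at 0.1 um/pixel.
pos = np.arange(0.0, 44.0, 0.1)
length = sarcomere_length(np.cos(2.0 * np.pi * pos / 2.2), 0.1)
```

The detrending and windowing suppress the DC component and spectral leakage so the first-order harmonic dominates, which is what makes the peak-picking step reliable.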
The Poisson ratio of the gel was assumed to be close to 0.5 [10]. The interior of the cell was subdivided into 4×4 µm^2 squares to approximate the discretized localization of contractile forces. The ability of a particular solved traction field to explain the observed displacements was estimated with statistics. In addition to a zero-order Tikhonov regularization, a constraint that the forces should not become exceedingly large was used to minimize and stabilize the solution [40]. The L-curve criterion, as previously described [40], was used to determine the optimal balance between the data agreement and the regularization. Immunofluorescent Staining and Imaging Cardiac myocytes stained for actin (Alexa 488 Phalloidin, Molecular Probes), vinculin (clone hVIN-1, Sigma), and sarcomeric α-actinin (clone EA-53, Sigma) were fixed in 4% PFA with 0.01% Triton X-100 in PBS buffer at 37°C for 15 minutes and equilibrated to room temperature during incubation. Secondary staining was performed using tetramethylrhodamine- conjugated goat anti-mouse IgG (Alexa Fluor 594, Molecular Probes), and nuclei were visualized by staining with 4′,6′-diamidino-2- phenylindole hydrochloride (DAPI, Molecular Probes). All fluorescence and traction force microscopy was conducted with a Leica DMI 6000B microscope, using a 63× plan-apochromat objective. For traction force experiments, images were collected with a Cascade 512b enhanced CCD camera, while immunofluorescence images were collected with a CoolSnap HQ CCD camera (both from Roper Scientific, Tucson, AZ) controlled by IPLab Spectrum (BD Biosciences/Scanalytics, Rockville, MD). Supporting Information Schematic representation of micropatterning FN on polyacrylamide gel. After a thin layer of PIPAAM was spin-coated on a silicon wafer (a), SU-8 photoresist was spin-coated on top of the PIPAAM (b), treated with UV light through a photolithographic mask (c), and developed to obtain a complementary master (d). 
The master was immersed in ice water to release the photoresist membrane (e). The photoresist membrane was placed on the surface of polyacrylamide gels, and sulfo-SANPAH was added to the gel surface and photoactivated by UV light (f). FN solution was then added to react with the photoactivated gel (g). After removal of the photoresist membrane, the gel was immediately used for cell plating (h). (5.99 MB TIF) Comparison of different initial conditions in the stair shape cell. First column: Initial condition name. Second column: The fiber map at the first time step at which fibers exist (time listed next to each frame). Third column: Steady-state fiber distribution, with the grey scale showing the degree of parallel coupling. The steady states are the same for each initial condition. Fourth column: Map of initial density of free integrins. Fifth column: Map of initial density of bound integrins. (1.00 MB TIF) Schematic of model implementation algorithm. This schematic shows how each equation was implemented inside the MATLAB code. (0.18 MB TIF) Schematic showing fiber density and distribution definitions. A: All the fibers in the cell are represented as green lines. The total length of all fibers in the cell is labeled as S[cell]. We consider a point r, with an area δA that is vanishingly small for continuous systems. For discrete systems, δA is the area of the cell divided by the number of points in the lattice. B: The total length of fibers inside the area associated with point r is the total length of the red line segments. C: The length of fibers passing through a small area around point r in the direction of n is the total length of the blue lines. (0.30 MB TIF) The supporting text includes details on the implementation of the model in code and a discussion of parameter sensitivity. (0.04 MB PDF) Myofibril organization in an equilateral triangular muscle cell.
The positive feedback loop between assembly of contractile elements and FA maturation permits continued lengthening of the contractile fibers, with the longest dimension of the cell acting as the only limiting factor to fiber elongation. This is demonstrated in Video S1, where nascent myofibril bundling and orientation were initially random (t = 0); as time elapsed, they reoriented themselves and were stabilized along the cellular peripheries at t = 120 au. The color scale and lines represent the degree of parallel coupling and local orientation of myofibrils, respectively; color values are in arbitrary units. (1.14 MB AVI) FA organization in an equilateral triangular muscle cell. The color scale represents FA density and color values are in arbitrary units. Initially, FAs were randomly distributed, then redistributed to the cellular peripheries and accumulated at the cellular corners at steady state. This is because growth of FAs depends on the traction field, defined as the sum of all contractile element vectors connecting to a FA. Thus, FA density is expected to be larger at cellular peripheries with higher curvatures, where the overall alignment of contractile element vectors is also larger, leading to a higher net traction. (0.62 MB AVI) Myofibril organization in a square muscle cell. Nascent myofibrils were not aligned initially, but at equilibrium they realigned, with enhanced parallel bundling occurring along the diagonals and edges of the cell. Definitions of the color scale and lines are the same as Video S1. (1.52 MB AVI) FA organization in a square muscle cell. The initially homogeneously distributed FAs quickly redistributed to the cellular peripheries and were stabilized at the cellular corners at steady state. Definition of the color scale is the same as Video S2. (0.56 MB AVI) Myofibril organization in a circular muscle cell.
Parallel bundled nascent myofibrils first occurred at the center of the cell, then realigned adjacent fibers, and finally extended across the diameter of the cell to define a principal axis of contraction at t = 500 au. Note that definitions of the color scale and lines are the same as Video S1. (1.57 MB AVI) FA organization in a circular muscle cell. FAs in a circular cell first redistributed to the peripheries of the cell. Transient multi-pole patterns of FA then developed in accordance with the reorganized myofibril network, as shown in Video S5. These poles were seen to redistribute, merge, and finally converge to a bipolarized pattern with two opposing bands of FA along the cellular peripheries. Definition of the color scale is the same as Video S2. (2.54 MB AVI) DIC images of a beating triangular myocyte. Images were acquired at 21.7 frames per second and contain a full cycle of contraction and relaxation. Time is labeled at the top left corner. During contraction, the cellular body shortened toward the center of the cell, with obvious deformation of the nucleus. The cell was still an equilateral triangle at full contraction, with about 24% shortening along the cellular edges. (0.33 MB MPG) DIC images of a beating square myocyte. Images were acquired at the same frame rate as Video S7 and contain a full cycle of contraction and relaxation. At full contraction, the cell kept a square shape, with about 18% shortening along the diagonal. (0.12 MB MPG) Displacement maps of a beating triangular myocyte. The white arrows depict the frame-to-frame displacements of the fluorescent beads embedded in the gels. The displacements of the beads were not traced individually. Instead, the displacement map was discretized as suggested by Butler et al. [38]. The color scale represents the magnitude of the displacement vectors. For consistency, the ranges of the color scale are the same for Videos S9, S10, and S12.
During systole, the displacements are relatively larger at the cellular corners. Images contain a full contraction cycle. (0.49 MB MPG) Displacement maps of a beating square myocyte. As seen in the triangular cell (Video S9), larger displacements occurred at the cellular corners during systole. Definitions of the white arrows and color scale are the same as Video S9, and the images represent a full contraction cycle. (0.44 MB MPG) DIC images of a beating circular myocyte. Images were acquired at the same frame rate as Video S7 and contain a full cycle of systole and diastole. The cellular body shortened concentrically during systole, with about 8% shortening along the vertical axis and 13% along the horizontal axis. (0.22 MB MPG) Displacement maps of a beating circular myocyte. Definitions of the white arrows and color scale are the same as Video S9. The displacements at the opposing peripheries on the horizontal axis were relatively larger, and thus defined the principal axis of contraction. (0.51 MB MPG) We are grateful to the Center of Nanoscale Systems at Harvard University for the use of their cleanroom facilities. Author Contributions Conceived and designed the experiments: PLK CLG KKP. Performed the experiments: PLK NAG. Analyzed the data: PLK MAB. Contributed reagents/materials/analysis tools: SPS. Wrote the paper: AG PLK CLG KKP. Designed and implemented the model and in silico experiments: AG. Co-developed the math model: CLG, PLK, KKP. Provided important technological contributions to experimental techniques: WJA. Contributed important cell culture design and support: SPS.