content
stringlengths
86
994k
meta
stringlengths
288
619
the first resource for mathematics The associated classical orthogonal polynomials. (English) Zbl 0995.33001 Bustoz, Joaquin (ed.) et al., Special functions 2000: current perspective and future directions. Proceedings of the NATO Advanced Study Institute, Tempe, AZ, USA, May 29-June 9, 2000. Dordrecht: Kluwer Academic Publishers. NATO Sci. Ser. II, Math. Phys. Chem. 30, 255-279 (2001). The paper is concerned with polynomials that satisfy the three-term recurrence relation $\begin{array}{cc}\hfill {p}_{n+1}\left(x\right)& =\left({A}_{n+c}x+{B}_{n+c}\right){p}_{n}\left(x\right)-{C}_{n+c}{p}_{n-1}\left(x\right),\phantom{\rule{4pt}{0ex}}n\in {ℕ}_{0},\hfill \\ \hfill {p}_ {-1}\left(x\right)& =0,\phantom{\rule{1.em}{0ex}}{p}_{0}\left(x\right)=1,\hfill \end{array}$ where $c=0$ corresponds to a classical system, while $ce 0$ yields an associated system. Some examples where such polynomials occur are given in the first section. Next, the author considers the problem of finding measures of orthogonality for the polynomials; four methods (using moments, generating function, suitable special functions, and minimal soulutions, respectively) are reviewed and discussed. Finally, some particular cases are considered at some length, viz., the associated Askey-Wilson polynomials, the continuous $q$-Jacobi polynomials, the continuous $q$-ultraspherical polynomials, and the associated Wilson polynomials. There is a rather extensive bibliography. 33-02 Research monographs (special functions) 33C45 Orthogonal polynomials and functions of hypergeometric type 42C05 General theory of orthogonal functions and polynomials 33D45 Basic orthogonal polynomials and functions (Askey-Wilson polynomials, etc.)
{"url":"http://zbmath.org/?q=an:0995.33001","timestamp":"2014-04-18T11:24:29Z","content_type":null,"content_length":"23135","record_id":"<urn:uuid:857ba2ed-c18c-4c2a-a9e1-7ec5e5e2e9a6>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
angles degrees and seconds here is an easy way to do it, first to find out how many seconds in so many hours, multiply how many seconds there are in a minute (60) by how many minutes there are in a second (60) and then by how many hours you are dealing with (8), so the equation is (60 x 60)8 = 28,800 secs in 8 hours, then minutes are practicly the same but even more simple. Amount of seconds in a minute (60) multiplied by how many minutes (1) = 60
{"url":"http://mathhelpforum.com/trigonometry/40181-angles-degrees-seconds.html","timestamp":"2014-04-20T02:20:21Z","content_type":null,"content_length":"35440","record_id":"<urn:uuid:9acfbb42-8954-4ee6-b93e-597b06c61d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
THE AGE OF INTELLIGENT MACHINES | Chapter 3: Mathematical Roots THE AGE OF INTELLIGENT MACHINES | Chapter 3: Mathematical Roots September 24, 2001 In the world of formal mathematics, it is just as bad to be almost right as it is to be absolutely wrong. In a sense, that’s just what mathematics is. But that’s not good psychology. Marvin Minsky, The Society of Mind A mathematician is a machine for turning coffee into theorems. Paul Erdos The AI field was founded by mathematicians: John McCarthy, Alan Turing (1912-1954), Norbert Wiener (1894-1964), students of Alonzo Church, Claude Shannon, Marvin Minsky, and others. LISP, the primary language for academic research in artificial intelligence, was adapted from a mathematical notation designed by Stephen Kleene and Barkley Rosser, both students of Church.^1 Mathematics has often been viewed as the ultimate formalization of our thinking process, at least of the rational side of it. As I noted in the last chapter (and as was noted in several of the contributed articles at the end of the last chapter), the relationship of logic and the analytic process underlying mathematics to cognition has been debated through the ages by philosophers, many of whom were also mathematicians. The actual deployment of mathematical techniques to emulate at least certain aspects of human thought was not feasible until the electronic computer became available after World War II. However, the foundations of computation theory, along with the set theory on which computation theory is based, were established long before the potential of the electron to revolutionize applied mathematics was realized.^2 Mathematics has often been described as a branch of philosophy, the branch most concerned with logic.^3 It has only been in this century that the fields of mathematics and philosophy have split into largely distinct disciplines with few major figures doing important work in both areas. Bertrand Russell, having been a pivotal figure in the establishment of both modern set theory and logical positivism, was perhaps the last. Russell’s Paradox In the early part of this century Bertrand Russell, a young and as yet relatively unknown mathematician and philosopher, became increasingly occupied with a certain type of paradox and attempts to understand its implications. The resolution of the paradox had important implications for the subsequent development of the theory of computation. The following story illustrates Russell’s class of A judge is sentencing a man for a crime that he finds reprehensible and for which he wishes to mete out the most severe sentence he can think of. So he tells the convicted man not only that he is sentenced to die but also that because his crime was so offensive, the sentence is to be carried out in a unique way. “The sentence is to be carried out quickly,” the judge says. “It must be carried out no later than next Saturday. Furthermore, I want the sentence to be carried out in such a way that on the morning of your execution, you will not know for certain that you are going to be executed on that day. When we come for you, it will be a surprise.” When the judge finished describing his unusual sentence, the condemned man seemed surprisingly pleased and replied, “Well, that’s great, judge, I am greatly relieved.” To this the judge said, “I don’t understand, how can you be relieved? I have condemned you to be executed, I have asked that the sentence be carried out soon, but you will be unable to prepare yourself because on the morning that your sentence is to be carried out, you will not know for certain that you will die that day.” The convicted man said, “Well, your honor, in order for your sentence to be carried out, I could not be executed on Saturday.” “Why is that?” asked the judge. “Because since the sentence must be carried out by Saturday, if we actually get to Saturday, I will know for certain that I am to be executed on that day, and thus it would not be a surprise.” “I suppose you are right,” replied the judge. “You cannot be executed on Saturday. I still do not see why you are relieved.” “Well,” said the prisoner, “if we have definitely ruled out Saturday, then I cannot be executed on Friday either.” “Why is that?” asked the judge. “We have agreed that I definitely cannot be executed on Saturday. Therefore, Friday is the last day I can be executed. Thus, if Friday rolls around, I will definitely know that I am to be executed on that day, and therefore it would not be a surprise. So I cannot be executed on Friday.” “I see,” said the judge. “Thus, the last day I can be executed would be Thursday. But if Thursday rolls around, I would know I had to be executed on that day, and thus it would not be a surprise. So Thursday is out. By the same reasoning we can eliminate Wednesday, Tuesday, Monday, and today.” The judge scratched his head as the confident prisoner was led back to his prison cell. There is an epilogue to the story. On Thursday the prisoner was taken to be executed. And he was very surprised. So the judge’s orders were successfully carried out. If we analyze the paradox contained in the above story, we see that the conditions that the judge has set up result in a conclusion that none of the days meets, because, as the prisoner so adroitly points out, each one of them in turn would not be a surprise. But the conclusion itself changes the situation, and now surprise is possible again. This brings us back to the original situation in which the prisoner could (in theory) demonstrate that each day in turn would be impossible, and so on. The judge applies Alexander’s solution to this Gordian knot. A simpler example and the one that Russell actually struggled with is the following question about sets: Consider set A, which is defined to contain all sets that are not members of themselves. Does set A contain itself? As we consider this famous problem, our first realization is that there are only two possible answers: yes and no. We can therefore exhaustively consider all of the possible answers (this is not the case for many problems in mathematics). Let us try “yes.” If the answer is yes, then set A does contain itself. But if set A contains itself, then according to its defining condition set A would not belong to set A, and thus it does not belong to itself. Since the assumption that A contains itself led to a contradiction, it must have been wrong. If the answer is “no,” then set A does not contain itself. But again according to the defining condition, if set A does not belong to itself, then it would belong to set A. As with the story about the prisoner, we have contradictory propositions that imply one another. The assumption of no yields yes, which yields no, and so on. This type of paradox may seem amusing, but to Russell it threatened the very foundations of mathematics.^5 The definition of set A appears to be a perfectly reasonable one, and the question of whether set A belongs to itself also appears perfectly reasonable. Yet it cannot be answered. Without a resolution to this paradox the basic theory of mathematics was in question. To solve the problem, Russell invented a concept of a logical transformation as an operation that requires the equivalent of a quantum of time. Russell designed a set of logical operations in which a particular problem would be expressed as a “program” of operations to follow.^6 We then turn the program on and let it run. Each logical inference or other transformation is implemented in turn, and when the process is completed, we get our answer. If we apply this theoretical machine to the problem of set A, the logical operations are “executed” in turn. At a certain point the answer will be yes, but the program keeps running, and at a later point the answer becomes no. The program runs in an infinite loop, constantly alternating between yes and no. Russell then provides narrow and broad definitions of a set. In the narrow sense, a set has a definition that allows the construction of a program that can determine whether a given entity is a member of the set in a finite amount of time. According to this definition, set A (whose program produces an infinite loop) is not a true set, so the paradox is eliminated.^7 In the broad sense, the program defining the logical rules of set membership need not come to a halt in a finite amount of time, it just needs to come to an answer in a finite amount of time; it is allowed to change that answer as the program continues to run. According to this definition, set A is a proper set. The question of whether set A belongs to itself will be yes at one point in “time” and no at another point, and the program will alternate between the two. Thus, logical inferences are not implemented instantly, but rather one at a time with an orderly change of state between each. In our case, the answer is never yes and no at the same time. In the broad definition, set A is a particular type of set that is “unstable,” just as an electronic circuit can be unstable. Nonetheless, the contradiction is eliminated. Russell does not explicitly refer to time in his theory of types (of sets). He provides procedures for allowable transformations on propositions that can be considered meaningful within a logical system. This contrasts with the transformations generated by the logical system itself, which are used to determine the truth or falsity of propositions. Thus, according to Russell, certain propositions are neither true nor false and cannot be addressed by the axioms. In our discussion above, a proposition concerning an “unstable set” would not be meaningful. The theory is interesting in that we have one set of transformations generated by the axioms of a logical system determining truth or falsity and another set of transformations generated by the metarules of Russell’s theory of types determining meaningfulness. Russell’s transformations are algorithmic in nature, and the issues raised are similar to certain issues in computation theory that received attention after Turing devised his Turing machine. Though Russell did not explicitly link the theory of types to An unstoppable proposition: Russell’s Paradox computation theory (otherwise, we might be referring to a Russell Machine rather than a Turing Machine as a primary model of computation), Russell’s theory of types clearly provided a foundation for Turing’s later work. The lecture on logic delivered by the prisoner changed the situation. He has shown quite logically why it is not possible for him to be executed following the judge’s instructions. The judge then realizes that the prisoner’s belief that he cannot be executed makes it possible once again to execute him. Before the prisoner can formulate another lecture on logic (that is, before the “program” simulating this situation can alternate again to “impossible to execute”), the judge quickly implements his sentence. Principia Mathematica Russell expanded his theory to lay a new foundation for logic and the theory of sets in his first major work in mathematics, The Principles of Mathematics, published in 1903. He subsequently felt that all of mathematics should be recast in terms of his new theory of sets, since the concept of sets and their interactions is fundamental to all other mathematical disciplines. With the help of his friend and former tutor Alfred North Whitehead (1861-1947), he labored for nearly ten years to apply his new theory of sets and logic to all realms of mathematics. Russell reported that the effort nearly exhausted him, and even late in his life he felt that this had been the most intense work of his extremely prolific career.^8 It was probably his most influential. As it was, Whitehead and Russell did not manage to complete their reexamination. They nonetheless published their work in three volumes in 1910, 1912, and 1913 under the title Principia Mathematica. The work was truly revolutionary and provided a new methodology for all mathematics that was to follow. As significant as Principia was to mathematics in general, it was a pivotal development in terms of the foundations of the theory of computation that would be developed two decades later. Russell had created a theoretical model of a logic machine, which we now recognize as similar to a computer, particularly in its execution of logical operations in cycles.^9 Indeed, Turing’s subsequent theoretical model of a computer, the Turing Machine, has its roots directly in Russell’s theoretical logic engine.^10 Russell also created a concept of a logical programming language that is remarkably similar in many respects to one of the most recent programming languages, PROLOG, developed originally in France and now the basis for the Japanese Fifth Generation Computer project.^11 Principia was also influential on efforts by Allen Newell, Herbert Simon, and J.C. Shaw to develop theorem-proving machines in the 1950s.^12 Modern set theory, still based on Russell’s Principia, provides a foundation for much of mathematics. It is interesting to note that modern set theory is in turn based on Russell’s theoretical model of computation. Viewing things in this way, we could argue that mathematics is a branch of computation theory. What is particularly impressive about Russell’s achievement is that there were no computers even contemplated at the time he developed his theory. Russell needed to invent a theoretical model of a computer and programming to address a flaw in the foundation of logic itself. The Five Contributions of Turing We must know, we shall know. David Hilbert Turing was perhaps the pivotal figure in the development of the computer and its underlying theory. Building on the work of Bertrand Russell and Charles Babbage, he created his own theoretical model of a computer and in the process established modern computation theory.^13 He was also instrumental in the development of the first electronic computers, thus translating theory into reality. He developed specialized electronic computation engines to decode the German Enigma code, enabling the British to withstand the Nazi air force. He was also a major champion of the possibility of emulating human thought through computation.^14 He wrote (with his friend David Champernowne) the first chess-playing program and devised the only widely accepted test of machine intelligence (discussed from a variety of perspectives in several of the contributed articles at the end of chapter 2).^15 As a person, Turing was unconventional and extremely sensitive. He had a wide range of unusual interests ranging from the violin to morphogenesis (the differentiation of cells).^16 There were public reports of his homosexuality, which greatly disturbed him, and he died at the age of 41, a suspected suicide. The Enigma code By 1940 Hitler had the mainland of Europe in his grasp, and England was preparing for an anticipated invasion. The British government organized its best mathematicians and electrical engineers, including Alan Turing, with the mission of cracking the German military code. It was recognized that with the German air force enjoying superiority in the skies, failure to accomplish this mission was likely to doom the nation. In order not to be distracted from their task, the group lived in the tranquil pastures of Hertfordshire. The group was fortunate in having a working model of the German code machine Enigma, captured by the Polish Secret Service. Working with several hints gathered by British Intelligence, they were able to narrow the coding possibilities, but only slightly. Under Turing’s leadership, their strategy was to build an electromagnetic computer, use telephone relays to do an exhaustive search of all possible codes that the Enigma machine could produce, and apply these codes to intercepted messages. The strategy was a challenging one because an (electromagnetic) computer had never been built before. They named the machine Robinson, after a popular cartoonist who drew “Rube Goldberg” machines.^17 The group’s own Rube Goldberg succeeded brilliantly and provided the British with a transcription of nearly all significant Nazi messages. The German military subsequently made a modification to Enigma, adding two additional coding wheels, which greatly expanded the number of possible codes. To meet this new challenge, Turing and his fellow cryptoanalysts set to building a substantially faster machine called Colossus, built with two thousand electronic vacuum tubes.^18 Colossus and nine similar machines running in parallel did their job again and provided uninterrupted decoding of vital military intelligence to the Allied war effort. Colossus was regarded by the Turing team as the world’s first electronic digital computer, although unlike Harvard’s relay-based Mark I, it was not programmable. Of course, it did not need to be: it had only one job to do. Remarkably, the Germans relied on Enigma throughout the war. Refinements were added, but the world’s first computers built by Alan Turing and his associates were able to keep up with the increasing complexity. Use of this vital information required supreme acts of discipline on the part of the British government. Cities that were to be bombed by Nazi aircraft were not forewarned, lest preparations arouse German suspicions that their code had been cracked. The information provided by the Robinson and Colossus machines was used only with the greatest discretion, but the cracking of Enigma was enough to enable the Royal Air Force to win the Battle of Britain. Hilbert’s twenty-third problem and the Turing machine While many in England and elsewhere remain grateful to Turing for his contributions to the war effort, his greatest legacy is considered to be the establishment of the modern theory of computation. Yet his original goal was not the development of such a theory but rather to address one of the problems set down by his predecessor David Hilbert (1862-1943). The works of Hilbert, a German mathematician born in 1862, are still widely regarded as highly influential on the research goals of today’s mathematicians. He is credited with consolidating the accomplishments of nineteenth-century mathematics with such works as The Foundations of Geometry, published in 1899.^19 Perhaps of even greater significance, he set the agenda for twentieth-century mathematics as well with a list of the twenty-three most pressing unsolved problems that he presented at the 1900 International Mathematical Conference in Paris. In his address he predicted that these problems would occupy the attention of the next century of mathematicians. Hilbert appears to have been correct. The problems have been solved slowly and each solution has been regarded as a major event. Several that remain unsolved today are regarded by many mathematicians as the most important unsolved problems in mathematics. Hilbert’s twenty-third problem is whether or not an algorithm exists that can determine the truth or falsity of any logical proposition in a system of logic that is powerful enough to represent the natural numbers (numbers like 0, 1, 2, . . .). The statement of this problem was perhaps the first time that the concept of an algorithm was formally introduced into mathematics. The question remained unanswered until 1937. In that year Alan Turing presented a paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” (the Entscheidungsproblem is the decision or halting problem).^20 The paper presented his concept of a Turing Machine, a theoretical model of a computer, which continues to form the basis of modern computational theory. A Turing machine consists of two primary (theoretical) units: a “tape drive” and a “computation unit.” The tape drive has a tape of infinite length on which there can be written (and subsequently read) any series of two symbols: 0 (zero) and 1 (one). The computation unit contains a program that consists of a sequence of commands made up from the list of operations below. Each “command” consists of two specified operations, one to be followed if the last symbol read by the machine was a 0 and one if it had just read a 1. Below are the Turing machine operations: • Read tape • Move tape left • Move tape right • Write 0 on the tape • Write 1 on the tape • Jump to another command • Halt The Turing machine has persisted as our primary theoretical model of computation because of its combination of simplicity and power.^21 Its simplicity derives from its very short list of capabilities, listed above. As for its power, Turing was able to show that this extremely simple machine can compute anything that any machine can compute, no matter how complex. If a problem cannot be solved by a Turing machine, then it cannot be solved by any machine (and according to the Church-Turing thesis, not by a human being either).^22 An unexpected discovery that Turing reports in his paper is the concept of unsolvable problems, that is, problems that are well defined with unique answers that can be shown to exist, but that we can also show can never be computed by a Turing machine. The fact that there are problems that cannot be solved by this particular theoretical machine may not seem particularly startling until one considers the other conclusion of Turing’s paper, namely, that the Turing machine can model any machine. A machine is regarded as any process that follows fixed laws. According to Turing, if we regard the human brain as subject to natural law, then Turing’s unsolvable problems cannot be solved by either machine or human thought, which leaves us with the perplexing situation of being able to define a problem, to prove that a unique answer exists, and yet know that the answer can never be known.^23 The busy beaver One of the most interesting of the unsolvable problems, the busy beaver problem, was discovered by Tibor Rado.^24 It may be stated as follows. Each Turing machine has a certain number of states that its internal program can be in. This corresponds to the number of steps in its internal program. There are a number of different 4-state Turing machines that are possible, a certain number of 5-state machines possible, and so on. Given a positive integer n, we construct all the Turing machines that have n states. The number of such machines will always be finite. Next, we eliminate those n-state Turing machines that get into an infinite loop (that is, never halt). Finally, we select the machine (one that halts) that writes the largest number of 1s on its tape. The number of 1s that this Turing machine writes is called the busy beaver of n. Rado showed that there is no algorithm, that is, no Turing machine, that can compute this function for all ns. The crux of the problem is sorting out those n-state Turing machines that get into infinite loops. If we program a Turing machine to generate and simulate all possible n-state Turing machines, this simulator itself goes into an infinite loop when it attempts to simulate one of the n-state Turing Machines that gets into an infinite loop. The busy beaver function can be computed for some ns, and interestingly, it is also an unsolvable problem to separate those ns for which we can determine the busy beaver of n from those for which we cannot. Aside from its interest as an example of an unsolvable problem, the busy beaver function is also interesting in that it can be considered to be itself an intelligent function. More precisely stated, it is a function that requires increasing intelligence to compute for increasing arguments. As we increase n, the complexity of the processes needed to compute the busy beaver of n increases. With n = 6, we can deal with addition, and the busy beaver of 6 equals 35. In other words, addition is the most complex operation that a Turing machine with only 6 steps in its program is capable of performing. A 6-state Turing machine is not capable, for example, of multiplication. At 7, the Busy Beaver does learn to multiply, and the busy beaver of 7 equals 22,961. At 8 it can exponentiate, and the number of 1s that our eighth busy beaver writes on its tape is approximately 10^43. By the time we get to 10, we are dealing with a process more complex than exponentiation, and to represent the busy beaver of 10 we need an exotic notation in which we have a stack of exponents the height of which is determined by another stack of exponents, The Busy Beaver: an intelligent function? the height of which is determined by another stack of exponents, and so on. For the twelfth busy beaver we need an even more exotic notation. It is likely that human intelligence (in terms of the complexity of mathematical operations that can be understood) is surpassed well before the busy beaver gets to 100. Turing showed that there are as many unsolvable problems as solvable ones, the number of each being the lowest order of infinity, the so-called countable infinity (that is, the number of integers). Turing also showed that the problem of determining the truth or falsity of any logical proposition in an arbitrary system of logic powerful enough to represent the natural numbers was an unsolvable problem. The answer, therefore, to Hilbert’s twenty-third problem posed 37 years earlier is no; no algorithm exists that can determine the truth or falsity of any logical proposition in a system of logic that is powerful enough to represent the natural numbers. The second and third answers to Hilbert’s question Around the same time Alonzo Church, an American mathematician and philosopher, published Church’s theorem, which examined Hilbert’s question in the context of arithmetic. Church independently discovered the same answer as Turing.^25 Also working independently, a young Czech mathematician, Kurt Gödel (1906-1978), sought to reexamine an issue that was not entirely settled by Whitehead and Russell’s Principia Mathematica.^26 Whitehead and Russell had sought to determine axioms that could serve as the basis for all of mathematics, but they were unable to prove conclusively that an axiomatic system that can generate the natural numbers (theirs or any other) would not give rise to contradictions. It was assumed that such a proof would be found sooner or later, but Gödel stunned the mathematical world by proving that within such a system there inevitably exist propositions that can be neither proved nor disproved. Some have interpreted Gödel’s theorem to imply that such uncertain propositions are simply indeterminate, neither true nor false. This misses the depth of Gödel’s insight, however. Such propositions, according to Gödel, are not indeterminate; they are definitely either true or false. It is just that we can never determine which. Another implication is that in such axiomatic systems it is not certain that the axioms will not result in contradictions. Gödel’s incompleteness theorem has been called the most important in all mathematics, and its implications are still being debated.^27 One of the implications is that the answer to Hilbert’s twenty-third problem is again no. Taken together, the work of Turing, Church, and Gödel, all published in the 1930s, represented the first formal proofs that there are definite limits to what logic, mathematics, and computation can do. These discoveries strongly contradict Wittgenstein’s statement in the Tractatus that “if a question can be framed, it can be answered” (6.5). Hope and order versus distress and perplexity As a final comment on Turing’s, Church’s and Gödel’s perplexing insights into the nature of logic, it is interesting to note the stark contrast of the mood and attitude of the intellectual and cultural life in Europe and the United States at the turn of the century in comparison with that of several decades later.^28 Music had shifted from the romantic style of Brahms (1833-1897) and the early Mahler (1860-1911) to the atonality of Schoenberg (1874-1951). Art and poetry had made the same switch from romantic styles to the cubism and expressionism of Picasso (1881-1973) and the minimalism of Pound (1885-1972), Eliot (1888-1965), and Williams (1883-1963). It is not unusual for changes in attitude and world view to be reflected across the arts, but it is interesting to note that the shift was reflected in science and mathematics as well. In physics, mechanics had gone from a fully refined and consistent Newtonian model to a paradoxical quantum model. The most puzzling aspect of quantum mechanics and one of its essential features, the Heisenberg uncertainty principle, is its conclusion that there are profound limits to what human beings can know. In addition, the principle of duality, which had existed previously only in metaphysical doctrine, was now firmly established in the apparently contradictory wave-particle nature of light. Perhaps most disturbing, mathematics itself had gone from its turn-of-the-century emphasis on comprehensive formalisms that covered all of mathematics to a conclusion in the mid 1930s that logic had inherent and irremovable contradictions and that problems existed that could never be solved. Turing’s test Having established a theory of computation and having played a major role in the implementation of that theory, Turing’s interest ran to speculation on the ultimate power of this new technology. He was an enthusiast for the potential of machine intelligence and believed that it was feasible, although he appeared to have a reasonably realistic sense of how long such a development would take. In a paper entitled, “Computing Machinery and Intelligence,” published in the journal Mind in 1950, Turing describes a means for determining whether or not a machine is intelligent: the Turing test. It should be noted that a computer “passing” the Turing test is an indication that it is intelligent. The converse of this statement does not necessarily hold. A machine (or organism) unable to pass the test does not necessarily indicate a lack of intelligence. Some observers ascribe a high level of intelligence to certain species of animals such as dolphins and whales, but these animals are obviously in no position to pass the Turing test (they have no fingers, for one thing). To date no computer has come close to passing this test. The test basically involves the ability of the computer to imitate human performance. Narrower versions of the test have been proposed. For example, a computer chess program was recently able to “pass” a narrow version of the Turing test in that observers (again, observing through terminals) were unable to distinguish its playing from that of a skilled human chess player. Another variation-one involving the ability of a computer to compose stanzas of poetry-is provided in “A (Kind of) Turing Test” in chapter 9. Computers are now beginning to imitate human performance within certain well-defined domains. As Dan Dennett said in his article at the end of chapter 2, such narrow formulations of the Turing test fall far short of the original. I discuss the prospect of a computer passing the original Turing test in chapter 10. Turing expected that a computer would pass his test by the end of the century and remarked that by that time “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” Turing’s prediction contrasted with other statements around the same time that were much more optimistic in terms of time frame. (In 1965 Herbert Simon predicted that by 1985 “machines will be capable of doing any work that a man can do.”^29) Turing was as optimistic as anyone with regard to the power of cybernetic technology.^30 Yet he appears not to have underestimated (at least not as much as some other observers) the difficulty of the problems that remained to be solved. The Church-Turing thesis In addition to finding some profound limits to the powers of computation, Church and Turing also advanced, independently, an assertion that has become known as the Church-Turing thesis: if a problem that can be presented to a Turing machine is not solvable by one, then it is also not solvable by human thought. Others have restated this thesis to propose an essential equivalence between what a human can think or know and what is computable. The Church-Turing thesis can be viewed as a restatement in somewhat more precise terms of one of Wittgenstein’s primary theses in the Tractatus. I should point out that although the existence of Turing’s unsolvable problems is a mathematical certainty, the Church-Turing thesis is not a mathematical proposition at all. It is a conjecture that, in various disguises, is at the heart of some of our most profound debates in the philosophy of mind.^31 The Church-Turing thesis has both a negative and a positive side. The negative side is that problems that cannot be solved through any theoretical means of computation also cannot be solved by human thought. Accepting this thesis means that there are questions for which answers can be shown to exist but can never be found (and to date no human has ever solved an unsolvable problem). The positive side is that if humans can solve a problem or engage in some intelligent activity, then machines can ultimately be constructed to perform in the same way. This is a central thesis of the AI movement. Machines can be made to perform intelligent functions; intelligence is not the exclusive province of human thought. We can thus arrive at another possible definition of artificial intelligence: AI represents attempts to provide practical demonstrations of the Church-Turing thesis. In its strongest formulation, the Church-Turing thesis addresses issues of determinism and free will. Free will, which we can consider to be purposeful activity that is neither determined nor random, would appear to contradict the Church-Turing thesis. Nonetheless, the truth of the thesis is ultimately a matter of personal belief, and examples of intelligent behavior by machines are likely to influence one’s belief on at least the positive side of the question.
{"url":"http://www.kurzweilai.net/the-age-of-intelligent-machines-chapter-three-mathematical-roots","timestamp":"2014-04-21T02:04:58Z","content_type":null,"content_length":"58157","record_id":"<urn:uuid:11cd35c9-49fb-43ed-9128-ebe3d0ddcf9a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Why logarithmic functions instead of radical functions? December 4th 2008, 07:24 AM #1 Feb 2007 What is the relationship between logarithms and radicals? Here's my question: what is the relationship between radicals and logarithms? How are they related? (My original question is below, but the answer occurred to me after I posted it) I have a simple question--why is the inverse of an exponential function a logarithmic function instead of a radical function? If $F(x) = 5^x$, wouldn't the inverse be $x = 5^y$, and couldn't that be written $\sqrt[5]{x}$? Why must it be written $log_5x$ Last edited by shirkdeio; December 4th 2008 at 07:43 AM. Reason: new question Here's my question: what is the relationship between radicals and logarithms? How are they related? (My original question is below, but the answer occurred to me after I posted it) I have a simple question--why is the inverse of an exponential function a logarithmic function instead of a radical function? If $F(x) = 5^x$, wouldn't the inverse be $x = 5^y$, and couldn't that be written $\sqrt[5]{x}$? Why must it be written $log_5x$ If $y = 5^x$ then $x = log_5 \,y$ If $y = x^5$ then $x = \sqrt[5] y$ The inverse function of y=f(x) is $x = f^{-1}(y)$ You have to express x function of y If $y = 5^x$ then $x = log_5\,y$ $\sqrt[x]{y} = 5$ is also true but it is not in the form $x = f^{-1}(y)$ It is the same as per the example : If $y = 5x$ then $x = \frac{y}{5}$ $\frac{y}{x} = 5$ is also true but it is not in the form $x = f^{-1}(y)$ Thank you. I think I better understand logarithms now. I appreciate the help! December 4th 2008, 07:52 AM #2 MHF Contributor Nov 2008 December 4th 2008, 08:24 AM #3 Feb 2007 December 4th 2008, 08:38 AM #4 MHF Contributor Nov 2008 December 4th 2008, 10:57 AM #5 Feb 2007
{"url":"http://mathhelpforum.com/pre-calculus/63280-why-logarithmic-functions-instead-radical-functions.html","timestamp":"2014-04-20T00:47:13Z","content_type":null,"content_length":"48030","record_id":"<urn:uuid:49f092a8-1348-474a-a8aa-75d84d2882fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
A 7300 N Elevator Is To Be Given An Acceleration ... | Chegg.com A 7300 N elevator is to be given an acceleration of 0.100 g by connecting it to a cable of negligible weight wrapped around a turning cylindrical shaft. If the shaft's diameter can be no larger than 16.0 cm due to space limitations, what must be its minimum angular acceleration to provide the required acceleration of the elevator? (answer in rad/s^2)
{"url":"http://www.chegg.com/homework-help/questions-and-answers/7300-n-elevator-given-acceleration-0100-g-connecting-cable-negligible-weight-wrapped-aroun-q1616020","timestamp":"2014-04-25T06:55:15Z","content_type":null,"content_length":"21031","record_id":"<urn:uuid:1ae6ab5a-c100-4ccd-8e6f-8d1f7682647c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Methuen Precalculus Tutor Find a Methuen Precalculus Tutor ...Nevertheless, I don't pretend that concrete results aren't important. I give meaningful test-taking tips (which you won't find in any book) and teach with example problems constantly in view (which I make up on the spot to target the EXACT source of your confusion). But the bottom line is that, ... 47 Subjects: including precalculus, English, chemistry, reading ...I avoid formulas, instead teaching the how and why from previously learned knowledge. Excellent math skills and knowledge are not a bag of tricks easily learned; like athletic skills they take long hard hours of practice guided by good coaches. At the beginning of my calculus courses, I told my... 9 Subjects: including precalculus, calculus, geometry, algebra 1 ...For several years, I helped design jet engines for commercial aircraft. I have a Professional Engineering License and a patent issued by the US patent office. My tutoring experience includes three years tutoring at the Caridad Center of the Migrant Association of South Florida. 14 Subjects: including precalculus, physics, calculus, geometry ...I was a co-captain of the math team, and I did baseball and track. I took as many math and science classes as possible, including AP statistics and calculus. I got As in both classes and 4/5 on both the national tests. 29 Subjects: including precalculus, English, finance, economics ...I am experienced with Common Core Standards and MCAS preparation. My course teaching experience includes Algebra 1, Algebra 2, PreCalculus, Computer Programming and Robotics. I spent over 10 years working as a software engineer programming in C and C++. I have recently transitioned into the field of education. 22 Subjects: including precalculus, algebra 1, algebra 2, trigonometry
{"url":"http://www.purplemath.com/Methuen_Precalculus_tutors.php","timestamp":"2014-04-17T07:44:34Z","content_type":null,"content_length":"23869","record_id":"<urn:uuid:e41164fb-a3f2-464b-9975-268bfbc69fa4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: TOUCH DETERMINATION BY TOMOGRAPHIC RECONSTRUCTION Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A touch-sensitive apparatus comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points. Actual detection lines are defined between pairs of incoupling and outcoupling points to extend across a surface portion of the panel. The signals may be in the form of light, and objects touching the surface portion may affect the light via frustrated total internal reflection (FTIR). A signal generator is coupled to the incoupling points to generate the signals, and a signal detector is coupled to the out-coupling points to generate an output signal. A data processor operates on the output signal to enable identification of touching objects. The output signal is processed (40) to generate a set of data samples, which are indicative of detected energy for at least a subset of the actual detection lines. The set of data samples is processed (42) to generate a set of matched samples, which are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction. The set of matched samples is processed (44, 46) by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. A method of enabling touch determination based on an output signal from a touch-sensitive apparatus, the touch-sensitive apparatus comprising a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, at least one signal generator coupled to the incoupling points to generate the signals, and at least one signal detector coupled to the outcoupling points to generate the output signal, the method comprising: processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines, processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction, and processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. The method of claim 1, wherein the step of processing the output signal comprises: generating the data samples in a two-dimensional sample space, wherein each data sample is representative of an actual detection line and is defined by a signal value and two dimension values that define the location of the actual detection line on the surface portion. The method of claim 2, wherein the step of processing the set of data samples comprises: generating estimated signal values of the matched samples at predetermined locations in the two-dimensional sample space, wherein the predetermined locations correspond to the fictitious detection lines. The method of claim 3, wherein the estimated signal values are generated by interpolation based on the signal values of the data samples. The method of claim 4, wherein each estimated signal value is generated by interpolation of the signal values of neighboring data samples in the two-dimensional sample space. The method of claim 4, wherein the step of processing the set of data samples further comprises: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the set of data samples, and calculating the estimated signal values according to the interpolation function and based on the signal values of the data samples. The method of claim 6, further comprising: receiving exclusion data identifying one or more data samples to be excluded, wherein the step of processing the data samples comprises identifying the node corresponding to each data sample to be excluded, re-designing the predetermined interpolation function without each thus-identified node, and calculating the estimated signal values according to the re-designed interpolation function and based on the signal values of the data samples in the nodes of the re-designed interpolation function. The method of claim 3, wherein the step of generating estimated signal values comprises, for each matched sample: calculating a weighted contribution to the matched sample from each of at least a subset of the data samples, and aggregating the weighted contributions, wherein each weighted contribution is calculated as a function of the signal value of the data sample and a distance in the sample space between the matched sample and the data sample. The method of claim 3, wherein the matched samples are arranged as at least one of rows and columns in the two-dimensional sample space. The method of claim 9, wherein the matched samples are arranged with equidistant spacing within each of said at least one of rows and columns. The method of claim 1, wherein the step of processing the set of data samples comprises: operating a two-dimensional Fourier transformation algorithm designed for irregularly sampled data on the set of data samples to generate a set of Fourier coefficients arranged in a Cartesian grid, and generating the estimated signal values by operating a two-dimensional inverse Fourier transformation algorithm on the set of Fourier coefficients to generate the set of matched samples. The method of claim 3, wherein the step of processing the set of matched samples comprises: applying a one-dimensional high-pass filtering of the matched samples in the two-dimensional sample space to generate filtered samples, and processing the filtered samples to generate at set of back projection values indicative of said distribution. The method of claim 2, wherein the surface portion defines a sampling area in the two-dimensional sample space, and wherein, if the actual detection lines given by the geometric arrangement of incoupling and outcoupling points result in at least one contiguous region without data samples within the sampling area, the step of processing the set of data samples comprises: obtaining a predetermined set of estimated sampling points within the contiguous region, and for each estimated sampling point, identifying the location of a corresponding fictitious detection line on the surface portion; identifying, for each intersection point between the corresponding fictitious detection line and the actual detection lines or between the corresponding fictitious detection line and the fictitious detection lines for the set of matched samples, an intersection point value as the smallest signal value of all data samples corresponding to the actual detection lines associated with the intersection point; and calculating a signal value of the estimated sampling point as a function of the intersection point values. The method of claim 13, wherein the signal value of the estimated sampling point is given by the largest intersection point value. The method of claim 13, further comprising, for each estimated sampling point: identifying a number of local maxima in the intersection point values, and calculating the signal value of the estimated sampling point as a combination of the local maxima. The method of claim 2, wherein the dimension values comprise a rotation angle of the detection line in the plane of the panel, and a distance of the detection line in the plane of the panel from a predetermined origin. The method of claim 2, wherein the dimension values comprise an angular location of the incoupling or outcoupling point of the detection line, and a rotation angle of the detection line in the plane of the panel. The method of claim 17, wherein the standard geometry is a fan geometry, wherein the touch surface has a non-circular perimeter, and wherein the angular location is defined by an intersection between the actual detection line and a fictitious circle arranged to circumscribe the touch surface. The method of claim 1, wherein the standard geometry is one of a parallel geometry and a fan geometry. The method of claim 1, wherein said signals comprise one of electrical energy, light, magnetic energy, sonic energy and vibration energy. The method of claim 1, wherein the panel defines a touch surface and an opposite surface, wherein said at least one signal generator is arranged to provide light inside the panel, such that the light propagates from the incoupling points by internal reflection between the touch surface and the opposite surface to the outcoupling points for detection by said at least one signal detector, and wherein the touch-sensitive apparatus is configured such that the propagating light is locally attenuated by one or more objects touching the touch surface. A computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of claim 1. 23. A device for enabling touch determination based on an output signal of a touch-sensitive apparatus, said touch-sensitive apparatus comprising a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, means for generating the signals at the incoupling points, and means for generating the output signal based on detected signals at the outcoupling points, said device comprising: means for receiving the output signal; means for processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; means for processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction; and means for processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. A touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; means for generating the signals at the incoupling points; means for generating an output signal based on detected signals at the outcoupling points; and the device for enabling touch determination according to claim 23. 25. A touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; at least one signal generator coupled to the incoupling points to generate the signals; at least one signal detector coupled to the outcoupling points to generate an output signal; and a signal processor connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines, process the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction, and process the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. The present application claims the benefit of Swedish patent application No. 1050434-8, filed on May 3, 2010, Swedish patent application No. 1051062-6, filed on Oct. 11, 2010, and U.S. provisional application No. 61/282,973, filed on May 3, 2010, all of which are incorporated herein by reference. TECHNICAL FIELD [0002] The present invention relates to touch-sensitive panels and data processing techniques in relation to such panels. BACKGROUND ART [0003] To an increasing extent, touch-sensitive panels are being used for providing input data to computers, electronic measurement and test equipment, gaming devices, etc. The panel may be provided with a graphical user interface (GUI) for a user to interact with using e.g. a pointer, stylus or one or more fingers. The GUI may be fixed or dynamic. A fixed GUI may e.g. be in the form of printed matter placed over, under or inside the panel. A dynamic GUI can be provided by a display screen integrated with, or placed underneath, the panel or by an image being projected onto the panel by a There are numerous known techniques for providing touch sensitivity to the panel, e.g. by using cameras to capture light scattered off the point(s) of touch on the panel, or by incorporating resistive wire grids, capacitive sensors, strain gauges, etc into the panel. US2004/0252091 discloses an alternative technique which is based on frustrated total internal reflection (FTIR). Light sheets are coupled into a panel to propagate inside the panel by total internal reflection. When an object comes into contact with a surface of the panel, two or more light sheets will be locally attenuated at the point of touch. Arrays of light sensors are located around the perimeter of the panel to detect the received light for each light sheet. A coarse tomographic reconstruction of the light field across the panel surface is then created by geometrically back-tracing and triangulating all attenuations observed in the received light. This is stated to result in data regarding the position and size of each contact area. US2009/0153519 discloses a panel capable of conducting signals. A "tomograph" is positioned adjacent the panel with signal flow ports arrayed around the border of the panel at discrete locations. Signals (b) measured at the signal flow ports are tomographically processed to generate a two-dimensional representation (x) of the conductivity on the panel, whereby touching objects on the panel surface can be detected. The presented technique for tomographic reconstruction is based on a linear model of the tomographic system, Ax=b. The system matrix A is calculated at factory, and its pseudo inverse A is calculated using Truncated SVD algorithms and operated on the measured signals to yield the two-dimensional (2D) representation of the conductivity: x=A b. The suggested method is both demanding in the term of processing and lacks suppression of high frequency components, possibly leading to much noise in the 2D representation. US2009/0153519 also makes a general reference to Computer Tomography (CT). CT methods are well-known imaging methods which have been developed for medical purposes. CT methods employ digital geometry processing to reconstruct an image of the inside of an object based on a large series of projection measurements through the object. Various CT methods have been developed to enable efficient processing and/or precise image reconstruction, e.g. Filtered Back Projection, ART, SART, etc. Often, the projection measurements are carried out in accordance with a standard geometry which is given by the CT method. Clearly, it would be desirable to capitalize on existing CT methods for reconstructing the 2D distribution of an energy-related parameter (light, conductivity, etc) across a touch surface based on a set of projection measurements. SUMMARY [0008] It is an object of the invention to enable touch determination on a panel based on projection measurements by use of existing CT methods. Another objective is to provide a technique that enables determination of touch-related data at sufficient precision to discriminate between a plurality of objects in simultaneous contact with a touch surface. This and other objects, which may appear from the description below, are at least partly achieved by means of a method of enabling touch determination, a computer program product, a device for enabling touch determination, and a touch-sensitive apparatus according to the independent claims, embodiments thereof being defined by the dependent claims. A first aspect of the invention is a method of enabling touch determination based on an output signal from a touch-sensitive apparatus, which comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, at least one signal generator coupled to the incoupling points to generate the signals, and at least one signal detector coupled to the outcoupling points to generate the output signal. The method comprises: processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction; and processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. In one embodiment, the step of processing the output signal comprises: generating the data samples in a two-dimensional sample space, wherein each data sample is representative of an actual detection line and is defined by a signal value and two dimension values that define the location of the actual detection line on the surface portion. In one embodiment, the step of processing the set of data samples comprises: generating estimated signal values of the matched samples at predetermined locations in the two-dimensional sample space, wherein the predetermined locations correspond to the fictitious detection lines. The estimated signal values may be generated by interpolation based on the signal values of the data samples, and each estimated signal value may be generated by interpolation of the signal values of neighboring data samples in the two-dimensional sample space. In one embodiment, the step of processing the set of data samples further comprises: obtaining a predetermined two-dimensional interpolation function with nodes corresponding to the set of data samples, and calculating the estimated signal values according to the interpolation function and based on the signal values of the data samples. The method may further comprise a step of receiving exclusion data identifying one or more data samples to be excluded, wherein the step of processing the data samples comprises identifying the node corresponding to each data sample to be excluded, re-designing the predetermined interpolation function without each thus-identified node, and calculating the estimated signal values according to the re-designed interpolation scheme and based on the signal values of the data samples in the nodes of the re-designed interpolation scheme. In one embodiment, the step of generating estimated signal values comprises, for each matched sample: calculating a weighted contribution to the matched sample from at least a subset of the data samples, and aggregating the weighted contributions, wherein each weighted contribution is calculated as a function of the signal value of the data sample and a distance in the sample space between the matched sample and the data sample. In one embodiment, the matched samples are arranged as rows and/or columns in the two-dimensional sample space. The matched samples may be arranged with equidistant spacing within each of said rows and/or columns. In an alternative embodiment, the step of processing the set of data samples comprises: operating a two-dimensional Fourier transformation algorithm designed for irregularly sampled data on the set of data samples to generate a set of Fourier coefficients arranged in a Cartesian grid; and generating the estimated signal values by operating a two-dimensional inverse FFT algorithm on the set of Fourier coefficients to generate the set of matched samples. In one embodiment, the step of processing the set of matched samples comprises: applying a one-dimensional high-pass filtering of the matched samples in the two-dimensional sample space to generate filtered samples, and processing the filtered samples to generate at set of back projection values indicative of said distribution. In one embodiment, the surface portion defines a sampling area in the two-dimensional sample space, and the step of processing comprises, if the actual detection lines given by the geometric arrangement of incoupling and outcoupling points result in at least one contiguous region without data samples within the sampling area, the steps of: obtaining a predetermined set of estimated sampling points within the contiguous region, and, for each estimated sampling point, identifying the location of a corresponding fictitious detection line on the surface portion; identifying, for each intersection point between the corresponding fictitious detection line and the actual detection lines and/or between the corresponding fictitious detection line and the fictitious detection lines for the set of matched samples, an intersection point value as the smallest signal value of all data samples corresponding to the actual detection lines associated with the intersection point; and calculating a signal value of the estimated sampling point as a function of the intersection point values. In one implementation, the signal value of the estimated sampling point may be given by the largest intersection point value. In another implementation, the method further comprises, for each estimated sampling point: identifying a number of local maxima in the intersection point values, and calculating the signal value of the estimated sampling point as a combination of the local maxima. In one embodiment, the dimension values comprise a rotation angle of the detection line in the plane of the panel, and a distance of the detection line in the plane of the panel from a predetermined In another embodiment, the dimension values comprise an angular location of the incoupling or outcoupling point of the detection line, and a rotation angle of the detection line in the plane of the panel. In one implementation, the standard geometry is a fan geometry, the touch surface has a non-circular perimeter, and the angular location is defined by an intersection between the detection line and a fictitious circle arranged to circumscribe the touch surface. In one embodiment, the standard geometry is one of a parallel geometry and a fan geometry. In one embodiment, the signals comprise one of electrical energy, light, magnetic energy, sonic energy and vibration energy. In one embodiment, the panel defines a touch surface and an opposite surface, wherein said at least one signal generator is arranged to provide light inside the panel, such that the light propagates from the incoupling points by internal reflection between the touch surface and the opposite surface to the outcoupling points for detection by said at least one signal detector, and wherein the touch-sensitive apparatus is configured such that the propagating light is locally attenuated by one or more objects touching the touch surface. A second aspect of the invention is a computer program product comprising computer code which, when executed on a data-processing system, is adapted to carry out the method of the first aspect. A third aspect of the invention is a device for enabling touch determination based on an output signal of a touch-sensitive apparatus, which comprises a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points, means for generating the signals at the incoupling points, and means for generating the output signal based on detected signals at the outcoupling points. The device comprises: means for receiving the output signal; means for processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; means for processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction; and means for processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. A fourth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and outcoupling points; means for generating the signals at the incoupling points; means for generating an output signal based on detected signals at the outcoupling points; and the device for enabling touch determination according to the third aspect. A fifth aspect of the invention is a touch-sensitive apparatus, comprising: a panel configured to conduct signals from a plurality of peripheral incoupling points to a plurality of peripheral outcoupling points, thereby defining actual detection lines that extend across a surface portion of the panel between pairs of incoupling and out-coupling points; at least one signal generator coupled to the incoupling points to generate the signals; at least one signal detector coupled to the outcoupling points to generate an output signal; and a signal processor connected to receive the output signal and configured to: process the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines, process the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location on the surface portion that matches a standard geometry for tomographic reconstruction, and process the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the surface portion. Any one of the embodiments of the first aspect can be combined with the second to fifth aspects. Still other objectives, features, aspects and advantages of the present invention will appear from the following detailed description, from the attached claims as well as from the drawings. BRIEF DESCRIPTION OF DRAWINGS [0031] Embodiments of the invention will now be described in more detail with reference to the accompanying schematic drawings. FIG. 1 is a plan view of a touch-sensitive apparatus. FIG. 2A-2B are top plan views of a touch-sensitive apparatus with an interleaved and non-interleaved arrangement, respectively, of emitters and sensors. FIGS. 3A-3B are side and top plan views of touch-sensitive systems operating by frustrated total internal reflection (FTIR). FIG. 4A is a flow chart of a reconstruction method, and FIG. 4B is a block diagram of a device that implements the method of FIG. 4A. FIG. 5 illustrates the underlying principle of the Projection-Slice Theorem. FIG. 6 illustrates the applicability of filtering for back projection processing. FIG. 7 illustrates a parallel geometry used in tomographic reconstruction. FIGS. 8A-8H illustrate a starting point, intermediate results and final results of a back projection process using a parallel geometry. FIG. 9 illustrates a fan geometry used in tomographic reconstruction. FIGS. 10A-10C illustrate intermediate and final results of a back projection process using a fan geometry. FIG. 11 is graph of projection values collected in the fan geometry of FIG. 9 mapped to a sampling space for a parallel geometry. FIG. 12A is a graph of sampling points defined by interleaved arrangement in FIG. 2A, FIGS. 12B-12C illustrate discrepancies between detection lines in an interleaved arrangement and a fan geometry, and FIG. 12D is a graph of sampling points for the non-interleaved arrangement in FIG. 2B. FIG. 13 is a reference image mapped to an interleaved arrangement. FIG. 14A is a graph of a 2D interpolation function for an interleaved arrangement, FIG. 14B illustrates the generation of interpolation points using the interpolation function of FIG. 14A, FIG. 14C is an interpolated sinogram generated based on the reference image in FIG. 13, and FIG. 14D is a reconstructed attenuation field. FIGS. 15A-15D and FIGS. 16A-16B illustrate how the 2D interpolation function is updated when sampling points are removed from reconstruction. FIG. 17 is a reference image mapped to a non-interleaved arrangement. FIGS. 18A-18B illustrate a first variant for reconstruction in a non-interleaved arrangement. FIGS. 19A-19B illustrate a second variant for reconstruction in a non-interleaved arrangement. FIGS. 20A-20B illustrate a third variant for reconstruction in a non-interleaved arrangement. FIGS. 21A-21B illustrate a fourth variant for reconstruction in a non-interleaved arrangement. FIGS. 22A-22F illustrate a fifth variant for reconstruction in a non-interleaved arrangement. FIGS. 23A-23E illustrate a sixth variant for reconstruction in a non-interleaved arrangement. FIG. 24 is a flowchart of a process for filtered back projection. FIGS. 25A-25B illustrate a first variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry. FIGS. 26A-26B illustrate a second variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry. FIG. 27 illustrates the use of a circle for defining a two-dimensional sample space of a touch-sensitive apparatus. FIG. 28A-28D illustrate a third variant for reconstruction in an interleaved arrangement using a tomographic algorithm designed for fan geometry. FIG. 29 shows the reconstructed attenuation field in FIG. 22F after image enhancement processing. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0060] The present invention relates to techniques for enabling extraction of touch data for at least one object, and typically multiple objects, in contact with a touch surface of a touch-sensitive apparatus. The description starts out by presenting the underlying concept of such a touch-sensitive apparatus, especially an apparatus operating by frustrated total internal reflection (FTIR) of light. Then follows an example of an overall method for touch data extraction involving tomographic reconstruction. The description continues to generally explain and exemplify the theory of tomographic reconstruction and its use of standard geometries. Finally, different inventive aspects of applying techniques for tomographic reconstruction for touch determination are further explained and exemplified. Throughout the description, the same reference numerals are used to identify corresponding elements. 1. Touch-Sensitive Apparatus FIG. 1 illustrates a touch-sensitive apparatus 100 which is based on the concept of transmitting energy of some form across a touch surface 1, such that an object that is brought into close vicinity of, or in contact with, the touch surface 1 causes a local decrease in the transmitted energy. The touch-sensitive apparatus 100 includes an arrangement of emitters and sensors, which are distributed along the periphery of the touch surface. Each pair of an emitter and a sensor defines a detection line, which corresponds to the propagation path for an emitted signal from the emitter to the sensor. In FIG. 1, only one such detection line D is illustrated to extend from emitter 2 to sensor 3, although it should be understood that the arrangement typically defines a dense grid of intersecting detection lines, each corresponding to a signal being emitted by an emitter and detected by a sensor. Any object that touches the touch surface along the extent of the detection line D will thus decrease its energy, as measured by the sensor 3. The arrangement of sensors is electrically connected to a signal processor 10, which samples and processes an output signal from the arrangement. The output signal is indicative of the received energy at each sensor 3. As will be explained below, the signal processor 10 may be configured to process the output signal by a tomographic technique to recreate an image of the distribution of an energy-related parameter (for simplicity, referred to as "energy distribution" in the following) across the touch surface 1. The energy distribution may be further processed by the signal processor 10 or by a separate device (not shown) for touch determination, which may involve extraction of touch data, such as a position (e.g. x, y coordinates), a shape or an area of each touching object. In the example of FIG. 1, the touch-sensitive apparatus 100 also includes a controller 12 which is connected to selectively control the activation of the emitters 2. The signal processor 10 and the controller 12 may be configured as separate units, or they may be incorporated in a single unit. One or both of the signal processor 10 and the controller 12 may be at least partially implemented by software executed by a processing unit. The touch-sensitive apparatus 100 may be designed to be used with a display device or monitor, e.g. as described in the Background section. Generally, such a display device has a rectangular extent, and thus the touch-sensitive apparatus 100 (the touch surface 1) is also likely to be designed with a rectangular shape. Further, the emitters 2 and sensors 3 all have a fixed position around the perimeter of the touch surface 1. Thus, in contrast to a conventional tomographic apparatus used e.g. in the medical field, there will be no possibility of rotating the complete measurement system. As will be described in further detail below, this puts certain limitations on the use of standard tomographic techniques for recreating/reconstructing the energy distribution within the touch surface 1. In the following, embodiments of the invention will be described in relation to two main arrangements of emitters 2 and sensors 3. A first main arrangement, shown in FIG. 2A, is denoted "interleaved arrangement" and has emitters 2 and sensors 3 placed one after the other along the periphery of the touch surface 1. Thus, every emitter 2 is placed between two sensors 3. The distance between neighboring emitters 2 is the same along the periphery. The same applies for the distance between neighboring sensors 3. A second main arrangement, shown in FIG. 2B, is denoted "non-interleaved arrangement" and has merely sensors 3 on two adjacent sides (i.e. sides connected via a corner), and merely emitters 2 on its other sides. The interleaved arrangement may be preferable since it generates a more uniform distribution of detection lines. However, there are electro-optical aspects of the interleaved system that may favor the use of the non-interleaved arrangement. For example, the interleaved arrangement may require the emitters 2, which may be fed with high driving currents, to be located close to the sensors 3, which are configured to detect weak photo-currents. This may lead to undesired detection noise. The electrical connection to the emitters 2 and sensors 3 may also be somewhat demanding since the emitters 2 and sensors 3 are dispersed around the periphery of the touch surface 1. Thus, there may be reasons for using a non-interleaved arrangement instead of an interleaved arrangement, since the former obviates these potential obstacles. It is to be understood that there are many variations and blends of these two types of arrangements. For example, the sensor-sensor, sensor-emitter, emitter-emitter distance(s) may vary along the periphery, and/or the blending of emitters and sensors may be different, e.g. there may be two or more emitters/sensors between every emitter/sensor, etc. Although the following examples are given for the first and second main arrangements, specifically a rectangular touch surface with a 16:9 aspect ratio, this is merely for the purpose of illustration, and the concepts of the invention are applicable irrespective of aspect ratio, shape of the touch surface, and arrangement of emitters and sensors. In the embodiments shown herein, at least a subset of the emitters 2 may be arranged to emit energy in the shape of a beam or wave that diverges in the plane of the touch surface 1, and at least a subset of the sensors 3 may be arranged to receive energy over a wide range of angles (field of view). Alternatively or additionally, the individual emitter 2 may be configured to emit a set of separate beams that propagate to a number of sensors 3. In either embodiment, each emitter 2 transmits energy to a plurality of sensors 3, and each sensor 3 receives energy from a plurality of emitters 2. The touch-sensitive apparatus 100 may be configured to permit transmission of energy in one of many different forms. The emitted signals may thus be any radiation or wave energy that can travel in and across the touch surface 1 including, without limitation, light waves in the visible or infrared or ultraviolet spectral regions, electrical energy, electromagnetic or magnetic energy, or sonic and ultrasonic energy or vibration energy. In the following, an example embodiment based on propagation of light will be described. FIG. 3A is a side view of a touch-sensitive apparatus 100 which includes a light transmissive panel 4, one or more light emitters 2 (one shown) and one or more light sensors 3 (one shown). The panel 4 defines two opposite and generally parallel surfaces 5, 6 and may be planar or curved. A radiation propagation channel is provided between two boundary surfaces 5, 6 of the panel 4, wherein at least one of the boundary surfaces allows the propagating light to interact with a touching object 7. Typically, the light from the emitter(s) 2 propagates by total internal reflection (TIR) in the radiation propagation channel, and the sensors 3 are arranged at the periphery of the panel 4 to generate a respective measurement signal which is indicative of the energy of received light. As shown in FIG. 3A, the light may be coupled into and out of the panel 4 directly via the edge portion that connects the top and bottom surfaces 5, 6 of the panel 4. Alternatively, not shown, a separate coupling element (e.g. in the shape of a wedge) may be attached to the edge portion or to the top or bottom surface 5, 6 of the panel 4 to couple the light into and/or out of the panel 4. When the object 7 is brought sufficiently close to the boundary surface, part of the light may be scattered by the object 7, part of the light may be absorbed by the object 7, and part of the light may continue to propagate unaffected. Thus, when the object 7 touches a boundary surface of the panel (e.g. the top surface 5), the total internal reflection is frustrated and the energy of the transmitted light is decreased. This type of touch-sensitive apparatus is denoted "FTIR system" (FTIR--Frustrated Total Internal Reflection) in the following. The touch-sensitive apparatus 100 may be operated to measure the energy of the light transmitted through the panel 4 on a plurality of detection lines. This may, e.g., be done by activating a set of spaced-apart emitters 2 to generate a corresponding number of light sheets inside the panel 4, and by operating a set of sensors 3 to measure the transmitted energy of each light sheet. Such an embodiment is illustrated in FIG. 3B, where each emitter 2 generates a beam of light that expands in the plane of the panel 4 while propagating away from the emitter 2. Each beam propagates from one or more entry or incoupling points within an incoupling site on the panel 4. Arrays of light sensors 3 are located around the perimeter of the panel 4 to receive the light from the emitters 2 at a number of spaced-apart outcoupling points within an outcoupling site on the panel 4. It should be understood that the incoupling and outcoupling points merely refer to the position where the beam enters and leaves, respectively, the panel 4. Thus, one emitter/sensor may be optically coupled to a number of incoupling/outcoupling points. In the example of FIG. 3B, however, the detection lines D are defined by individual emitter-sensor pairs. The light sensors 3 collectively provide an output signal, which is received and sampled by the signal processor 10. The output signal contains a number of sub-signals, also denoted "projection signals", each representing the energy of light emitted by a certain light emitter 2 and received by a certain light sensor 3, i.e. the received energy on a certain detection line. Depending on implementation, the signal processor 10 may need to process the output signal for identification of the individual sub-signals. Irrespective of implementation, the signal processor 10 is able to obtain an ensemble of measurement values that contains information about the distribution of an energy-related parameter across the touch surface 1. The light emitters 2 can be any type of device capable of emitting light in a desired wavelength range, for example a diode laser, a VCSEL (vertical-cavity surface-emitting laser), or alternatively an LED (light-emitting diode), an incandescent lamp, a halogen lamp, etc. The light sensors 3 can be any type of device capable of detecting the energy of light emitted by the set of emitters, such as a photodetector, an optical detector, a photoresistor, a photovoltaic cell, a photodiode, a reverse-biased LED acting as photodiode, a charge-coupled device (CCD) etc. The emitters 2 may be activated in sequence, such that the received energy is measured by the sensors 3 for each light sheet separately. Alternatively, all or a subset of the emitters 2 may be activated concurrently, e.g. by modulating the emitters 2 such that the light energy measured by the sensors 3 can be separated into the sub-signals by a corresponding de-modulation. Reverting to the emitter-sensor-arrangements in FIG. 2, the spacing between neighboring emitters 2 and sensors 3 in the interleaved arrangement (FIG. 2A) and between neighboring emitters 2 and neighboring sensors 3, respectively, in the non-interleaved arrangement (FIG. 2B) is generally from about 1 mm to about 20 mm. For practical as well as resolution purposes, the spacing is generally in the 2-10 mm range. In a variant of the interleaved arrangement, the emitters 2 and sensors 3 may partially or wholly overlap, as seen in a plan view. This can be accomplished by placing the emitters 2 and sensors 3 on opposite sides of the panel 4, or in some equivalent optical arrangement. It is to be understood that FIG. 3 merely illustrates one example of an FTIR system. Further examples of FTIR systems are e.g. disclosed in U.S. Pat. No. 6,972,753, U.S. Pat. No. 7,432,893, US2006/ 0114237, US2007/0075648, WO2009/048365, WO2010/006882, WO2010/006883, WO2010/006884, WO2010/006885, WO2010/006886, and International application No. PCT/SE2009/051364, which are all incorporated herein by this reference. The inventive concept may be advantageously applied to such alternative FTIR systems as well. 2. Transmission As indicated in FIG. 3A, the light will not be blocked by the touching object 7. Thus, if two objects 7 happen to be placed after each other along a light path from an emitter 2 to a sensor 3, part of the light will interact with both objects 7. Provided that the light energy is sufficient, a remainder of the light will reach the sensor 3 and generate an output signal that allows both interactions (touch points) to be identified. Thus, in multi-touch FTIR systems, the transmitted light may carry information about a plurality of touches. In the following, T is the transmission for the j:th detection line, T is the trans-mission at a specific position along the detection line, and A is the relative attenuation at the same point. The total transmission (modeled) along a detection line is thus: T j = v T v = v ( 1 - A v ) ##EQU00001## The above equation is suitable for analyzing the attenuation caused by discrete objects on the touch surface, when the points are fairly large and separated by a distance. However, a more correct definition of attenuation through an attenuating medium may be used: I j = I 0 , j - ∫ a ( x ) x → T j = I j / I 0 , j = - ∫ a ( x ) x ##EQU00002## In this formulation, I represents the transmitted energy on detection line D with attenuating object(s), I ,j represents the transmitted energy on detection line D without attenuating objects, and a(x) is the attenuation coefficient along the detection line D . We also let the detection line interact with the touch surface along the entire extent of the detection line, i.e. the detection line is represented as a mathematical line. To facilitate the tomographic reconstruction as described in the following, the measurement values may be divided by a respective background value. By proper choice of background values, the measurement values are thereby converted into transmission values, which thus represent the fraction of the available light energy that has been measured on each of the detection lines. The theory of the Radon transform (see below) deals with line integrals, and it may therefore be proper to use the logarithm of the above expression: 3. Reconstruction and Touch Data Extraction FIG. 4A illustrates an embodiment of a method for reconstruction and touch data extraction in an FTIR system. The method involves a sequence of steps 40-48 that are repeatedly executed, typically by the signal processor 10 (FIGS. 1 and 3). In the context of this description, each sequence of steps 40-48 is denoted a sensing instance. Each sensing instance starts by a data collection step 40, in which measurement values are sampled from the light sensors 3 in the FTIR system, typically by sampling a value from each of the aforesaid sub-signals. The data collection results in one projection value for each detection line. It may be noted that the data may, but need not, be collected for all available detection lines in the FTIR system. The data collection step 40 may also include pre-processing of the measurement values, e.g. filtering for noise reduction, conversion of measurement values into transmission values (or equivalently, attenuation values), conversion into logarithmic values, etc. In a re-calculation step 42, the set of projection values are processed for generation of an updated set of projection values that represent fictitious detection lines with a location on the touch surface that matches a standard geometry for tomographic reconstruction. This step typically involves an interpolation among the projection values as located in a 2D sample space which is defined by two dimensions that represent the unique location of the detection lines on the touch surface. In this context, a "location" refers to the physical extent of the detection line on the touch surface as seen in a plan view. The re-calculation step 42 will be further explained and motivated in Chapter 6 below. In a filtering step 44, the updated set of projection values is subjected to a filtering aiming at increasing high spatial frequencies in relation to low spatial frequencies amongst the set of projection values. Thus, step 44 results in a filtered version of the updated set of projection values, denoted "filtered set" in the following. Typically, step 44 involves applying a suitable 1D filter kernel to the updated set of projection values. The use of filter kernels will be further explained and motivated in Chapter 4 below. In certain embodiments, it may be advantageous to apply a low-pass filter to the updated set of projection values before applying the 1D filter kernel. In a reconstruction step 46, an "attenuation field" across the touch surface is reconstructed by processing the filtered set in the 2D sample space. The attenuation field is a distribution of attenuation values across the touch surface (or a relevant part of the touch surface), i.e. an energy-related parameter. As used herein, "the attenuation field" and "attenuation values" may be given in terms of an absolute measure, such as light energy, or a relative measure, such as relative attenuation (e.g. the above-mentioned attenuation coefficient) or relative transmission. Step 46 may involve applying a back projection operator to the filtered set of projection values in the 2D sample space. Such an operator typically generates an individual attenuation value by calculating some form of weighted sum of selected projection values included the filtered set. The use of a back projection operator will be further explained and motivated in Chapters 4 and 5 below. The attenuation field may be reconstructed within one or more subareas of the touch surface. The subareas may be identified by analyzing intersections of detection lines across the touch surface, based on the above-mentioned projection signals. Such a technique for identifying subareas is further disclosed in Applicant's U.S. provisional patent application No. 61/272,665, which was filed on Oct. 19, 2009 and which is incorporated herein by this reference. In a subsequent extraction step 48, the reconstructed attenuation field is processed for identification of touch-related features and extraction of touch data. Any known technique may be used for isolating true (actual) touch points within the attenuation field. For example, ordinary blob detection and tracking techniques may be used for finding the actual touch points. In one embodiment, a threshold is first applied to the attenuation field, to remove noise. Any areas with attenuation values that exceed the threshold, may be further processed to find the center and shape by fitting for instance a two-dimensional second-order polynomial or a Gaussian bell shape to the attenuation values, or by finding the ellipse of inertia of the attenuation values. There are also numerous other techniques as is well known in the art, such as clustering algorithms, edge detection algorithms, etc. Any available touch data may be extracted, including but not limited to x,y coordinates, areas, shapes and/or pressure of the touch points. After step 48, the extracted touch data is output, and the process returns to the data collection step 40. It is to be understood that one or more of steps 40-48 may be effected concurrently. For example, the data collection step 40 of a subsequent sensing instance may be initiated concurrently with any of steps 42-48. In can also be noted that the recalculation and filtering steps 42, 44 can be merged into one single step, since these steps generally involve linear operations. The touch data extraction process is typically executed by a data processing device (cf. signal processor 10 in FIGS. 1 and 3) which is connected to sample the measurement values from the light sensors 3 in the FTIR system. FIG. 4B shows an example of such a data processing device 10 for executing the process in FIG. 4A. In the illustrated example, the device 10 includes an input 400 for receiving the output signal. The device 10 further includes a data collection element (or means) 402 for processing the output signal to generate the above-mentioned set of projection values, and a recalculation element (or means) 404 for generating the above-mentioned updated set of projection values. There is also provided a filtering element (or means) 406 for generating the above-mentioned filtered set. The device 10 further includes a reconstruction element (or means) 408 for generating the reconstructed attenuation field by processing the filtered set, and an output 410 for outputting the reconstructed attenuation field. In the example of FIG. 4B, the actual extraction of touch data is carried out by a separate device 10' which is connected to receive the attenuation field from the data processing device 10. The data processing device 10 may be implemented by special-purpose software (or firmware) run on one or more general-purpose or special-purpose computing devices. In this context, it is to be understood that each "element" or "means" of such a computing device refers to a conceptual equivalent of a method step; there is not always a one-to-one correspondence between elements/means and particular pieces of hardware or software routines. One piece of hardware sometimes comprises different means/elements. For example, a processing unit serves as one element/means when executing one instruction, but serves as another element/means when executing another instruction. In addition, one element/means may be implemented by one instruction in some cases, but by a plurality of instructions in some other cases. Such a software controlled computing device may include one or more processing units, e.g. a CPU ("Central Processing Unit"), a DSP ("Digital Signal Processor"), an ASIC ("Application-Specific Integrated Circuit"), discrete analog and/or digital components, or some other programmable logical device, such as an FPGA ("Field Programmable Gate Array"). The data processing device 10 may further include a system memory and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory may include computer storage media in the form of volatile and/or non-volatile memory such as read only memory (ROM), random access memory (RAM) and flash memory. The special-purpose software may be stored in the system memory, or on other removable/non-removable volatile/non-volatile computer storage media which is included in or accessible to the computing device, such as magnetic media, optical media, flash memory cards, digital tape, solid state RAM, solid state ROM, etc. The data processing device 10 may include one or more communication interfaces, such as a serial interface, a parallel interface, a USB interface, a wireless interface, a network adapter, etc, as well as one or more data acquisition devices, such as an A/D converter. The special-purpose software may be provided to the data processing device 10 on any suitable computer-readable medium, including a record medium, a read-only memory, or an electrical carrier signal. 4. Tomographic Techniques Tomographic reconstruction, which is well-known per se, may be based on the mathematics describing the Radon transform and its inverse. The following theoretical discussion is limited to the 2D Radon transform. The general concept of tomography is to do imaging of a medium by measuring line integrals through the medium for a large set of angles and positions. The line integrals are measured through the image plane. To find the inverse, i.e. the original image, many algorithms use the so-called Projection-Slice Theorem. Several efficient algorithms have been developed for tomographic reconstruction, e.g. Filtered Back Projection (FBP), FFT-based algorithms, ART (Algebraic Reconstruction Technique), SART (Simultaneous Algebraic Reconstruction Technique), etc. Filtered Back Projection is a widely used algorithm, and there are many variants and extensions thereof. Below, a brief outline of the underlying mathematics for FBP is given, for the sole purpose of facilitating the following discussion about the inventive concept and its merits. 4.1 Projection-Slice Theorem Many tomographic reconstruction techniques make use of a mathematical theorem called Projection-Slice Theorem. This Theorem states that given a two-dimensional function f(x, y), the one- and two-dimensional Fourier transforms and , a projection operator that projects a two-dimensional (2D) function onto a one-dimensional (1D) line, and a slice operator S that extracts a central slice of a function, the following calculations are equal: This relation is illustrated in FIG. 5. The right-hand side of the equation above essentially extracts a 1D line of the 2D Fourier transform of the function f(x, y). The line passes through the origin of the 2D Fourier plane, as shown in the right-hand part of FIG. 5. The left-hand side of the equation starts by projecting (i.e. integrating along 1D lines in the projection direction p) the 2D function onto a 1D line (orthogonal to the projection direction p), which forms a "projection" that is made up of the projection values for all the different detection lines extending in the projection direction p. Thus, taking a 1D Fourier transform of the projection gives the same result as taking a slice from the 2D Fourier transform of the function f(x, y). In the context of the present disclosure, the function f(x, y) corresponds to the attenuation coefficient field a(x) (generally denoted "attenuation field" herein) to be reconstructed. 4.2 Radon Transform First, it can be noted that the attenuation vanishes outside the touch surface. For the following mathematical discussion, we define a circular disc that circumscribes the touch surface, Ω ={x: |x|≦r}, with the attenuation field set to zero outside of this disc. Further, the projection value for a given detection line is given by: ( θ , s ) = ( a ) ( θ , s ) = ∫ s = x θ a ( x ) x ##EQU00003## Here, we let θ=(cos φ, sin φ) be a unit vector denoting the direction normal to the detection line, and s is the shortest distance (with sign) from the detection line to the origin (taken as the centre of the screen, cf. FIG. 5).). Note that θ is perpendicular to the above-mentioned projection direction vector, p. This means that we can denote g (θ, s) by g(φ, s) since the latter notation more clearly indicates that g is a function of two variables and not a function of one scalar and one arbitrary vector. Thus, the projection value for a detection line could be expressed as g(φ, s), i.e. as a function of the angle of the detection line to a reference direction, and the distance of the detection line to an origin. We let the angle span the range 0≦φ<π, and since the attenuation field has support in Ω , it is sufficient to consider s in the interval -r≦s≦r. The set of projections collected for different angles and distances may be stacked together to form a "sinogram". Our goal is now to reconstruct the attenuation field a(x) given the measured Radon transform, g=a. The Radon transform operator is not invertible in the general sense. To be able to find a stable inverse, we need to impose restrictions on the variations of the attenuation One should note that the Radon transform is the same as the above-mentioned projection operator in the Projection-Slice Theorem. Hence, taking the 1D Fourier transform of g(φ, s) with respect to the s variable results in central slices from the 2D Fourier transform of the attenuation field a(x). 4.3 Continuous vs. Discrete Tomography The foregoing sections 4.1-4.2 describe the mathematics behind tomographic reconstruction using continuous functions and operators. However, in a real world system, the measurement data represents a discrete sampling of functions, which calls for modifications of the algorithms. For a thorough description of such modifications, we refer to the mathematical literature, e.g. "The Mathematics of Computerized Tomography" by Natterer, and "Principles of Computerized Tomographic Imaging" by Kak and Slaney. One important modification is a need for a filtering step when operating on discretely sampled functions. The need for filtering can intuitively be understood by considering the Projection-Slice Theorem in a system with discrete sampling points and angles, i.e. a finite set of detection lines. According to this Theorem, for each angle φ, we take the 1D discrete Fourier transform of g(φ, s) with respect to the s variable and put the result into the Fourier plane as slices through the origin of the 2D Fourier transform of the original function a(x). This is illustrated in the left-hand part of FIG. 6 for a single projection. When we add information from several different projections, the density of sampling points will be much higher near the origin of the 2D Fourier transform plane. Since the information density is much higher at low frequencies, an unfiltered back projection will yield a blurring from the low frequency components. To compensate for the non-uniform distribution of sampling points in the 2D Fourier transform plane, we may increase the amount of information about the high spatial frequencies. This can be achieved by filtering, which can be expressed as a multiplication/weighting of the data points in the 2D Fourier transform plane. This is exemplified in the right-hand part of FIG. 6, where the amplitude of the high spatial frequencies are increased and the amplitude of the low frequency components is decreased. This multiplication in the 2D Fourier transform plane can alternatively be expressed as a convolution in the spatial domain, i.e. with respect to the s variable, using the inverse Fourier transform of the weighting function. The multiplication/weighting function in the 2D Fourier transform plane is rotationally symmetric. Thus, we can make use of the Projection-Slice Theorem to get the corresponding 1D convolution kernel in the projection domain, i.e. the kernel we should use on the projections gathered at specific angles. This also means that the convolution kernel will be the same for all projection angles. 4.4 Filtering and Back Projection As explained in the foregoing section, the sinogram data is first filtered and then back-projected. The filtering can be done by multiplication with a filter W in the Fourier domain. There are also efficient ways of implementing the filtering as a convolution by a filter W in the spatial domain. In one embodiment, the filtering is done on the s parameter only, and may be described by the following expression: is a back projection operator defined as: ( # v ) ( x ) = 2 ∫ 0 π v ( θ , x θ ) Φ , ##EQU00004## and W . The idea is to choose the w (s)-filter such that W (X)≈δ(x). This is typically accomplished by working in the Fourier domain, taking (ξ) as a step function supported in a circular disc of radius b, and letting b→∞. The corresponding filter in the spatial domain is w b ( s ) = ( b 2 π ) 2 ( sin c ( bs ) - 1 2 ( sin c ( bs 2 ) ) 2 ) , ##EQU00005## with continuous extension across the singularity at s=0. In the literature, several variants of the filter can be found, e.g. Ram-Lak, Shepp-Logan, Cosine, Hann, and Hamming 5. Standard Geometries for Tomographic Processing Tomographic processing is generally based on standard geometries. This means that the mathematical algorithms presume a specific geometric arrangement of the detection lines in order to attain a desired precision and/or processing efficiency. The geometric arrangement may be selected to enable a definition of the projection values in a 2D sample space, inter alia to enable the above-mentioned filtering in one of the dimensions of the sample space before the back projection. In conventional tomography, the measurement system (i.e. the location of the incoupling points and/or outcoupling points) is controlled or set to yield the desired geometric arrangement of detection lines. Below follows a brief presentation of the two major standard geometries used in conventional tomography e.g. in the medical field. 5.1 Parallel Geometry The parallel geometry is exemplified in FIG. 7. Here, the system measures projection values of a set of detection lines for a given angle φ . In FIG. 7, the set of detection lines D are indicated by dashed arrows, and the resulting projection is represented by the function g(φ , s). The measurement system is then rotated slightly around the origin of the x,y coordinate system in FIG. 7, to collect projection values for a new set of detection lines at this new rotation angle. As shown by the dashed arrows, all detection lines are parallel to each other for each rotation angle. The system generally measures projection values (line integrals) for angles spanning the range 0≦φ<π. When all the projections are collected, they can be arranged side by side in a data structure to form a sinogram. The sinogram is generally given in a 2D sample space defined by dimensions that uniquely assign each projection value to a specific detection line. In the case of a parallel geometry, the sample space is typically defined by the angle parameter φ and the distance parameter s. Below, the use of a parallel geometry in tomographic processing is further exemplified in relation to a known attenuation field shown in FIG. 8A, in which the right-end bar indicates the coding of gray levels to attenuation strength (%). FIG. 8B is a graph of the projection values as a function of distance s for the projection obtained at φ=π/6 in the attenuation field of FIG. 8A. FIG. 8C illustrates the sinogram formed by all projections collected from the attenuation field, where the different projections are arranged as vertical sequences of values. For reference, the projection shown in FIG. 8B is marked as a dashed line in FIG. 8C. The filtering step, i.e. convolution, is now done with respect to the s variable, i.e. in the vertical direction in FIG. 8C. As mentioned above, there are many different filter kernels that may be used in the filtering. FIG. 8D illustrates the central part of a discrete filter kernel w that is used in the following examples. As shown, the absolute magnitude of the filter values quickly drop off from the center of the kernel (k=0). In many practical implementations, it is possible to use only the most central parts of the filter kernel, thereby decreasing the number of processing operations in the filtering step. Since the filtering step is a convolution, it may be computationally more efficient to perform the filtering step in the Fourier domain. For each column of values in the φ-s-plane, a discrete 1D Fast Fourier transform is computed. Then, the thus-transformed values are multiplied by the 1D Fourier transform of the filter kernel. The filtered sinogram is then obtained by taking the inverse Fourier transform of the result. This technique can reduce the complexity from O(n ) down to O(nlog (n)) of the filtering step for each φ, where n is the number of sample points (projection values) with respect to the s variable. FIG. 8E shows the filtered sinogram that is obtained by operating the filter kernel in FIG. 8D on the sinogram in FIG. 8C. The next step is to apply the back projection operator. Fundamental to the back projection operator is that a single position in the attenuation field is represented by a sine function in the sinogram. Thus, to reconstruct each individual attenuation value in the attenuation field, the back projection operator integrates the values of the filtered sinogram along the corresponding sine function. To illustrate this concept, FIG. 8E shows three sine functions P1-P3 that correspond to three different positions in the attenuation field of FIG. 8A. Since the location of a reconstructed attenuation value will not coincide exactly with all of the relevant detection lines, it may be necessary to perform linear interpolation with respect to the s variable where the sine curve crosses between two projection values. Another approach, which is less computationally effective, is to compute the filtered values at the crossing points by applying individual filtering kernels. The interpolation is exemplified in FIG. 8F, which is an enlarged view of FIG. 8E and in which x indicates the different filtered projection values of the filtered sinogram. The contribution to the back projection value for the sine curve P1 from the illustrated small part of the φ-s-plane becomes: )(w*g).- sub.27,175+z (- w*g) [28,174] [0125] The weights z in the linear interpolation is given by the normalized distance from the sine curve to the projection value, i.e. 0≦z FIG. 8G shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in FIG. 8E. It should be noted that the filtering step is important for the reconstruction to yield useful data. FIG. 8H shows the reconstructed attenuation field that is obtained when the filtering step is omitted. 5.2 Fan Geometry Another major type of tomography arrangement is based on sampling of data from a single emitter, instead of measuring parallel projections at several different angles. This so-called fan geometry is exemplified in FIG. 9. As shown, the emitter emits rays in many directions, and sensors are placed to measure the received energy from this single emitter on a number of detection lines D, illustrated by dashed lines in FIG. 9. Thus, the measurement system collects projection values for a set of detection lines D extending from the emitter when located at angle β . In the illustrated example, each detection line D is defined by the angular location β of the emitter with respect to a reference angle (β=0 coinciding with the x-axis), and the angle α of the detection line D with respect to a reference line (in this example, a line going from the emitter through the origin). The measurement system is then rotated slightly (δβ) around the origin of the x,y coordinate system in FIG. 9, to collect a new set of projection values for this new angular location. It should be noted that the rotation might not be limited to 0≦β<π, but could be extended, as is well-known to the skilled person. The following example is given for a full rotation: 0≦β<2π. Fan beam tomographs may be categorized as equiangular or equidistant. Equiangular systems collect information at the same angle (as seen from the emitter) between neighboring sensors. Equiangular systems may be configured with emitter and sensors placed on a circle, or the sensors may be non-equidistantly arranged on a line opposite to the emitter. Equidistant systems collect information at the same distance between neighboring sensors. Equidistant systems may be configured with sensors placed on a line opposite to the emitter. The following example is given for an equiangular system, and based on the known attenuation field shown in FIG. 8A. For a thorough description of the different types of fan (beam) geometries, we refer to the literature. FIG. 10A illustrates the sinogram formed by all projections collected from the attenuation field in FIG. 8A, by the measurement system outlined in FIG. 9. In FIG. 10A, the different projections are arranged as vertical sequences of values. It could be noted that the sinogram is given in a 2D sample space defined by the angular emitter location parameter β and the angular direction parameter α. In an exemplifying tomographic processing of the sinogram in FIG. 10A, an angle correction is first applied on all collected projections according to: )COS(.alp- ha. The filtering step, i.e. convolution, is now done with respect to the α variable of the angle-corrected sinogram, i.e. corresponding to the vertical direction in the angle-corrected sinogram. As mentioned above, there are many different filter kernels that may be used in the filtering. The following example uses a filter kernel similar to the one shown in FIG. 8C. For example, many symmetric high-pass filters with a coefficient sum equal to zero may enable adequate reconstruction of the attenuation field. However, a careful choice of filter may be needed in order to reduce reconstruction artifacts. The result may also be improved by applying a smoothing filter in this step, as is well-known in the art. Like in the parallel geometry, the filtering may involve a convolution in the spatial domain or a multiplication in the Fourier domain. The filtered sinogram obtained by operating the filter kernel on the angle-corrected sinogram is shown in FIG. 10B. The next step is to apply the back projection operator. The back projection operator is different from the one used in the above-described parallel geometry. In the fan geometry, the back projection step may be given by the expression: ( # v ) ( x ) = δ β β i 1 x - D i 2 ( ( 1 - z ) v ( α k , β i ) + z v ( α k + 1 , β i ) ) , ##EQU00006## where D is the position of the source giving the β projection, z is a parameter that describes the linear interpolation between the detection lines and a ray that extends from the source through the location of the respective attenuation value to be FIG. 10C shows the reconstructed attenuation field that is obtained by applying the back projection operator on the filtered sinogram in FIG. 10B. 5.3 Re-Sorting Algorithms Another approach to do the filtered back projection for a fan geometry is to choose the locations of emitters and sensors such that it is possible to re-sort the data into a parallel geometry. Generally, such re-sorting algorithms are designed to achieve regularly spaced data samples in the φ-s-plane. More information about re-sorting algorithms is e.g. found in "Principles of Computerized Tomographic Imaging" by Kak and Slaney. To further explain the concept of re-sorting, FIG. 11 shows the data samples (projection values) collected from two different emitters (i.e. two different values of β) in an equiangular fan beam tomograph. The data samples are mapped to a φ-s-plane. It can be noted that the projection values obtained from a single emitter do not show up as a straight vertical line with respect to the s variable. It can also be seen that the φ values differ only by a constant, and that the s values are identical for the two different projections. One re-sorting approach is thus to collect projection values that originate from detection lines with the same φ values (i.e. from different emitters) and let these constitute a column in the φ-s-plane. However, this leads to a non-uniform spacing of the s values, which may be overcome by interpolating (re-sampling) the projection values with respect to the s variable. It should be noted that this procedure is a strictly 1D interpolation and that all columns undergo the same transform. It should also be noted that this procedure transforms one standard tomography geometry into another standard tomography geometry. In order for the re-sorting algorithms to work, it is essential (as stated in the literature) that δβ=δα, i.e. the angular rotation between two emitter locations is the same as the angular separation between two detection lines. Only when this requirement is fulfilled, the projection values will form columns with respect to the s variable. 6. Use of Tomographic Processing for Touch Determination FIG. 12A illustrates the sampling points (corresponding to detection lines, and thus to measured projection values) in the φ-s-plane for the interleaved system shown in FIG. 2A. Due to the irregularity of the sampling points, it is difficult to apply the above-described filter. The irregularity of the sampling points also makes it difficult to apply a re-sorting algorithm. In FIG. 12A, the solid lines indicate the physical limits of the touch surface. It can be noted that the angle φ actually spans the range from 0 to 2π, since the incoupling and outcoupling points extend around the entire perimeter. However, a detection line is the same when rotated by π, and the projection values can thus be rearranged to fall within the range of 0 to π. This rearrangement is optional; the data processing can be done in the full range of angles with a correction of some constants in the back projection function. When comparing the interleaved arrangement in FIG. 2A with the fan geometry in FIG. 9, we see that the angular locations β are not equally spaced, and that angular directions α are neither equiangular nor equidistant. Also, the values attained by α are different for different β . The different β values for the interleaved arrangement are shown in FIG. 12B. In an ideal fan beam tomograph, this plot would be a straight line. The step change at emitter 23 is caused by the numbering of the emitters (in this example, the emitters are numbered counter-clockwise starting from the lower-left corner in FIG. 2A). FIG. 12C exemplifies the variation in α values for emitter 10 (marked with crosses) and emitter 14 (marked with circles) in FIG. 2A. In an ideal equiangular fan beam tomograph, this plot would result in two straight lines, with a separation in the vertical direction arising from the numbering of the sensors. Instead, FIG. 12C shows a lack of regularity for both the individual emitter and between different emitters. Another aspect is that the fan geometry assumes that the source is positioned, for all projections, at the same distance from the origin, which is not true for an interleaved arrangement around a non-circular touch surface. FIG. 12D illustrates the sampling points in the φ-s-plane for the non-interleaved system shown in FIG. 2B. Apart from the irregularity of sampling points, there are also large portions of the φ-s-plane that lack sampling points due to the non-interleaved arrangement of incoupling and outcoupling points. Thus, it is not viable to apply a filter directly on the sampling points mapped to a sample space such as the φ-s-plane or the β-α-plane, and the sampling points cannot be re-sorted to match any standard tomography geometry. This problem is overcome by the re-calculation step (42 in FIG. 4A), which processes the projection values of the sampling points for generation of projection values for an updated set of sampling points. The updated set of sampling points represent a corresponding set of fictitious detection lines. These fictitious detection lines have a location on the touch surface that matches a standard geometry, typically the parallel geometry or the fan geometry. The generation of projection values of an updated set of sampling points may be achieved by interpolating the original sampling points. The objective of the interpolation is to find an interpolation function that can produce interpolated values at specific interpolation points in the sample space given a set of measured projection values at the original sampling points. The interpolation points, possibly together with part of the original sampling points, form the above-mentioned updated set of sampling points. This updated set of sampling points is generated to be located in accordance with, for instance, the parallel geometry or the fan geometry. The density of the updated set of sampling points is preferably similar to the average density of the original sampling points in the sample space. Many different interpolating functions can be used for this purpose, i.e. to interpolate data points on a two-dimensional grid. Input to such an interpolation function is the original sampling points in the sample space as well as the measured projection value for each original sampling point. Most interpolating functions involve applying a linear operator on the measured projection values. The coefficients in the linear operator are given by the known locations of the original sampling points and the interpolation points in the sample space. The linear operator may be pre-computed and then applied on the measured projection values in each sensing instance (cf. iteration of steps 40-48 in FIG. 4A). Some non-limiting examples of suitable interpolation functions include Delaunay triangulation, and other types of interpolation using triangle grids, bicubic interpolation, e.g. using spline curves or Bezier surfaces, Sinc/Lanczos filtering, nearest-neighbor interpolation, and weighted average interpolation. Alternatively, the interpolation function may be based on Fourier transformation(s) of the measured projection values. Below, the use of different interpolation functions in the re-calculation step (step 42 in FIG. 4A) will be further exemplified. Sections 6.1 and 6.2 exemplify the use of Delaunay triangulation, section 6.3 exemplifies the use of Fourier transformation techniques, and section 6.4 exemplifies the use of weighted average interpolation. In the examples that are based on Delaunay triangulation, the sampling points are placed at the corners of a mesh of non-overlapping triangles. The values of the interpolation points are linearly interpolated in the triangles. The triangles can be computed using the well-known Delaunay algorithm. To achieve triangles with reduced skewness, it is usually necessary to rescale the dimensions of the sample space (φ, s and β, α, respectively) to the essentially same length, before applying the Delaunay triangulation algorithm. In all of the following examples, the interpolation function is able to produce output values for any given position in the sample space. However, the frequency information in the updated set of sampling points will be limited according to the density of original sampling points in the sample space. Thus, wherever the original density is high, the updated set of sampling points can mimic high frequencies present in the sampled data. Wherever the original density is low, as well as if there are large gaps in the sample space, the updated set will only be able to produce low frequency variations. Non-interleaved arrangements (see FIG. 2B), will produce a sample space with one or more contiguous regions (also denoted "gap regions") that lack sampling points (see FIG. 12D). These gap regions may be left as they are, or be populated by interpolation points, or may be handled otherwise, as will be explained below in relation to a number of examples. The following examples will illustrate re-calculation of sampling points into a parallel geometry and a fan geometry, respectively. Each example is based on a numerical simulation, starting from a reference image that represents a known attenuation field on the touch surface. Based on this known attenuation field, the projection values for all detection lines have been estimated and then used in a tomographic reconstruction according to steps 40-46 in FIG. 4A, to produce a reconstructed attenuation field. Thus, the estimated projection values are used as "measured projection values" in the following examples. In the examples, two different merit values are used for comparing the quality of the reconstructed attenuation fields for different embodiments. The first merit value m is defined as: 1 = f f - f # , ##EQU00007## where f is a reference image (i.e. the known attenuation field) and f is the reconstructed attenuation field. The first merit value intends to capture the similarity between the original image and the reconstructed image. The second merit value m is defined as: 2 = f f = 0 f - f # , ##EQU00008## i.e. the denominator only includes absolute differences in the regions where the attenuation values are zero in the reference image. The second merit value thus intends to capture the noise in the reconstructed image by analyzing the regions of the image where there should be no attenuation present. 6.1 Re-Calculation into a Parallel Geometry The following examples will separately illustrate the re-calculation into a standard parallel geometry for an interleaved arrangement and for a non-interleaved arrangement. Since the re-calculation is made for a parallel geometry, the following examples are given for processing in the φ-s-plane. 6.1.1 Example: Interleaved Arrangement This example is given for the interleaved arrangement shown in FIG. 2A, assuming the reference image shown in FIG. 13. The reference image is thus formed by five touching objects 7 of different size and attenuation strength that are distributed on the touch surface 1. For reasons of clarity, FIG. 13 also shows the emitters 2 and sensors 3 in relation to the reference image. FIG. 14A is a plan view of the resulting sample space, where a mesh of non-overlapping triangles have been adapted to the sampling points so as to provide a two-dimensional interpolation function. FIG. 14B is a close-up of FIG. 14A to illustrate the sampling points (stars) and the Delaunay triangulation (dotted lines extending between the sampling points). FIG. 14B also illustrates the interpolation points (circles). Thus, the values of the interpolation points are calculated by operating the Delaunay triangulation on the projection values in the sampling points. In the illustrated example, the interpolation points replace the sampling points in the subsequent calculations. In other words, the sinogram formed by the measured projection values is replaced by an interpolated sinogram formed by interpolated projection values. Thereby, it is possible to obtain a uniform density of interpolation points across the sample space, if desired. Each interpolation point corresponds to a fictitious detection line that extends across the touch surface in accordance with a parallel geometry. Thus, the interpolation is designed to produce a set of fictitious detection lines that match a parallel geometry, that allows a reconstruction of the attenuation field using standard algorithms. As shown, the interpolation points are arranged as columns (i.e. with respect to the s variable) in the sample space, allowing subsequent 1D filtering with respect to the variable. In this example, the interpolation points are arranged with equidistant spacing with respect to the s variable, which has been found to improve the reconstruction quality and facilitate the subsequent reconstruction processing, e.g. the 1D filtering. Preferably, the inter-column distance is the same for all columns since this will make the back projection integral perform better. In the interpolated sinogram, each φ value with its associated s values (i.e. each column) corresponds to a set of mutually parallel (fictitious) detection lines, and thus the data is matched to a parallel geometry in a broad sense. FIG. 14C illustrates the interpolated sinogram, i.e. the interpolated projection values that has been calculated by operating the interpolation function in FIG. 14A on the measured projection values. After filtering the interpolated sinogram with respect to the s variable, using the filter in FIG. 8D, and applying the back projection operator on the thus filtered sinogram, a reconstructed attenuation field is obtained as shown in FIG. 14D, having merit values: m =1.3577 and m Variants for generating the updated set of sampling points are of course possible. For example, different interpolation techniques may be used concurrently on different parts of the sample space, or certain sampling points may be retained whereas others are replaced by interpolated points in the updated set of sampling points. As will be explained in the following, the generation of the updated set of sampling points may be designed to allow detection lines to be removed dynamically during operation of the touch-sensitive apparatus. For example, if an emitter or a sensor starts to perform badly, or not at all, during operation of the apparatus, this may have a significant impact on the reconstructed attenuation field. It is conceivable to provide the apparatus with the ability of identifying faulty detection lines, e.g. by monitoring temporal changes in output signal of the light sensors, and specifically the individual projection signals. The temporal changes may e.g. show up as changes in the energy/attenuation/transmission or the signal-to-noise ratio (SNR) of the projection signals. Any faulty detection line may be removed from the reconstruction. Such a touch-sensitive apparatus is disclosed in Applicant's U.S. provisional application No. 61/288,416, which was filed on Dec. 21, 2009 and which is incorporated herein by this reference. To fully benefit from such functionality, the touch-sensitive apparatus may be designed to have slightly more sensors and/or emitters than necessary to achieve adequate performance, such that it is possible to discard a significant amount of the projection values, for example 5%, without significantly affecting performance. The re-calculation step (cf. step 42 in FIG. 4A) may be configured to dynamically (i.e. for each individual sensing instance) account for such faulty detection lines by, whenever a detection line is marked as faulty, removing the corresponding sampling point in the sample space and re-computing the interpolation function around that sampling point. Thereby, the density of sampling points is reduced locally (in the φ-s-plane), but the reconstruction process will continue to work adequately while discarding information from the faulty detection line. This is further illustrated in FIGS. 15-16. FIG. 15A is a close-up of two-dimensional interpolation function formed as an interpolation grid in the sample space. Assume that this interpolation function is stored for use in the re-calculation step for a complete set of sampling points. Also assume that the sampling point indicated by a circle in FIG. 15A corresponds to a detection line which is found to be faulty. In such a situation, the sampling point is removed, and the interpolation function is updated or recomputed based on the remaining sampling points. The result of this operation is shown in FIG. 15B. As shown, the change will be local to the triangles closest to the removed sampling point. If an emitter is deemed faulty, all detection lines originating from this emitter should be removed. This corresponds to removal of a collection of sampling points and a corresponding update of the interpolation function. FIG. 15C illustrates the interpolation function in FIG. 15A after such updating, and FIG. 15D illustrates the updated interpolation function for the complete sample space. The removal of the detection lines results in a band of lower density (indicated by arrow L1), but the reconstruction process still works properly. Instead, if a sensor is deemed faulty, all detection lines originating from this sensor should be removed. This is done in the same way as for the faulty emitter, and FIG. 16A illustrates the interpolation function in FIG. 15A after such updating. FIG. 16B illustrates the updated interpolation function for the complete sample space. The removal of the detection lines again results in a band of lower density (indicated by arrow L2), but the reconstruction process still works properly. 6.1.2 Example: Non-Interleaved Arrangement The non-interleaved arrangement generally results in a different set of sampling points than the interleaved arrangement, as seen by comparing FIG. 12A and FIG. 12D. However, there is no fundamental difference between the interpolation solutions for these arrangements, and all embodiments and examples of reconstruction processing described above in relation to the interleaved arrangement are equally applicable to the non-interleaved arrangement. The following example therefore focuses on different techniques for handling the gap regions, i.e. regions without sampling points, which are obtained in non-interleaved arrangement. The following example is given for the non-interleaved arrangement shown in FIG. 2B, assuming a reference image as shown in FIG. 17, i.e. the same reference image as in FIG. 13. FIG. 18A is a plan view of the resulting interpolation function, where a mesh of non-overlapping triangles have been adapted to the sampling points in the sample space. Thus, this example forms the interpolation function directly from the original sampling points. Since the sample space contain contiguous gap regions (see FIG. 12D), the resulting interpolation function is undefined in these gap regions, or stated differently, the values at the implicit sampling points in the gap regions are set to zero. The interpolation function in FIG. 18A may be used to generate an updated set of sampling points, like in the foregoing examples. FIG. 18B illustrates the reconstructed attenuation field that is obtained by calculating the interpolated projection values for the reference image in FIG. 17, operating the 1D filter of the result, and applying the back projection operator on the result of the filtered data. The reconstructed attenuation field has merit values: m =0.7413 and m An alternative approach to handling the gap regions is to extend the interpolation function across the gap regions, i.e. to extend the mesh of triangles over the gap regions, as shown in FIG. 19A. The interpolation function in FIG. 18A may thus be used to generate desirable interpolation points within the entire sample space, i.e. also in the gap regions. FIG. 19B illustrates the interpolated projection values calculated for the reference image in FIG. 17. It can be seen that projection values are smeared out into the gap regions in the φ-s-plane. The reconstructed attenuation field (not shown), obtained after 1D filtering and back projection, has merit values: m =0.8694 and m =1.4532, i.e. slightly better than FIG. 18B. Yet another alternative approach is to add some border vertices to the interpolation function in the gap regions, where these border vertices form a gradual transition from the original sampling points to zero values, and letting the interpolation function be undefined/zero in the remainder of the gap regions. This results in a smoother transition of the interpolation function into the gap regions, as seen in FIG. 20A. FIG. 20B illustrates the interpolated projection values calculated for the reference image in FIG. 17. The reconstructed attenuation field (not shown), obtained after 1D filtering and back projection, has merit values: m =0.8274 and m =1.4434, i.e. slightly better than FIG. 18B. All of the three above-described approaches lead to reconstructed attenuation fields of approximately the same quality. Below follows a description of a technique for improving the quality further, by improving the estimation of sampling points in the gap regions. This improved technique for generating estimation points in the gap regions will be described in relation to FIGS. 22-23. It is to be noted that this technique may also be applied to populate gaps formed by removal of faulty detection lines, as a supplement or alternative to the technique discussed in section 6.1.1. Generally, the estimation points may be selected to match the standard geometries, like the interpolation points, possibly with a lower density than the interpolation points. FIG. 21A illustrates the sample space supplemented with such estimation points in the gap regions. Like in the foregoing examples, an interpolation function is generated based on the sample space, in this case based on the combination of sampling points and interpolation points. FIG. 21B illustrates the resulting interpolation function. The aim is to obtain a good estimate for every added estimation point. This may be achieved by making assumptions about the touching objects, although this is not strictly necessary. For example, if it can be presumed that the touching objects are fingertips, it can be assumed that each touching object results in a top hat profile in the attenuation field with a circular or ellipsoidal contour. Unless the number of touching objects is excessive, there will exist, for each touching object, at least one detection line that interacts with this touching object only. If it is assumed that the touch profiles are essentially round, the touch profile will cause essentially the same attenuation of all detection lines that are affected by the touch profile. The value at each estimation point, in the φ-s-plane (marked with diamonds in FIG. 21A), represents a line integral along a specific line on the touch surface. Since the estimation points are located in the gap region, there is no real (physical) detection line that matches the specific line. Thus, the specific line is a virtual line in the x-y-plane (i.e. a fictitious detection line, although it does not correspond to an interpolation point but to an estimation point). The value at the estimation point may be obtained by analyzing selected points along the virtual line in the x-y-plane. Specifically, a minimum projection value is identified for each selected point, by identifying minimum projection value for the ensemble of detection lines (actual or fictitious) that passes through the selected point. This means that, for every analyzed point, the algorithm goes through the different detection lines passing through the point and identifies the lowest value of all these detection lines. The value of the estimation point may then be given by the maximum value of all identified minimum projection values, i.e. for the different analyzed points, along the virtual line. To explain this approach further, FIG. 22A illustrates the original sampling points together with two estimation points EP1, EP2 indicated by circles. The estimation point EP1 corresponds to a virtual line V1, which is indicated in the reference image of FIG. 22B. The next step is to evaluate selected points along the virtual line V1. For every selected point, the projection values for all intersecting detection lines are collected. The result is shown in the two-dimensional plot of FIG. 22C, which illustrates projection values as a function of detection line (represented by its angle) and the selected points (given as position along the virtual line). The large black areas in FIG. 22C correspond to non-existing detection lines. To find the value of the estimation point EP1, the data in FIG. 22C is first processed to identify the minimum projection value (over the angles) for each selected point along the virtual line V1. The result is shown in the graph of FIG. 22D. The value of the estimation point EP1 is then selected as the maximum of these minimum projection values. FIG. 22E illustrates the values of all estimation points in FIG. 21A calculated for the reference image in FIG. 17 using this approach, together with the interpolated projection values. By comparing FIG. 19B and FIG. 20B, a significant improvement is seen with respect to the information in the gap regions of the sample space. The reconstructed attenuation field, obtained after 1D filtering and back projection, is shown in FIG. 22F and has merit values: m =1.2085 and m =2.5997, i.e. much better than FIG. 18B. It is possible to improve the estimation process further. Instead of choosing the maximum among the minimum projection values, the process may identify the presence of plural touch profiles along the investigated virtual line and combine (sum, weighted sum, etc) the maximum projection values of the different touch profiles. To explain this approach further, consider the estimation point EP2 in FIG. 22A. The estimation point EP2 corresponds to a virtual line V2, which is indicated in the reference image of FIG. 23A Like in the foregoing example, selected points along the virtual line V2 are evaluated. The result is shown in the two-dimensional plot of FIG. 23B. Like in the foregoing example, the data in FIG. 23B is then processed to identify the minimum projection value (over the angles) for each selected point along the virtual line V2. The result is shown in the graph of FIG. 23C. This graph clearly indicates that there are two separate touch profiles on the virtual line V2. Thus, the estimation process processes the maximum projection values in FIG. 23C to identify local maxima (in this example two maxima), and sets the value of the estimation point EP2 equal to the sum of the local maxima (projection values). FIG. 23D illustrates the values of all estimation points in FIG. 21A calculated for the reference image in FIG. 17 using this approach, together with the interpolated projection values. The gap regions of the sample space are represented by relevant information. The reconstructed attenuation field, obtained after 1D filtering and back projection, is shown in FIG. 23E and has merit values: m =1.2469 and m =2.6589, i.e. slightly better than FIG. 22F. FIG. 24 is a flowchart of an exemplifying reconstruction process, which is a more detailed version of the general process in FIG. 4A adapted for data processing in a touch-sensitive apparatus with a non-interleaved arrangement. The process operates on the output signal from the light sensor arrangement, using data stored in a system memory 50, and intermediate data generated during the process. It is realized that the intermediate data also may be stored temporarily in the system memory 50 during the process. The flowchart will not be described in great detail, since the different steps have already been explained above. In step 500, the process samples the output signal from the light sensor arrangement. In step 502, the sampled data is processed for calculation of projection values (g). In step 504, the process reads the interpolation function (IF) from the memory 50. The interpolation function (IF) could, e.g., be designed as any one of the interpolation functions shown in FIGS. 18A, 19A, 20A and 21B. The process also reads "exclusion data" from the memory 50, or obtains this data directly from a dedicated process. The exclusion data identifies any faulty detection lines that should be excluded in the reconstruction process. The process modifies the interpolation function (IF) based on the exclusion data, resulting in an updated interpolation function (IF') which may be stored in the memory 50 for use during subsequent iterations. Based on the updated interpolation function (IF'), and the projection values (g), step 504 generates new projection values ("interpolation values", i) at given interpolation points. Step 504 may also involve a calculation of new projection values ("estimation values", e) at given estimation points in the gap regions, based on the updated interpolation function (IF'). Step 504 results in a matched sinogram (g'), which contains the interpolation values and the estimation values. In step 506, the process reads the filter kernel (w ) from the memory 50 and operates the kernel in one dimension on the matched sinogram (g'). The result of step 506 is a filtered sinogram (ν). In step 508, the process reads "subarea data" from the memory 50, or obtains this data directly from a dedicated process. The subarea data indicates the parts of the attenuation field/touch surface to be reconstructed. Based on the subarea data, and the filtered sinogram (ν), step 510 generates a reconstructed attenuation field (a), which is output, stored in memory 50, or processed further. Following step 508, the process returns to step 500. It is to be understood that a similar process may be applied for data processing in a touch-sensitive apparatus with an interleaved arrangement. 6.2 Re-Calculation into Fan Geometry The following example will illustrate the re-calculation into a standard fan geometry for an interleaved arrangement. Since the re-calculation is made for a fan geometry, the following examples are given for the β-α-plane. 6.2.1 Example: Interleaved Arrangement This example is given for the interleaved arrangement shown in FIG. 2A, assuming a reference image as shown in FIG. 13. A first implementation of the re-calculation step (cf. step 42 in FIG. 4A) will be described with reference to FIG. 25. In the first implementation, the sampled data is "squeezed" to fit a specific fan geometry. This means that the projection values obtained for the detection lines of the interleaved arrangement are re-assigned to fictitious detection lines that match a fan geometry, in this example the geometry of an equiangular fan beam tomograph. Making such a re-assignment may involve a step of finding the best guess for an equiangular spacing of the β values, and for the α values. In this example, the β values for the sampling points are re-interpreted to be consistent with the angles of an equiangular fan beam tomograph. This essentially means that the difference in rotation angle between the different incoupling points is considered to be the same around the perimeter of the touch surface, i.e. δβ=2π/M, where M is the total number of emitters (incoupling points). The α values for the sampling points are re-interpreted by postulating that the α values are found at nδα, where -≦n≦N and 2N+1 is the total number of sensors (outcoupling points) that receive light energy from the relevant emitter. To get accurate ordering of the α values, n=0 may be set as the original sample with the smallest value of α FIG. 25A illustrates the sampling points in the β-α-plane, after this basic reassignment of projection values. After angle correction, 1D filtering of the angle-corrected data, and back projection, a reconstructed attenuation field is obtained as shown in FIG. 25B. It is evident that the first implementation is able to reproduce the original image (FIG. 13), but with a rather low quality, especially in the corner regions. In a second implementation of the re-calculation step, the measured projection values are processed for calculation of new (updated) projection values for fictitious detection lines that match a fan geometry. In the second implementation, like in the first implementation, each emitter (incoupling point) on the perimeter of the touch surface is regarded as the origin of a set of detection lines of different directions. This means that every β value corresponds to an emitter (incoupling point) in the interleaved arrangement, which generates a plurality of detection lines with individual angular directions α , and the sampling points defined by the actual β values and α values thus form columns in the β-α-plane. Therefore, interpolation in the β direction can be omitted, and possibly be replaced by a step of adding an individual weighting factor to the back projection operator (by changing δβ to δβ , which should correspond to the difference in β values between neighboring emitters). In the second implementation, the recalculation step involves an interpolation with respect to the α variable, suitably to provide values of interpolation points having an equidistant separation with respect to the α variable for each β value in the sampling space. Thus, the interpolation of the sampling points may be reduced to applying a 1D interpolation function. The 1D interpolation function may be of any type, such as linear, cubic, spline, Lanczos, Sinc, etc. In following example, the interpolation function is linear. It should be noted, though, that a 2D interpolating function as described in section 6.1 above can alternatively be applied for interpolation in the β-α-plane. FIG. 26A illustrates the sampling points in the β-α-plane, after the 1D interpolation. FIG. 26B shows the reconstructed attenuation field which is obtained after angle correction, 1D filtering of the angle-corrected data, and back projection. By comparing FIG. 26B with FIG. 25B, it can be seen that the second implementation provides a significant quality improvement compared to the first Further, by comparing FIG. 26B with FIG. 14D, which both illustrate reconstructed attenuation fields for the interleaved arrangement, it may appear as if the parallel geometry may result in a higher reconstruction quality than the fan geometry. This apparent quality difference may have several causes. First, reconstruction algorithms for the fan geometry restrict the direction angle α to the range -π/2≦α≦π/2. Direction angles outside this range will cause the angle correction (see section 5.2) to deteriorate. In the touch-sensitive apparatus, detection lines may have direction angles outside this range, especially for emitters located the corners of the touch surface (recalling that α=0 for a line going from the emitter through the origin, i.e. the center of the touch surface). Second, the weighted back projection operator (see section 5.2) involves a normalization based on the inverse of the squared distance between the source and the reconstructed position. This distance becomes close to zero near the perimeter of the touch surface and its inverse goes towards infinity, thereby reducing the reconstruction quality at the perimeter. Still further, the standard reconstruction algorithms assume that all sensors (outcoupling points) are arranged at the same distance from the emitters (incoupling points). A third implementation of the re-calculation step will now be described with reference to FIGS. 27-28. In the third implementation, which is designed to at least partially overcome the above-mentioned limitations of the first and second implementations, the detection lines are defined based on fictive emitter/sensor locations. FIG. 27 illustrates the touch-sensitive apparatus circumscribed by a circle C which may or may not be centered at the origin of the x,y coordinate system (FIG. 2) of the apparatus. The emitters 2 and sensors 3 provide a set of detection lines (not shown) across the touch surface 1. To define the detection lines in a β-α-plane, the intersection of each detection line and the circle C is taken to define a β value, whereas the α value of each detection line is given by the inclination angle of the detection line with respect to a reference line (like in the other fan geometry examples given herein). Thereby, the β and α variables are defined in strict alignment with the theoretical definition depicted in FIG. 9, where the β variable is defined as a rotation angle along a circular perimeter. FIG. 28A illustrates the resulting sampling points in the β-α-plane for the interleaved system shown in FIG. 27, where the β values are defined according to the foregoing "fictive circle approach". The sampling space contains a highly irregular pattern of sampling points. FIG. 28B is a plan view of a 2D interpolation function fitted to the sampling points in FIG. 28A. It should be realized that the techniques described in sections 6.1.1 and 6.1.2 may be applied also to the sampling points in the β-α-plane to generate interpolation/estimation points that represent fictitious detection lines matching a standard fan geometry. Thus, the interpolation/estimation points are suitably generated to form columns with respect to the β variable, preferably with equidistant spacing. FIG. 28C illustrates the interpolated sinogram, which is obtained by operating the interpolation function in FIG. 28B on the projection values that are given by the reference image in FIG. 13. FIG. 28D shows the reconstructed attenuation field which is obtained after angle correction, 1D filtering of the angle-corrected data, and back projection. By comparing FIG. 28D with FIG. 26B, it can be seen that the third implementation provides a significant quality improvement compared to the first and second In all of the above implementations, the re-calculation step results in an updated sinogram, in which each β value and its associated α values (i.e. each column in the sinogram) corresponds to a fan of detection lines with a common origin, and thus the data is matched to a fan geometry in a broad sense. 6.3 Re-Calculation by Fourier Transformation In tomography theory, it is generally assumed that g(φ, s) is bandwidth limited. Thereby, it is possible to use Fourier transformation algorithms to perform the recalculation step (step 42 in FIG. 4A) so as to form the above-mentioned updated set of sampling points. There is a class of Fourier transformation algorithms that are designed to enable Fourier transformation of irregularly sampled data. These algorithms may e.g. involve interpolation and oversampling of the original data, e.g. using least-squares, iterative solutions or Fourier expansion (Shannon's sampling theorem). This type of Fourier transformation algorithm comes in many different names and flavors, e.g. Non-Uniform FFT (NUFFT/NFFT), Generalized FFT (GFFT), Non-uniform DFT (NDFT), Non-Equispaced Result FFT (NER), Non-Equispaced Data FFT (NED), and Unequally spaced FFT (USFFT). In the following, a brief example is given on the use of the NED algorithm in a recalculation step into a standard parallel geometry. The theory behind the NED algorithm is further described in the article "Non-Equispaced Fast Fourier Transforms with Applications to Tomography" by K Fourmont, published in "Journal of Fourier Analysis and Applications", Volume 9, Number 5, pages 431-450 (2003), which is incorporated herein by this reference. The example involves two FFT operations on the original set of projection values in the sinogram g(φ, s). First, a two-dimensional NED FFT algorithm is operated on the sinogram: whereby the Fourier transform of the sinogram is computed. As noted above, the NED algorithm is designed to process irregularly sampled data, and the resulting Fourier coefficients (θ, σ) will be arranged in a Cartesian grid. Then, a regular two-dimensional inverse FFT algorithm is operated on the Fourier coefficients to get an updated set of projection values arranged in a standard geometry, in this example a parallel geometry: A regular inverse FFT algorithm can be used since both the input data (θ, σ) and the output data g(φ, s) are arranged on a Cartesian grid. In this example, it may be advantageous for the cN-periodicity of the recalculation step to be for φ=2π. This may be achieved by mirroring the sinogram values before applying the NED FFT algorithm: g (φ, s)=g(φ-π, -s) for π≦φ<2π. However, this extension of the sinogram is not strictly necessary. In a variant, it is merely ensured that the wrapping behavior of the cN-periodicity is consistent with the mirroring of the sinogram values. It can be noted that the above example is equally applicable for re-calculation into a fan geometry, by changing (φ, s) to (α, β). It is also to be understood that the re-calculation is not limited to the use of the NED FFT algorithm, but can be achieved by applying any other suitable Fourier transformation algorithm designed for irregularly sampled data, e.g. as listed above. 6.4 Re-Calculation by Weighted Average Interpolation The interpolation in the re-calculation step (step 42 in FIG. 4A) may be based on a weighted average algorithm. Like Delaunay triangulation, the weighted average algorithm involves applying a linear operator on the measured projection values, with the coefficients in the linear operator being given by the known locations of the original sampling points and the interpolation points in the sample One benefit of weighted average interpolation is that the computation of the coefficients may be simple to implement, e.g. compared to Delaunay triangulation. Another benefit is the possibility of doing on-the-fly computation of the coefficients in the linear operator (instead of using pre-computed coefficients) if available memory is limited, e.g. when the signal processor (10 in FIG. 1) is implemented as an FPGA. These benefits will be further illustrated by way of an example, in which a weighted average algorithm is used for on-the-fly interpolation of original projection values g(φ , s ) into a matched sinogram g'(φ' , s' ), in three steps S1-S3. Reverting to FIG. 14B, the original projection values correspond to the sampling points (stars), and the matched sinogram corresponds to the interpolation points (circles). In the following example, the weight function is represented as F S1. Initialize an accumulator sinogram, acc(φ' , s' ), and a weight sinogram, w(φ' , s' ), by setting them to zero. S2. For each sampling point (φ , s ), execute the following sequence of sub-steps i.-iii. for all interpolation points (φ' , s' i. ω=F (Δφ, Δs)=F , s ii. acc(φ' , s' , s' , s iii. w(φ' , s' , s' S3. For each interpolation point (φ' , s' , compute the matched sinogram: If w(φ' , s' )>0, then set g'(φ' , s' , s' , s' ), otherwise set g'(φ' , s' There are numerous weight functions F that may be used in this and other examples. One characteristic of a suitable weight function F is that it decreases as |Δφ|, |Δs| increase. The constants in the weight function F may be chosen such that each projection value g(φ , s ) contributes to only one or a few interpolation points (φ' , s' ). This makes it possible to speed up the interpolation significantly since step S2 is reduced to an accumulation in the vicinity of the respective sampling point (φ , s ). In one example, the sub-steps i.-iii. are only executed for the 3×3 interpolation points (φ' , s' ) that are closest to each sampling point (φ , s ) in the sample space. A few non-limiting examples of weight functions include: F (Δφ, Δs)=e.sup.-(Δφ .sup.+.- DELTA.s .sup.), and F (Δφ, Δs)=1/(1+α ), where σ.sub.φ, σ , α , α are constants. Generally, the interpolation by weighted average may be seen to involve, for each interpolation point, a step of calculating a weighted contribution to the value of the interpolation point from at least a subset of the sampling points (e.g. implemented by S2: i.-ii.), and a step of aggregating the weighted contributions (e.g. implemented by S2: iii. and S3), wherein each weighted contribution is calculated as a function of the projection value of the sampling point and a distance in the sample space between the interpolation point and the sampling point. It can be noted that the above discussion is equally applicable for re-calculation into a fan geometry, by changing (φ, s) to (α, β). 7. Alternative Reconstruction Techniques in Standard Geometries It is to be understood that the reference to Filtered Back Projection (FBP) herein is merely given as an example of a technique for reconstructing the attenuation field. There are many other known techniques that can be used for reconstruction, after re-calculation into a standard geometry, such as for instance ART, SIRT, SART and Fourier-transform based algorithms. More information about these and other algorithms can be found, e.g., in the above-mentioned books "The Mathematics of Computerized Tomography" by Natterer, and "Principles of Computerized Tomographic Imaging" by Kak and It should also be noted that it is possible to do an unfiltered back projection and perform the filtering on the reconstructed image. Fourier-transform based algorithms give the promise of time complexities of O(n log(n)), i.e. a significant improvement. However, as stated by Kak and Slaney, the naive algorithm may not suffice. The naive algorithm is discussed by Natterer on pages 119-125, whereupon Natterer continues to present two different improved algorithms (on pages 125-127) that are stated to produce good results. The above-referenced article by Fourmont presents further algorithms that involve the use of FFT-based algorithms designed to handle uneven distribution of input data and/or output data. It can also be noted that in certain implementations, it may be advantageous to perform a low-pass filtering of the updated set of projection values that results from the re-calculation into a standard geometry, before applying the reconstruction technique. 8. Concluding Remarks The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope and spirit of the invention, which is defined and limited only by the appended patent claims. For example, the reconstructed attenuation field may be subjected to post-processing before the touch data extraction (step 48 in FIG. 4A). Such post-processing may involve different types of filtering, for noise removal and/or image enhancement. FIG. 29 illustrates the result of applying a Bayesian image enhancer to the reconstructed attenuation field in FIG. 23E. The enhanced attenuation field has merit values: m =1.6433 and m =5.5233. For comparison, the enhanced attenuation field obtained by applying the Bayesian image enhancer on the reconstructed attenuation field in FIG. 14D has merit values: m =1.8536 and m =10.0283. In both cases, a significant quality improvement is obtained. Furthermore, it is to be understood that the inventive concept is applicable to any touch-sensitive apparatus that defines a fixed set of detection lines and operates by processing measured projection values for the detection lines according to any tomographic reconstruction algorithm that is defined for a standard geometry, where these standard geometry does not match the fixed set of detection lines. Thus, although the above description is given with reference to FBP algorithms, the inventive concept have a more general applicability. It should also be emphasized that all the above embodiments, examples, variants and alternatives given with respect to interpolation, removal of detection lines, and estimation in gap regions are generally applicable to any type of emitter-sensor arrangement and irrespective of standard geometry. Furthermore, the reconstructed attenuation field need not represent the distribution of attenuation coefficient values within the touch surface, but could instead represent the distribution of energy, relative transmission, or any other relevant entity derivable by processing of projection values given by the output signal of the sensors. Thus, the projection values may represent measured energy, differential energy (e.g. given by a measured energy value subtracted by a background energy value for each detection line), relative attenuation, relative transmission, a logarithmic attenuation, a logarithmic transmission, etc. The person skilled in the art realizes that there are other ways of generating projection values based on the output signal. For example, each individual projection signal included in the output signal may be subjected to a high-pass filtering in the time domain, whereby the thus-filtered projection signals represent background-compensated energy and can be sampled for generation of projection values. Furthermore, all the above embodiments, examples, variants and alternatives given with respect to an FTIR system are equally applicable to a touch-sensitive apparatus that operates by transmission of other energy than light. In one example, the touch surface may be implemented as an electrically conductive panel, the emitters and sensors may be electrodes that couple electric currents into and out of the panel, and the output signal may be indicative of the resistance/impedance of the panel on the individual detection lines. In another example, the touch surface may include a material acting as a dielectric, the emitters and sensors may be electrodes, and the output signal may be indicative of the capacitance of the panel on the individual detection lines. In yet another example, the touch surface may include a material acting as a vibration conducting medium, the emitters may be vibration generators (e.g. acoustic or piezoelectric transducers), and the sensors may be vibration sensors (e.g. acoustic or piezoelectric sensors). Still further, the inventive concept may be applied to improve tomographic reconstruction in any field of technology, such as radiology, archaeology, biology, geophysics, oceanography, materials science, astrophysics, etc, whenever the detection lines are mismatched to a standard geometry that forms the basis for the tomographic reconstruction algorithm. Thus, the inventive concept could be generally defined as a method for image reconstruction based on an output signal from a tomograph, the tomograph comprising a plurality of peripheral entry points and a plurality of peripheral withdrawal points, which between them define actual detection lines that extend across a measurement space to propagate energy signals from the entry points to the withdrawal points, at least one signal generator coupled to the entry points to generate the energy signals, and at least one signal detector coupled to the withdrawal points to generate the output signal, the method comprising: processing the output signal to generate a set of data samples, wherein the data samples are indicative of detected energy for at least a subset of the actual detection lines; processing the set of data samples to generate a set of matched samples, wherein the matched samples are indicative of estimated detected energy for fictitious detection lines that have a location in the measurement space that matches a standard geometry for tomographic reconstruction; and processing the set of matched samples by tomographic reconstruction to generate data indicative of a distribution of an energy-related parameter within at least part of the measurement space. Patent applications by Tomas Christiansson, Torna-Hallestad SE Patent applications by FlatFrog Laboratories AB Patent applications in class Touch panel Patent applications in all subclasses Touch panel User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20130044073","timestamp":"2014-04-23T08:53:23Z","content_type":null,"content_length":"170866","record_id":"<urn:uuid:366379f2-fe4b-4b99-8a19-ce5c6b86eb85>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Panel Data with Truncation and Gaps [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: Panel Data with Truncation and Gaps From John Simpson <john.simpson@ualberta.ca> To statalist@hsphsun2.harvard.edu Subject st: Panel Data with Truncation and Gaps Date Sat, 4 Jul 2009 23:27:56 -0600 Dear Stata and statistics experts, I am looking for a strategy to handle a large amount of panel data that features both truncation and gaps. In particular I would like to know how I might go about fitting a model to the data I have on hand. Important features of the data are as follows: 1. It is population data generated from agent-based evolutionary simulations . Each trial population has a series of observations associated with it over the length of time that it was being run. 2. To conserve memory and processing time two data collection shortcuts were used. 2a. Summary statistics from the population were collected on the initial creation of the population, after running it for one generation, and again after the second generation. Following this the same statistics are collected every five generation until generation 100 at which point the simulation of the population ends. If the population drops below two members then no more information is collected either (There is no single-agent reproduction). 2b. If the population grew over 15000 members then summary statistics were collected in the generation in which this occurred and then the population was dropped. 3. There are a collection of variables that need to be taken into account. 3a. Some of these are fixed throughout the trial (These include things like the initial population size, the cost to live from generation to generation, and the cost to spawn with another agent). 3b. Others change throughout the course of each simulation and are randomly distributed at the beginning (These are the behaviours that the agents exhibit under certain conditions. Over time as opportunities to express these behaviours present themselves agents with more good/useful beahviours get to spawn more, increasing the likelihood that these useful behaviours will become more prevalent in the population). In particular I have two worries. First, that as successful populations are truncated out, those that remain will bring down the mean. Second, that a combination of successful population truncating out and unsuccessful populations having few members with highly similar behaviour sets will skew any investigation into which behaviours are successful. Any suggestions regarding possible models or methods for handling this dataset or directions to possibly useful resources would be appreciated. John Simpson Department of Philosophy University of Alberta, Canada * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2009-07/msg00173.html","timestamp":"2014-04-17T09:44:31Z","content_type":null,"content_length":"8428","record_id":"<urn:uuid:9a405f82-5b8f-45f0-8a1e-2b1cd8e95a1f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] extract elements of an array that are contained in another array? [Numpy-discussion] extract elements of an array that are contained in another array? Robert Cimrman cimrman3@ntc.zcu... Thu Jun 4 10:27:11 CDT 2009 Alan G Isaac wrote: > On 6/4/2009 10:50 AM josef.pktd@gmail.com apparently wrote: >> intersect1d gives set intersection if both arrays have >> only unique elements (i.e. are sets). I thought the >> naming is pretty clear: >> intersect1d(a,b) set intersection if a and b with unique elements >> intersect1d_nu(a,b) set intersection if a and b with non-unique elements >> setmember1d(a,b) boolean index array for a of set intersection if a >> and b with unique elements >> setmember1d_nu(a,b) boolean index array for a of set intersection if >> a and b with non-unique elements >>>> a > array([1, 1, 2, 3, 3, 4]) >>>> b > array([1, 4, 4, 4]) >>>> np.intersect1d_nu(a,b) > array([1, 4]) > That is, intersect1d_nu is the actual set intersection > function. (I.e., intersect1d and intersect1d_nu would most > naturally have swapped names.) That is why the appended _nu > will not communicate what was intended. (I.e., > setmember1d_nu will not be a match for intersect1d_nu.) The naming should express this: intersect1d expects its arguments are sets, intersect1d_nu does not. A set has unique elements by definition. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-June/042992.html","timestamp":"2014-04-16T10:37:43Z","content_type":null,"content_length":"4441","record_id":"<urn:uuid:13c33336-821f-411b-bc16-b0b9ae43fa3f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Cloth Diapers & Parenting Community - DiaperSwappers.com - 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! cassondruh 06-03-2011 08:34 PM 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! Hi everyone! I have some old stock pouches of Booty Cubes to sell. There are 8 pouches left. These retail at $5.80 for a 2 oz pouch, but I am trying to get rid of old stock so I am doubling the amount you get for a discounted price By "old stock" I mean these cubes have already been made and have been sitting on the shelf for the past 6 months, as opposed to me making them as orders come in (what I usually do). They are not bad by any means, however! I just want to get them off my shelves and into new homes :) I have available: Four pouches of Lavender+Chamomile+TTO blend cubes with 4 oz of cubes. This blend contains extra organic olive oil and aloe vera, ingredients to aide in soothing and healing diaper rashes or keeping diaper rashes away! Each 4 oz pouch will last a VERY long time! 1 cube makes 1 cup (8oz) of liquid solution for you, and each pouch contains approximately 100 cubes! Each pouch comes with easy instructions for use, and the ingredients list. I also have available: Four pouches of Lavender+Chamomile+TTO blend cubes with 4 oz of cubes. This blend contains shea butter, an ingredient said to be beneficial to people suffering from eczema! Each 4 oz pouch will last a VERY long time! 1 cube makes 1 cup (8oz) of liquid solution for you, and each pouch contains approximately 100 cubes! Each pouch comes with easy instructions for use, and the ingredients list. I am asking $8 for each 4 oz pouch of cubes and $2 shipping for one pouch , $3 shipping for two pouches and add 1 dollar per pouch after that for shipping (first class USPS unless we agree to another method). cassondruh 06-04-2011 07:02 PM Re: 8 pouches of Booty Cubes left - discounted!! :D cassondruh 06-06-2011 12:25 PM Re: 8 pouches of Booty Cubes left - discounted!! :D cassondruh 06-06-2011 08:12 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! evening bump mamalamb 06-06-2011 09:02 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! i'm definitely interested in a pouch. how does it work...do you put it in a spray bottle? cassondruh 06-06-2011 11:57 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! mamalamb - It is pretty simple. You just drop one cube into 1 cup (8oz) of HOT water and stir. Or, you can put one cube into a microwave safe container and add 1 cup (8oz) water and microwave for 15-20 seconds. Take container out carefully and stir. The solution may be HOT, but once the solution cools you can pour the solution into a spray bottle and use it on baby wipes or directly on baby's Some people like to pour the solution into a container on top of a few dry wipes to make them ready to use. My solution cubes contain a natural ingredient which helps prevent bacterial growth so the wet baby wipes are okay if they sit in a container for a day or two :) Hope this helps. If you have a Paypal address and you are still interested in a pouch of cubes, let me know! I can send you an invoice for the cubes through Paypal if you let me know your Paypal address. :) We can talk more via private message on here, or you can email me directly: bootycubes@comcast.net Talk to you soon momma, cassondruh 06-10-2011 04:07 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! cassondruh 06-20-2011 11:20 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! Whits01 06-23-2011 09:31 AM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! How long will they be good for? If I don't use them all with my LO (I'm hoping she starts using the potty soon), will they still be good in a couple of years? cassondruh 06-23-2011 08:23 PM Re: 8 pouches of Booty Cubes wipe solution cubes left - discounted & price reduction! hi mama, I started making Booty Cubes in early 2007. One of my first batches of cubes I saved and finally used the last cube last year in august for my twins :) it still worked just as well as the cubes worked when they were first made. The scent faded a little bit but the ingredients worked just the same. The solution will start to go bad a few days after making, but the cubes last a long time in cube form All times are GMT -6. The time now is 11:43 PM. Powered by vBulletin® Version 3.8.4 Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
{"url":"http://www.diaperswappers.com/forum/printthread.php?t=1221929","timestamp":"2014-04-17T05:43:33Z","content_type":null,"content_length":"14134","record_id":"<urn:uuid:0eb131cb-cc9e-484f-ac06-d130466d0478>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Lansdowne Math Tutor Find a Lansdowne Math Tutor ...I am highly committed to students' performances and to improve their comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both undergraduate and graduate school, as well as partial differential equations at the graduate level. Also, I have tu... 19 Subjects: including prealgebra, discrete math, econometrics, logic ...I have spent 2 years as a tutor at Jacksonville University. I am currently a graduate mathematics student at Villanova University. I believe in helping students to understand and enjoy math as I do, I will not do the work for the student but will help them understand the process behind it. 13 Subjects: including algebra 1, algebra 2, calculus, geometry ...I have learned through the years how to make math seem easy. I enjoy math a great deal and look forward to working with you.I have taught and tutored Algebra 1 in different capacities for over 5 years among other subjects. I am a certified in secondary mathematics by the State of Pennsylvania. 11 Subjects: including precalculus, trigonometry, statistics, SAT math ...I placed out of math in college, and took through Calc AP in high school. In my AP class, and my freshmen read between 8 to 10 novels/plays a year -- that's a lot of literature practice! I can help with reading for content, theme, main idea, literary elements and finding quotes to support an idea. 17 Subjects: including algebra 1, SAT math, English, prealgebra ...I earned a BFA in Visual Communication Design with an emphasis in Illustration from the University of Dayton in 2001. In 2005 I completed an MFA in Painting at Northern Illinois University. I have been teaching drawing and graphic design courses at the college level since 2003. 15 Subjects: including algebra 1, writing, geometry, ESL/ESOL
{"url":"http://www.purplemath.com/Lansdowne_Math_tutors.php","timestamp":"2014-04-18T11:27:24Z","content_type":null,"content_length":"23825","record_id":"<urn:uuid:c6c1ff2c-49eb-4214-8b96-f4ce2eabe648>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple Choice Questions on Radioactivity The following Question appeared in Karnataka CET 2003 question paper: Half life of a radioactive substance is 20 min. The time between 20% and 80% decay will be (a) 25 min. (b) 30 min. (c) 40 min. (d) 20 min If the initial activity is A0, the activity ‘A’ after ‘n’ half lives is given by A = A[0]/2^n. Let us take the initial activity as 100 units. After 20% decay, the activity becomes 80 units and after 80% decay, the activity becomes 20 units. These two cases can be stated as 80 = 100/2^n and 20 = 100/2^m where ‘n’ and ‘m’ are the numbers of half lives required for 20% decay and 80% decay respectively. Dividing, 80/20 = 2^m/2^n = 2^(m–n). Or, 2^(m-n) = 4, from which (m–n) = 2 half lives = 2×20 min. = 40 min. Now consider the following MCQ which appeared in Karnataka CET 2004: A count rate meter shows a count of 240 per minute from a given radioactive source. One hour later the meter shows a count rate of 30 per minute. The half life of the source is (a) 80 min. (b) 120 min. (c) 20 min. (d) 30 min. From the equation, A = A[0]/2^n, we have 30 = 240/2^n so that n = 3. Therefore one hour is equal to 3 half lives which means the half life of the substance is 20 min.
{"url":"http://www.physicsplus.in/2007/03/multiple-choice-questions-on.html","timestamp":"2014-04-16T04:50:12Z","content_type":null,"content_length":"96849","record_id":"<urn:uuid:28866547-dd1a-43f1-bf20-64dece413f77>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Calculator Online • a small machine that is used for mathematical calculations • an expert at calculation (or at operating calculating machines) • Something used for making mathematical calculations, in particular a small electronic device with a keyboard and a visual display • A calculator is a small (often pocket-sized), usually inexpensive electronic device used to perform the basic operations of arithmetic. Modern calculators are more portable than most computers, though most PDAs are comparable in size to handheld calculators. • Controlled by or connected to another computer or to a network • on-line: on a regular route of a railroad or bus or airline system; "on-line industries" • Connected to the Internet or World Wide Web • on-line: connected to a computer network or accessible by computer; "an on-line database" • on-line(a): being in progress now; "on-line editorial projects" • Trigonometry • clean-cut: neat and smart in appearance; "a clean-cut and well-bred young man"; "the trig corporal in his jaunty cap"; "a trim beard" • trigonometry: the mathematics of triangles and trigonometric functions • Trigonometry (from Greek ' "triangle" + ' "measure") is a branch of mathematics that studies triangles. trig calculator online - Calculated Industries Calculated Industries 4080 Construction Master Pro Trigonometric Calculator Advanced Feet-Inch-Fraction Construction-Math Calculator with Full Trigonometric Functions The Construction Master Pro Trig Feet-Inch-Fraction calculator provides the building professional complete trigonometric function. This powerful and advanced construction-math calculator features new built-in solutions and expanded preference selection. It allows you to easily determine precise angle measurements and solve the most complex design and construction-math problems. Perfect for estimating materials and costs. In the field or office it helps assure accuracy, save time and money. This is a brand new item, we are an authorized Calculated Industries dealer. Trig Spotters Neil and Lorraine enthusiastically bag another trig. Hidden Trig An old trig hidden amongst the Black Beech. trig calculator online Dietary Supplement. Time released for best absorbtion. Hyaluronate 13, Glucosamine 15, Chondroitin 12. New Formula: Hyaluronate - a powerful key component of joint fluid that helps joints slide smoothly and comfortably. Plus - Doctor Recommended: Glucosamine and Chondroitin. 30-day supply of tri-packs. The power of 3 - Relieve, Build, Support. Maximum Strength Jointcare: A scientifically - advanced, once-a-day formulation that not only helps build and maintain healthy joints and cartilage, but also helps relieve joint discomfort. The new, improved formula offers smooth, steady, extended release of three powerful compounds for all day joint comfort. Hyaluronate - contains much more of this joint oil ingredient than similar formulations. Plus Glucosamine - helps build and nourish cartilage. Chondroitin - promotes comfort and flexibility. Get extended joint care, with Premium-grade ingredients, time-released throughout the day, for the same price as ordinary strength supplements. The time release daily dosage of Trigosamine provides nearly full release of it's powerful compound. In a test performed by a major consumer products quality assurance firm, other supplements were shown to quickly peak and decline, offering only about half of their glucosamine. Trigosamine offers consistent support releasing nearly all of its glucosamine within 24 hours. These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure or prevent any disease.
{"url":"https://sites.google.com/site/canyouuseacalculatoronth/trig-calculator-online-calculator-online","timestamp":"2014-04-24T20:28:09Z","content_type":null,"content_length":"25821","record_id":"<urn:uuid:eaf62b91-24f1-4c12-9ad8-55fecc8443fc>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Approximation and Exact Algorithms for Minimum­Width Annuli and Shells \Lambda Pankaj K. Agarwal y Boris Aronov z Sariel Har­Peled x Micha Sharir -- Let S be a set of n points in R d . The ``roundness'' of S can be measured by computing the width ! \Lambda (S) of the thinnest spherical shell (or annulus in R 2 ) that contains S. This pa­ per contains four main results related to computing ! \Lambda (S): (i) For d = 2, we can compute in O(n log n) time an annulus containing S whose width is at most 2! \Lambda (S). (ii) For d = 2 we can compute, for any given parameter '' ? 0, an an­ nulus containing S whose width is at most (1 + '')! \Lambda (S), in time O(n log n + n='' 2 ). (iii) For d – 3, given a pa­ rameter '' ? 0, we can compute a shell containing S of width at most (1 + '')! \Lambda (S) in time O i n '' d log i diam(S) ! \Lambda (S)'' or O
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/839/3807582.html","timestamp":"2014-04-20T01:04:14Z","content_type":null,"content_length":"8101","record_id":"<urn:uuid:75125bbd-8aec-48b1-96aa-a535857bd3cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numeracy 493] Re: What does equality mean? [Numeracy 493] Re: What does equality mean? Archived Content Disclaimer This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken. Chip Burkitt chip.burkitt at orderingchaos.com Sat Aug 14 12:59:24 EDT 2010 I think the difficulty is that mathematics requires rigorous definitions and logic, especially as one advances in it. However, for ABE or GED students, it is usually enough to know that the equal sign is like a balance scale. In order for the sides to be in balance, the expressions on both sides must have the same value. If you add something to one side, you must add it to the other side as well to maintain the balance. If you subtract from one side, you must subtract it from the other side as well. When students get into algebra, they need to know that some transformations of an expression can change the character of the equality. For example (−/a/)^2 = /a/^2, but it does NOT follow by taking the square root of both sides that −/a/ = /a/. Likewise, /y//(/x/ − 1) = 3 needs to be qualified by /x/ ≠ 1, even though the equation can be transformed to /y/ = 3/x/ − 3, which has a solution for /x/ = 1 at /y/ = 0. For most purposes in ABE or GED classes, the balance analogy works well without getting into abstract discussions about various kinds of equivalence relations and the transformations that change the relation or leave it unchanged. If anyone has a better explanation of the equal sign for ABE and GED students, I would like to hear it. Chip Burkitt On 8/14/2010 1:04 AM, Michael Gyori wrote: Greetings all, After all this discussion about what the equal sign (or equality) means, I find myself somewhat in a maze. A discussion of equality takes us into a potentially esoteric realm from the perspective of our Might it be time to attempt to more clearly (and simply!) define terms among those who teach math? Michael A. Gyori Maui International Language School www.mauilanguage.com <http://www.mauilanguage.com/> -------------- next part -------------- An HTML attachment was scrubbed... -------------- next part -------------- A non-text attachment was scrubbed... Name: chip_burkitt.vcf Type: text/x-vcard Size: 162 bytes Desc: not available Url : -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2283 bytes Desc: not available Url :
{"url":"http://lincs.ed.gov/pipermail/numeracy/2010/000518.html","timestamp":"2014-04-18T00:53:24Z","content_type":null,"content_length":"19918","record_id":"<urn:uuid:237d1403-4336-44ef-aafd-3d6f98c63f93>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Super Bowl Summaries Super Bowl Summaries Use http://www.supernfl.com/SuperBowl2.html to answer the questions. 1. Make a frequency and cumulative frequency chart for the locations of the Super Bowls. How many were there in all? 2. Which states had the most Super Bowls? How many? 3. Who was voted the Most Valuable Player the most times? Which team won the most Super Bowls? 4. Make a stem and leaf plot of the winning scores. What is the range, mode, median and mean for the scores? Now make a histogram. Don't forget to label all of the parts. 5. Of those teams in the different Super Bowls, which ones won 50% of their games or more? 6. Which team has the best Super Bowl record? How did you determine this? 7. What is the total number of points scored in all of the Super Bowls? 8. Which team appeared in the Super Bowl most often? 9. Of the 33 Most Valuable Players, what % were quarterbacks? Don't forget to make a ratio of quarterback to total MVP's first. Then, use your calculator's F/D key. 10. Make a ratio of winning NFL teams (NFC also) to AFL (AFC) teams. Which conference won most often? Super Bowl Summaries Solutions
{"url":"http://www.fi.edu/school/math/superbowl.html","timestamp":"2014-04-16T10:10:59Z","content_type":null,"content_length":"5630","record_id":"<urn:uuid:52ce41cf-1e1e-410e-87d7-f3e7c0f1e8a2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics A survey of multipartitions: Congruences and identities. (English) Zbl 1183.11063 Alladi, Krishnaswami (ed.), Surveys in number theory. New York, NY: Springer (ISBN 978-0-387-78509-7/hbk). Developments in Mathematics 17, 1-19 (2008). Summary: The concept of a multipartition of a number, which has proved so useful in the study of Lie algebras, is studied for its own intrinsic interest. Following up on the work of Atkin, we present an infinite family of congruences for ${P}_{k}\left(n\right)$, the number of $k$-component multipartitions of $n$. We also examine the enigmatic tripentagonal number theorem and show that it implies a theorem about tripartitions. Building on this latter observation, we examine a variety of multipartition identities connecting them with mock theta functions and the Rogers-Ramanujan identities. 11P81 Elementary theory of partitions 11P83 Partitions: congruences and congruential restrictions 05A30 $q$-calculus and related topics
{"url":"http://zbmath.org/?format=complete&q=an:1183.11063","timestamp":"2014-04-16T22:24:51Z","content_type":null,"content_length":"21355","record_id":"<urn:uuid:4699c692-bd13-481d-ab50-0615976eb9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Formula Function Reference Null means that a field's value is undefined. In other words, no one has entered any data in that particular field. It's empty. Its value is null.--The result of this function is true if x is null, otherwise false. The argument x may be of any data type (except text or boolean).
{"url":"http://quickbase.intuit.com/developer/documentation/formulas?page=10","timestamp":"2014-04-18T00:18:09Z","content_type":null,"content_length":"48642","record_id":"<urn:uuid:959f75bd-e42c-4784-914c-6247cfc42eaf>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Equilibrium & Isotropic Vector Matrix As has been stated throughout this website, the Vector Equilibrium (VE) is the most primary geometric energy array in the cosmos. According to Bucky Fuller, the VE is more appropriately referred to as a “system” than as a structure, due to it having square faces that are inherently unstable and therefore non-structural. Given its primary role in the vector-based forms of the cosmos, though, we include it in this section. The Vector Equilibrium, as its name describes, is the only geometric form wherein all of the vectors are of equal length and angular relationship (60° angles throughout). This includes both from its center point out to its circumferential vertices, and the edges (vectors) connecting all of those vertices. Having the same form as a cuboctahedron, it was Buckminster Fuller who discovered the significance of the full vector symmetry in 1917 and called it the Vector Equilibrium in 1940. With all vectors being exactly the same length and angular relationship, from an energetic perspective, the VE represents the ultimate and perfect condition wherein the movement of energy comes to a state of absolute equilibrium, and therefore absolute stillness and nothingness. As Fuller states, because of this it is the zero-phase from which all other forms emerge (as well as all dynamic energy events, as will be described below). In Fuller's own words... Structure of the Unified Field — The VE and Isotropic Vector Matrix The most fundamental aspect of the VE to understand is that, being a geometry of absolute equilibrium wherein all fluctuation (and therefore differential) ceases, it is conceptually the geometry of what we call the zero-point or Unified Field — also called the "vacuum" of space. In order for anything to become manifest in the universe, both physically (energy) and metaphysically (consciousness), it requires a fluctuation in the Unified Field, the result of which fluctuation and differential manifests as the Quantum and Spacetime fields that are observable and measurable. Prior to this fluctuation, though, the Unified Field exists as pure potential, and according to contemporary theory in physics it contains an infinite amount of energy (and in cosmometry, as well as spiritual philosophies, an infinite creative potential of consciousness). Being a geometry of equal vectors and equal 60° angles, it is possible to extend this equilibrium array infinitely outward from the center point of the VE, producing what is called the Isotropic Vector Matrix (IVM). Isotropic means “all the same”, Vector means “line of energy”, and Matrix means “a pattern of lines of energy”. It is this full isotropic vector matrix that can be seen as the infinitely-present-at-all-scales-and-in-perfect-equilibrium geometry of the zero-point Unified Field. Every point in this matrix is a potential center point of a VE around which a condition of dynamic fluctuation may arise to manifest. And as has been stated and is seen in this image, this VE geometry is inherent in this matrix (the green lines comprise the VE): The IVM also consists of a simple arrangement of alternating tetrahedron and octahedron geometries, as seen in this illustration: In fact, the VE itself can be seen to consist of a symmetrical array of eight tetrahedons with their bases representing the triangular faces of the VE, and all pointing towards the VE’s center point. (The square faces are the bases of half-octahedron, like the form of the pyramids in Eqypt.) Given this primary presence of tetrahedons in the VE and IVM, researcher Nassim Haramein sought to determine the most balanced symmetry of them that takes into account the positive and negative polarity of the IVM structure (i.e. “upward” and “downward” pointing tetrahedrons). He identified an arrangement of tetrahedrons in the IVM that, at a scale of complexity one level greater than the primary VE geometry, defines the most balanced array of energy structures (tetrahedons) wherein the positive and negative polarities are equal and without “gaps” in the symmetry. This arrangement consists of 32 positive and 32 negative tetrahedrons for a total 64, and looks like this (notice the underlying VE symmetry as well): Beyond the VE’s primary zero-phase symmetry, the 64 Tetrahedron Grid, as it is known, represents the first conceptual fractal of structural wholeness in balanced integrity. It is noteworthy that the quantity of 64 is found in numerous systems in the cosmos, including the 64 codons in our DNA, the 64 hexagrams of the I Ching (Chinese Book of Changes), the 64 tantric arts of the Kama Sutra, as well as in the Mayan Calendar’s underlying structure. It appears that the 64-based quantitative value is of primary importance in the fundamental structure of the Unified Field and how that field manifests from its implicate (pre-manifest) order to its explicate (manifest) order, both physically and metaphysically. (See also the relationship between the Analog and Digital realms describing numerically how both the binary 64-based system and the Phi-based Fibonacci system are in intimate coordination.) Other Primary Attributes of the Vector Equilibrium 12 Spheres Around 1 Another way of deriving the geometry of the VE is by using 13 spheres of the same diameter. Using one sphere as the center point, we can then pack twelve spheres around this “nucleus” sphere, as seen in the illustration below. Given that the diameter is the same for all of the spheres, the centers of each sphere will be equidistant from all of their adjacent neighbors, including the center one. The lines connecting their centers are the vectors of the VE. Because it’s an array of 12 spheres around one central sphere, we can refer to the VE’s geometry as a 12-around-1 system. We can then consider this system when we examine the cosmometric relationships of 12-based systems such as the 12-tone music scale, the astrological zodiac, and the Sectors of Human Concern. (See also this figure from Fuller's Synergetics) 4 Hexagonal Planes The VE also has the attribute of consisting of four hexagons symmetrically arrayed in four planes. As can be seen in the illustration below, there is one at the equator or horizon plane (red), one encircling the whole VE as if being viewed from directly above (blue), and two at left and right-tilting angles (green and purple). They are all 60°s from each other, and the angles they define are exactly the same as those of the faces of a tetrahedron. According to Fuller, this is the zero-tetrahedron, wherein the tetrahedron’s faces have all converged simultaneously on its center point. (It is also significant to note that the 8 triangular faces of the VE symmetrically match the 8 triangular faces of a star tetrahedron as well, this being a polar-balanced geometry of the tetrahedron’s most basic structural form; see the Tetrahedron page for more on this). It is because the VE has these four hexagonal planes defining its spatial coordinates (and therefore, too, the IVM) that Fuller says that the foundation of the cosmic geometry is actually 4-dimensional, as opposed to the conventional 3-dimensional 90° X,Y,Z coordinate system historically assumed to be fundamental. The Spherical VE or Genesa Crystal It is to the symmetrical arrangement of these four hexagonal planes that we align the four phi double spiral field patterns in the basic model of cosmometry. In essence, the points of these hexagons all touch the surface of a sphere, and the phi double spiral boundaries define in the simplest manner the great-circle vectors of a spherical VE. This form, pictured below, is also known as the Genesa Crystal, and is purported to possess the property of balancing and cleansing the energy of the environment surrounding it for a distance of 2 miles when using a 16” diameter model. (See this link for more information on the Genesa Crystal, and this video of one inhabiting the center of the garden at the Perelandra Center for Nature Research in Virginia, USA). In essence, this simple form, even when built solely of copper or brass strips or tubing, sets up a resonance with the underlying structure of the Unified Field, thus creating an island of coherence is a sea of naturally occurring “chaos,” amplifying the equilibrium state throughout its surrounding local field. The VE’s Relationship to the Cube and Octahedron In the terminology of basic geometry, the form that the VE defines as a solid is called a cuboctahedron (pronounced “cube-octahedron”). As is evident from its name, this form has a symmetrical relationship to both the cube and the octahedron, wherein the six square faces of the VE are symmetrical to the faces of a cube, and the eight triangular faces of the VE are symmetrical to the faces of an octahedron. Another way of seeing this is that the structures of both a cube and an octahedron can be “wrapped” around a VE, as seen in this video: This will be significant when we explore in the next section the dynamic nature of the VE’s ability to contract and expand and transfer energy and information seamlessly throughout the entire Unified Field across all scales instantaneously. The Jitterbug So far we have looked at the VE in its static state (or more correctly, its ultimate dynamic equilibrium zero state). In other words, we’ve looked at the form in its state of perfect symmetry. What is also quite remarkable about the VE is that, given it has six square faces and that squares are inherently non-structural (only triangles are structurally stable), the VE has the ability to “collapse” inward, drawing the twelve outer points symmetrically towards its center point. As it does so it goes from its state of perfect equilibrium (the zero-phase) into a dynamic “spin” that can contract in both clockwise and counterclockwise directions. When contracted and expanded alternating in both directions, it exhibits a dynamic “pumping” action that Fuller called the Jitterbug (after the dance of the 1930’s that was popular at the time he was exploring this phenomenon). It is due to this dynamic jitterbug motion of the VE that the entire manifest universe arises, and most fundamentally that the five platonic forms arise as the foundation of all structural geometry in the cosmos. To understand this, first consider again that the VE (and whole IVM) is the conceptual zero-phase cosmometry of the Unified Field. The Unified Field has an infinite energetic and creative potential. This potential is untapped until an impulse is introduced that causes the IVM to go out of equilibrium, and when it does so it “collapses the field” (as is often said in quantum physics when the consciousness of an observer seeks to determine the location or angular momentum of a quantum energy event) and an extremely minute amount of the Field’s infinite energy comes into a polarized dynamic of spin, differential, form and motion. A local energy event has emerged from the otherwise invisible and non-measurable Unified Field (a local energy event being something as basic as a photon, electron, proton, etc, and growing in complexity all the way to the macro scale of super-massive galactic clusters). Once a local energy event has arisen, a dynamic tension arises between its center point (its local “gravitational center”) and those in its near proximity as the contractive force of the collapsed field “pulls” on the surrounding field, and the center of each event tries to pull the others inward towards itself. When a stabilized system of such points arises, a geometric “tensegrity” form of energy is created. The center point of each event can be described as the “singularity” around which the event is manifesting that remains connected to the infinite energy/density of the Unified Field. In this way, we can say that everything has a center point, and all center points are one (because they are continuously connected to the Unified Field’s IVM structure and therefore entirely and constantly unified). It is this unification of center points that explains non-local effects in the quantum realm that are proven to exist experimentally, wherein a change in the state of a quantum particle will simultaneously cause the same change in state of a paired particle across vast distances instantly (apparently violating the proposed cosmic “speed limit” of the manifest universe that is traditionally defined as the speed of light). Giving Birth to the Universe As stated, as soon as the IVM/VE goes out of equilibrium, all of the dynamic forces that are observable and measurable arise — mass, angular momentum, spin, charge, etc. Differential is introduced as the once perfectly calm Unified Field fluctuates and the jitterbugging motion causes waves to emanate into the field, creating what we call the spacetime field of manifest universe. At the quantum level, this jitterbugging is happening at such a high rate (10^44 times per second, the rate at which quantum particles are said to pop in and out of existence) that we do not realize that it is constantly going back and forth between the manifest state (out of equilibrium) and the non-manifest state (perfect equilibrium VE). Again, this quantum dynamic shows how everything in the universe is both individual and unified simultaneously. What is remarkable about the cosmometry of this jitterbugging dynamic is that, in one swift motion all of the primary geometric forms — the platonic structures — come into energetic manifestation. As has been repeatedly mentioned, Fuller called the VE’s geometry the zero-phase. As the VE collapses inward and the square faces contract across one of their diagonals, the length of that diagonal distance becomes the same length as the VE’s edges. At this moment the symmetry of the icosahedron arises. This is what Fuller calls the icosahedral phase. (Note that the dodecahedron is the symmetrical “dual” of the icosahedron and is therefore energetically implied at this phase as well; and according to researcher Robert Gray, there is also a dodecahedral phase further along the contracting VE’s motion as illustrated below). Continuing on its inward journey, the square faces of the VE continue to contract across the diagonal until the gap is completely closed. At this moment the symmetry of the octahedron arises. This octahedral phase now displays a doubling of the vectors of the VE, creating an extremely strong bonding tension as is found in atomic elements that have octahedral symmetry. (Note that the cube is the symmetrical dual of the octahedron and is therefore implied in this phase as well, and as noted above is also inherent in the primary symmetry of the VE itself. Icosahedron Phase of VE Jitterbug Dodecahedron phase of VE Jitterbug From the viewpoint of a physical model of a VE contracting through these phases, the octahedron can be seen as the minimum contraction state, after which the jitterbugging motion must expand again and return to its fully expanded VE phase. From a purely energetic viewpoint, though, this motion can continue, and ultimately it reaches the phase of symmetry of the tetrahedron. In fact, it is even possible to show this using a physical model, being able to spin the jitterbugging contraction past the octahedron phase and folding the model into a tetrahedron (now with the vectors bonded four-fold). And this can also then be further collapsed into the most fundamental unit of structure, the triangle, with the VE’s edges now bonded six-fold — all without breaking any of the connections at the corner points of the VE model. In this way, we have now shown how the VE — the zero-phase potential of all form — inherently contains all of the primary platonic structures, and that they arise from the dynamic fluctuation of the Unified Field as it collapses through the various platonic phases into manifest form. Depending on the resonance of the dynamic tension (tensegrity) of the energy events at play, the different geometric forms will be seen as the underlying symmetry of structure. Infinite and Instantaneous Exchange of Energy and Information Throughout the Unified Field As mentioned previously, the collapsed VE can be seen to have the octahedron as its fully contracted form. And as also mentioned, the octahedron can also be wrapped around the fully expanded VE form, with the faces of the octahedron symmetrically aligned with the triangular faces of the VE. As such, we can then see how the same octagonal form is present at both the minimum and maximum phases of the jitterbugging motion, and can therefore also be deemed to fill either role at any moment (or both roles simultaneously, more appropriately). This is the fractal scaling nature of the dynamic motion of the VE wherein one scale’s maximum (VE form) has wrapped around it the next larger scale’s minimum octahedron form. This new minimum then expands to become the next scale’s maximum VE, itself then having the next minimum octahedron wrapped around it, and so on moving expandingly and contractingly up and down the cosmic micro to macro The next animation illustrates this phenomenon showing the simultaneous expansion and contraction alternating between scales and meeting in the octahedral moment that is symmetrical to both the manifest octahedron phase (minimum contracted state) and the unmanifest zero-phase of the VE. In this way we can see once again that inherent in the dynamic flow of energy in the quantum field is a continuous return to the zero-point Unified Field, at which moment all is instantaneously and infinitely unified. Click Image to View Animation And as this next animation illustrates, there is also an instant and infinite exchange of information throughout the entire field, depicted here as the small triangles that are picked up by the larger triangles and carried further up the scale ad infinitum. Each triangle is a packet of information relevant to the fractal scale at which it exists, and as soon as this packet of information reaches its new maximum expansion state, it becomes instantly available to the entire Unified Field. In this way, all information about energy events throughout the entire cosmos is instantly and constantly available to all other energy events in a cosmic feedback loop of individual and unified holographic integrity. This simple dynamic model and its implications as described lend a logical cosmometric explanation to such phenomena as clairvoyance, clairaudience, long-distance healing, quantum entanglement, etc. Click Image to View Animation The scaling ratio of this relationship is a doubling of the VE’s diameter at every iteration. It is octave scaling such as is found in sound frequencies and music. It is also important to understand that the jitterbugging motion of the VE within the isotropic field exhibits both an expansive and contractive dynamic simultaneously, with VE's adjacent to each other "shunting" energy through the matrix. This illustration by Robert Gray and Foster Gamble depicts this dynamic pulsation. Click Image to View Animation Jitterbugging Dynamic as Toroidal Flow Form As the VE jitterbug spins inward it sets up a differential of energy density (i.e. pressure, electromagnetic charge) that sets in motion a dual vortex flow that creates the form of a torus. The pumping of the jitterbug sustains this toroidal form in a balanced rhythmic exchange of energy that flows through the manifest system. From a fractal-holographic perspective, it is this fundamental dynamic that takes place at every scale, first expressed as photons, then sub-atomic particles, which then aggregate into the geometric arrays of atoms, which aggregate into compounds that form crystals, minerals, cells and organs, and then whole organisms such as trees, animals, us, and then ecosystems, atmospheres, planets, stars and galaxies. At every scale the toroidal flow dynamic is active as long as the coherence of the manifest energy is maintained. Once the coherence is lost (as energy dissipates or is disrupted due to internal or external factors), the toroidal form will no longer remain stable and will resolve back into a state of dynamic equilibrium. A good example of this is the appearance and disappearance of vortexes in a stream of water. The dynamic equilibrium is the stream moving as a whole. Within this stream the water will interact with an object like a rock, and the resulting pressure differentials will cause a collapse of the water’s field into a vortex (the vortex being, in this case, the visible portion of a complete toroidal flow dynamic that is occurring invisibly in the water). According to physicist David Bohm, this is the true nature of the underlying field (sub-quantum field, as he calls it) wherein there is a continuous current of flow (which he termed the holomovement) within which vortices of energy emerge (as photons, electrons, etc). These vortices are both distinct in their form and completely connected to each other and the whole current, just as is the case with water. This idea and the characteristics and principles related to the torus are explored in the Torus - Dynamic Flow Process section.
{"url":"http://www.cosmometry.net/vector-equilibrium-%26-isotropic-vector-matrix","timestamp":"2014-04-17T07:31:10Z","content_type":null,"content_length":"52462","record_id":"<urn:uuid:8b98fa2b-bc0d-4526-8d45-a393a95dd715>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra 1: Algebra 1 Quick Links Unit Downloads Learning to solve equations begins with simple equations in one variable and involves fundamental principles such as inverse properties and the concept of balance. The lessons in this unit illustrate the difference between expressions and equations, explore the additive inverse property, use a balance beam to solve equations, and examine the special cases in which the equation is an identity or for which the equation has no solution.
{"url":"http://education.ti.com/en/timathnspired/us/algebra-1/equations","timestamp":"2014-04-19T09:26:17Z","content_type":null,"content_length":"72253","record_id":"<urn:uuid:4c75479d-c9e2-4739-8269-69078d6c4688>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] [Newbie] Fast plotting Franck Pommereau pommereau@univ-paris12... Wed Jan 7 06:37:53 CST 2009 Hi all, First, let me say that I'm impressed: this mailing list is probably the most reactive I've ever seen. I've asked my first question and got immediately more solutions than time to test them... Many thanks to all the answerers. Using the various proposals, I ran two performance tests: - test 1: 2000000 random values - test 2: 1328724 values from my real use case Here are the various functions and how they perform: def f0 (x, y) : """Initial version test 1 CPU times: 13.37s test 2 CPU times: 5.92s s, n = {}, {} for a, b in zip(x, y) : s[a] = s.get(a, 0.0) + b n[a] = n.get(a, 0) + 1 return (numpy.array([a for a in sorted(s)]), numpy.array([s[a]/n[a] for a in sorted(s)])) def f1 (x, y) : """Alan G Isaac <aisaac@american.edu> Modified in order to sort the result only once. test 1 CPU times: 10.86s test 2 CPU times: 2.78s defaultdict indeed speeds things up, probably avoiding one of two sorts is good also s, n = defaultdict(float), defaultdict(int) for a, b in izip(x, y) : s[a] += b n[a] += 1 new_x = numpy.array([a for a in sorted(s)]) return (new_x, numpy.array([s[a]/n[a] for a in new_x])) def f2 (x, y) : """Francesc Alted <faltet@pytables.org> Modified with preallocation of arrays (it appeared faster) test 1: killed after more than 10 minutes test 2 CPU times: 22.01s This result is not surprising as I guess a quadratic complexity: one pass for each unique value in x, and presumably one nested pass to compute y[x==i] u = numpy.unique(x) m = numpy.array(range(len(u))) for pos, i in enumerate(u) : g = y[x == i] m[pos] = g.mean() return u, m def f3 (x, y) : """Sebastian Stephan Berg <sebastian@sipsolutions.net> Modified because I can always work in place. test 1 CPU times: 17.43s test 2 CPU times: 0.21s Adopted! This is definitely the fastest one when using real values. I tried to preallocate arrays by setting u=numpy.unique(x) and the looping on u, but the result is slower, probably because of unique() Compared with f1, its slower on larger arrays of random values. It may be explained by a complexity argument: f1 as a linear complexity (two passes in sequence) while f3 is probably N log N (a sequence of one sort, two passes to set x[:] and y[:] and one loop on each distinct value with a nested searchsorted that is probably logarithmic). But, real values are far from random, and the sort is probably more efficient, as well as the while loop is shorter because there are less values. s = x.argsort() x[:] = x[s] y[:] = y[s] u, means, start, value = [], [], 0, x[0] while True: next = x.searchsorted(value, side='right') if next == len(x): value = x[next] start = next return numpy.array(u), numpy.array(means) def f4 (x, y) : """Jean-Baptiste Rudant <boogaloojb@yahoo.fr> test 1 CPU times: 111.21s test 2 CPU times: 13.48s As Jean-Baptiste noticed, this solution is not very efficient (but works almost of-the-shelf). recXY = numpy.rec.fromarrays((x, x), names='x, y') return matplotlib.mlab.rec_groupby(recXY, ('x',), (('y', numpy.mean, 'y_avg'),)) A few more remarks. Sebastian Stephan Berg wrote: > Just thinking. If the parameters are limited, you may be able to use the > histogram feature? Doing one histogram with Y as weights, then one > without weights and calculating the mean from this yourself should be > pretty speedy I imagine. I'm afraid I don't know what the histogram function computes. But this may be something worth to investigate because I think I'll need it later on in order to smooth my graphs (by plotting mean values on intervals). Bruce Southey wrote: > If you use Knuth's one pass approach > you can write a function to get the min, max, mean and variance/standard > deviation in a single pass through the array rather than one pass for > each. I do not know if this will provide any advantage as that will > probably depend on the size of the arrays. If I understood well, this algorithm computes the variance of a whole array, I can see how to adapt it to compute mean (already done by the algorithm), max, min, etc., but I did not see how it can be adapted to my case. > Also, please use the highest precision possible (ie float128) for your > arrays to minimize numerical error due to the size of your arrays. Thanks for the advice! So, thank you again everybody. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-January/039510.html","timestamp":"2014-04-18T08:19:07Z","content_type":null,"content_length":"8058","record_id":"<urn:uuid:b4a56c1e-4f03-4810-9f68-8eb47e91c6af>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of étale for rings MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Let $A \to B$ be a ring extension. What is the definition of $B/A$ étale ? up vote 6 down vote favorite When $A$ is a field, do we get a nice characterization ? ag.algebraic-geometry ac.commutative-algebra add comment When $A$ is a field, do we get a nice characterization ? You say that a ring homomorphism $\phi: A \to B$ is étale (resp. smooth, unramified), or that $B$ is étale (resp. smooth, unramified) over $A$ is the following two conditions are • $A \to B$ is formally étale (resp. formally smooth, formally unramified): for every square-zero extension of $A$-algebras $R' \to R$ (meaning that the kernel $I$ satisfies $I^2 = 0$) the natural map $$\mathrm{Hom}_A(B, R') \to \mathrm{Hom}_{A}(B, R)$$ is bijective (resp. surjective, injective). • $B$ is esentially of finite presentation over $A$: $A \to B$ factors as $A \to C \to B$, where $A \to C$ is of finite presentation and $C \to B$ is $C$-isomorphic to a localization morphism $C \to S^{-1}C$ for some multiplicatively closed subset $S \subset C$. The second condition is just a finiteness condition; the meat of the concept is in the first one. Formal smoothness is often referred to as the infinitesimal lifting property. Geometrically speaking, it says that if the affine scheme $\mathrm{Spec}B$ is smooth over $\mathrm{Spec}A$, then any map from $\mathrm{Spec}B$ to $\mathrm{Spec}R$ lifts to any up vote 11 square-zero (and hence any infinitesimal) deformation $\mathrm{Spec}R'$. Moreover, if $\mathrm{Spec}B$ is étale over $\mathrm{Spec}A$ this lifting is unique. down vote accepted Differential-geometrically, unramifiedness, smoothness and étaleness correspond to the tangent map of $\mathrm{Spec}\phi$ being injective, surjective and bijective, respectively. In particular, étale is the generalization to the algebraic case of the concept of local isomorphism. There are two references you might want to consult. The first one, in which you can read all about the formal properties of these morphisms, is Iversen's "Generic Local Structure in Commutative Algebra". The second one, Hartshorne's "Deformation Theory", will give you a lot of information about the geometry; section 4 of chapter 1 (available online) talks about the infinitesimal lifting property. EDIT: The EGA definition of étale morphism of rings is slightly different from the above, in the sense that it requires finite presentation, not just locally of finite presentation: see the comments below. show 5 more comments You say that a ring homomorphism $\phi: A \to B$ is étale (resp. smooth, unramified), or that $B$ is étale (resp. smooth, unramified) over $A$ is the following two conditions are satisfied: The second condition is just a finiteness condition; the meat of the concept is in the first one. Formal smoothness is often referred to as the infinitesimal lifting property. Geometrically speaking, it says that if the affine scheme $\mathrm{Spec}B$ is smooth over $\mathrm{Spec}A$, then any map from $\mathrm{Spec}B$ to $\mathrm{Spec}R$ lifts to any square-zero (and hence any infinitesimal) deformation $\mathrm{Spec}R'$. Moreover, if $\mathrm{Spec}B$ is étale over $\mathrm{Spec}A$ this lifting is unique. Differential-geometrically, unramifiedness, smoothness and étaleness correspond to the tangent map of $\mathrm{Spec}\phi$ being injective, surjective and bijective, respectively. In particular, étale is the generalization to the algebraic case of the concept of local isomorphism. There are two references you might want to consult. The first one, in which you can read all about the formal properties of these morphisms, is Iversen's "Generic Local Structure in Commutative Algebra". The second one, Hartshorne's "Deformation Theory", will give you a lot of information about the geometry; section 4 of chapter 1 (available online) talks about the infinitesimal lifting EDIT: The EGA definition of étale morphism of rings is slightly different from the above, in the sense that it requires finite presentation, not just locally of finite presentation: see the comments Apparently $B$ should be finitely generated as a ring over $A$ and be a flat $A$-module, and the module of Kaehler differentials of $B$ over $A$ should vanish. When $A$ is a field, the up vote 5 characterization is that $B$ should be a finite direct sum of finite separable field extensions of $A$. down vote add comment Apparently $B$ should be finitely generated as a ring over $A$ and be a flat $A$-module, and the module of Kaehler differentials of $B$ over $A$ should vanish. When $A$ is a field, the characterization is that $B$ should be a finite direct sum of finite separable field extensions of $A$. That should be read "B is etale over A". This happens when the map from A->B is an etale ring map, which means that its dual map is an etale morphism of affine schems from SpecB->SpecA, which is defined: As with most things in ring theory, this condition is somewhat more trivial when A is a field. We get flatness since the only stalk of specA is A (spec A has one point), which is a field, so all of its modules are free, and hence flat. Unramifiedness will not always hold, but it's also lot easier because k is a field. If k is of characteristic zero, the extension is automatically separable, so then we only need to restrict to it being finite. up vote 2 down vote There's a more direct definition which says that the morphism A->B is a smooth ring map with relative dimension zero. If you'd like to read a section on them in more generality, you can check out Stacks-Git Chapter 7 section 85 (7.85) on page 366 . I'm sure it's also in Hartshorne. add comment That should be read "B is etale over A". This happens when the map from A->B is an etale ring map, which means that its dual map is an etale morphism of affine schems from SpecB->SpecA, which is As with most things in ring theory, this condition is somewhat more trivial when A is a field. We get flatness since the only stalk of specA is A (spec A has one point), which is a field, so all of its modules are free, and hence flat. Unramifiedness will not always hold, but it's also lot easier because k is a field. If k is of characteristic zero, the extension is automatically separable, so then we only need to restrict to it being finite. There's a more direct definition which says that the morphism A->B is a smooth ring map with relative dimension zero. If you'd like to read a section on them in more generality, you can check out Stacks-Git Chapter 7 section 85 (7.85) on page 366 . If f is a map of local rings $$f:A\rightarrow B$$ is étale iff it is flat and unramified (check out Bhargav Bhatt's notes at the stacks project link text). If A is a field and B is finite over A, then f is étale iff B is isomorphic to a finite product of separable field extensions of A (see proposition I.3.1 of Milne's book "Étale cohomology"). More generally, for f any ring up vote homomorphism, check out definition II.1.1 of SGA 4.5 (B is a finitely presented A-algebra and satisfies a Jacobian criterion is a possible definition. Or B is a finitely presented A-algebra 1 down and B is flat and the relative differentials are trivial). The definition comes down to "smooth of relative dimension 0". add comment If f is a map of local rings $$f:A\rightarrow B$$ is étale iff it is flat and unramified (check out Bhargav Bhatt's notes at the stacks project link text). If A is a field and B is finite over A, then f is étale iff B is isomorphic to a finite product of separable field extensions of A (see proposition I.3.1 of Milne's book "Étale cohomology"). More generally, for f any ring homomorphism, check out definition II.1.1 of SGA 4.5 (B is a finitely presented A-algebra and satisfies a Jacobian criterion is a possible definition. Or B is a finitely presented A-algebra and B is flat and the relative differentials are trivial). The definition comes down to "smooth of relative dimension 0".
{"url":"https://mathoverflow.net/questions/8451/definition-of-etale-for-rings/8455","timestamp":"2014-04-21T07:28:16Z","content_type":null,"content_length":"83080","record_id":"<urn:uuid:6ee29545-15e6-4418-805a-b272b02ae52f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00209-ip-10-147-4-33.ec2.internal.warc.gz"}
convert 100 cm to inches You asked: convert 100 cm to inches Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/convert_100_cm_to_inches","timestamp":"2014-04-24T03:02:43Z","content_type":null,"content_length":"59878","record_id":"<urn:uuid:9c6425ee-fb95-442b-bef6-c8e2603c4015>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Millersville, MD Math Tutor Find a Millersville, MD Math Tutor ...Matlab can handle vast amounts of input data and manipulate the data in accordance with the instructions that the user provides. It has amazing plotting capabilities with both 2-D and 3-D plots. It also provides a vast array of statistical functions including means, variances, medians, and modes of data sets. 17 Subjects: including actuarial science, linear algebra, algebra 1, algebra 2 ...In addition, I have also worked with students in the tutoring sessions to apply these skills to the various subjects that I have taught. I am a highly-qualified teacher, licensed to teach ESL/ ESOL K-12 in the state of Maryland. I have successfully taught ESOL for five years in the public schools. 24 Subjects: including geometry, ACT Math, SAT math, prealgebra I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi... 15 Subjects: including prealgebra, probability, algebra 1, algebra 2 ...I took 10 years of formal lessons and have played for weddings, receptions, & parties. I have experience teaching elementary age and tutoring older children/adults on piano techniques. I was a music/choir director for my church when I was in graduate school. 33 Subjects: including calculus, chemistry, SAT math, physics John received his Bachelor's Degree in Computer Science from Morehouse College and a Master of Business Administration (MBA) from Georgia Tech with concentrations in Finance and Information Technology. He has served as a Life Leadership Adviser for the NBMBAA Leaders of Tomorrow Program (LOT) for t... 18 Subjects: including ACT Math, geometry, SAT math, prealgebra Related Millersville, MD Tutors Millersville, MD Accounting Tutors Millersville, MD ACT Tutors Millersville, MD Algebra Tutors Millersville, MD Algebra 2 Tutors Millersville, MD Calculus Tutors Millersville, MD Geometry Tutors Millersville, MD Math Tutors Millersville, MD Prealgebra Tutors Millersville, MD Precalculus Tutors Millersville, MD SAT Tutors Millersville, MD SAT Math Tutors Millersville, MD Science Tutors Millersville, MD Statistics Tutors Millersville, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Millersville_MD_Math_tutors.php","timestamp":"2014-04-21T10:33:34Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:0fb2df79-b0ca-4244-ab7f-a6fd04d1ac6a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/lethal/medals","timestamp":"2014-04-20T14:01:46Z","content_type":null,"content_length":"98804","record_id":"<urn:uuid:1726b0bf-95dd-4239-bc2c-998345572140>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] two questions about Godel Nik Weaver nweaver at math.wustl.edu Fri Feb 17 03:26:19 EST 2006 Harvey Friedman: > Explaining why you disagree so condidentally with Kurt Godel > is a good and fair question that I now reiterate. Where and > why did Godel go wrong? It's a fair question if you have in mind some particular argument of Godel's in favor of platonism. You don't specify one. The argument of his that I'm familiar with is the one about having an intuitive perception of the objects of set theory that is analogous to sense perception. I can't tell you *why* Godel went wrong on this --- that seems like a psychological question --- but I can tell you what I think is wrong with the argument. I don't believe we do have any perception of the actual objects of set theory, as I think these supposed objects are fictitious. What Godel is thinking of is our intuitive perception of the structure that is supposed to be embodied in these fictional objects, not any direct perception of the objects themselves. When I intuitively perceive that 2 + 2 = 4 it is not the case that my mind is in some way reaching out into the netherworld and grasping hold of some abstract entities, "2" and "4". Rather I have, say, a mental picture of two dots approaching two other dots and becoming four dots. The actual picture involved can vary and it might not need to be involve visual images at all, but I think Godel's analogy with sense perception is telling because no matter how one intuitively perceives that 2 + 2 = 4, undoubtedly some brain structures used in sense perpection will be involved. I imagine the same is true in all instances of mathematical intuition. Some brain structures used for sensory processing will always be involved, and that is why we have the feeling of direct perception of mathematical truth that is so like sensory perception. This is no evidence for the actual existence of non-physical abstract mathematical objects. > Do you think that he ever subscribed to one of the controversial > stopping places for predicativity? The question doesn't make sense; you're confused about my work on predicativism. You know that Feferman and Schutte proposed Gamma_0 as a stopping point for predicativism. You are also aware that I have challenged this proposal and argued that one can predicatively access ordinals beyond Gamma_0. You have evidently read the title of my paper "Predicativity beyond Gamma_0", but I can see you haven't read the paper itself because you've somehow got the idea that I have put forward some other "stopping place" for predicativism. In fact I have proposed formal systems which (I argue) are predicatively valid and go well beyond Gamma_0; the strongest one gets up to phi_{Omega^omega}(0). However, predicativism doesn't stop there. It would be easy to strengthen that system a little bit in a predicatively legitimate way and get a little further. The open challenge is to find a systematic way of (predicatively legitimately) strengthening it which goes substantially farther and captures a larger proof-theoretically significant ordinal. Moreover, I extensively argue in the Gamma_0 paper that it is highly implausible that one could ever precisely identify any ordinal as the "stopping point" of predicativism. So your declarations that predicativism is too vague to be identified with a precise ordinal is not as distressing to me as you might However, your assumption that I have merely come up with a different "story" whose correspondence with predicativism is neither better nor worse than the one adopted by Feferman and Schutte is not correct. I don't have any different "story". I argue that the principles accepted as predicative by Feferman and Schutte in fact justify systems which go beyond Gamma_0. I also argue very directly and at length that there is no coherent philosophical stance which would lead one to accept all ordinals less than Gamma_0 but not Gamma_0 itself. Anyway your question is strange because Godel had become a platonist long before Feferman and Schutte made their ordinal analysis. Some time during the 1950s Hao Wang had the idea that one could predicatively access L_alpha for every recursive alpha but not beyond. I don't know what Godel thought of that but he was a platonist by then too. I'm not a Godel scholar though, and I'm not particularly interested in WWGD (What Would Godel Do) questions, and I don't think I'll answer any more of More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-February/009872.html","timestamp":"2014-04-18T14:01:46Z","content_type":null,"content_length":"6848","record_id":"<urn:uuid:0191deff-27a9-4f83-98ed-e87bfd2d39a0>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear eta product identities - how many are there? up vote 9 down vote favorite For the Dedekind eta function, defined as usual by $\eta(q) = q^{\frac1{24}} \prod\limits_{n=1}^{\infty} (1-q^{n}) $, let for brevity $e_k:=\eta(q^k)$. With this notation, a blog entry of Michael Somos gives three beautiful identities for sums of $\eta$-products where all exponents are only $0$ or $1$: $I_{60}:\qquad \ \ e_{1}e_{12}e_{15}e_{20} + e_{3}e_{4}e_{5}e_{60}=e_{2}e_{6}e_{10}e_{30} $ $I_{210}:\qquad e_{1}e_{30}e_{35}e_{42 }+ e_{3}e_{10}e_{14}e_{105} = e_{2}e_{15}e_{21}e_{70} +e_{5}e_{6}e_{7}e_{210}$ $I_{30}:\qquad\ \ e_{1}e_{3}e_{5}e_{15}+ 2e_{2}e_{6}e_{10}e_{30}=e_{1}e_{2}e_{15}e_{30}+ e_{3}e_{5}e_{6}e_{10} $ For $I_{60}$ and $I_{30}$ the structure is clear at one glance if we write the divisors of $60$ as vertices of a Cayley-like graph (here: 'union' of two cube graphs $Q_3$ with the common face $ (2,6,30,10)$ ): Alternatively, if we define $a_0:=e_{1} e_{15} \qquad b_0:= e_{3} e_{5} $ $a_1:=e_{2} e_{30} \qquad b_1:= e_{6} e_{10} $ $a_2:=e_{4} e_{60} \qquad b_2:= e_{12} e_{20} $ $I_{60}\iff a_0b_2+b_0a_2=a_1b_1$ $I_{30}\iff a_0a_1+b_0b_1=a_0b_0+2a_1b_1$. For $I_{210}$ the symmetry is a bit less obvious to see. We can identify the divisors of $210$ with the vertices of a tesseract graph $Q_4$ or write the factors of the four products as lines of a matrix and note $a_{i,j}a_{4-i,j}=210$ as well as the factor $3$ between the two pairs of lines: I'd suggest to call identities of this type linear eta product identities. Their linearity seems to enforce a high degree of symmetry in the way these three identities $I_n$ feature the divisors of $n$, which makes them very special among the thousands of known eta product identities. It looks like there is something deeper behind. And: Why do all products have exactly $4$ factors? So, more precisely, for naturals $a\ge b$ let's define a linear eta product identity of type $\mathbf{(a,b)}$ as an identity $L_1+\cdots+L_a=R_1+\cdots+R_b$, where each $L_i$ and each $R_i$ is a finite product of pairwise different terms of form $\eta(q^{\lambda})$ with $\lambda\in\mathbb N$. (The products $L_i$ and $R_i$ don't need to be all different, e.g. the above $I_{30}$ is of type $ (3,2)$ with $L_2=L_3$. But of course we want $\{L_i\}\cap\{R_i\}=\emptyset$, and also that the gcd of all the $\lambda$'s is $1$.) Somos conjectures that $I_{60}$ is the only linear identity of type $(2,1)$. Is it possible that the three above identities are only the first ones of a whole (infinite?) set of linear eta product identities, and/or that for naturals $a\ge b$, there is at most one such identity of type $(a,b)$? co.combinatorics modular-forms special-functions nt.number-theory Kyoji Saito has papers on various eta product identities... I am not sure this is relevant, just a comment. – Alexander Chervov Mar 20 '12 at 7:23 If we define $a_k =\eta(q^k)\,\eta(q^{PQk})$ and $b_k= \eta(q^{Pk})\,\eta(q^{Qk})$, then for $P,Q =3,5$ we have $I_{60} \iff a_1b_4+a_4b_1 = a_2b_2$. For $P,Q =5,7$, it is $I_{210} \iff a_1b_6+a_3b_2 = a_2b_3+a_6b_1$. It's so tempting to speculate that these belong to an infinite family for appropriately chosen primes $P,Q$. – Tito Piezas III Feb 1 at 1:13 I tried $P,Q = 11,13$. Since $LCM(11\cdot12\cdot13) =1726$ (which has 24 divisors), I hoped to find linear relations between the 6 real numbers $a_1 b_{12},\, a_2 b_6,\, a_3 b_4,\, a_4 b_3,\, a_6 b_2,\, a_{12} b_1$. Unfortunately, Mathematica couldn't seem to find anything. Sigh. – Tito Piezas III Feb 1 at 1:15 @TitoPiezasIII Your notation $a_k$ and $b_k$ is better than mine (which only handles $k$'s that are powers of 2), moreover it shows that for $I_{60}$ and $I_{210}$, all terms $a_kb_\ell$ have $k\ ell=const$. I agree with you, that seems to cry for generalization, but in between I have gained the impression that in spite of the thousands of existing eta-identities, everything is finite there in terms of re-occuring patterns. – Wolfgang Feb 1 at 14:32 There are some more linear ones, 14 altogether in Somos' collection, some of them with products of four, but many terms (see the 2 last ones in Somos’ level 300 file, the two of level 450 and the level 945 one), others with products of six (to wit, two pairs for each 180 & 300, one for 252 somewhat similar to $I_{60}$, and one for 240). Somos has searched in vain for linear ones with products of eight. They all have many internal symmetries, as may be expected. E.g. the pairs for 180 and 300 are perfectly "isomorphic" to each other: switch all factors 3 with factors 5. Interesting! – Wolfgang Feb 2 at 18:50 show 1 more comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged co.combinatorics modular-forms special-functions nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/91672/linear-eta-product-identities-how-many-are-there","timestamp":"2014-04-19T15:22:30Z","content_type":null,"content_length":"56089","record_id":"<urn:uuid:b97be79f-ed88-4868-b515-ba4eb4f96a02>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Automatic Verification of Safety and Liveness for XScale-Like Processor Models Using WEB-Refinements Automatic Verification of Safety and Liveness for XScale-Like Processor Models Using WEB-Refinements Panagiotis Manolios and Sudarshan K. Srinivasan. CERCS TR# GIT-CERCS-03-17 We show how to automatically verify that a complex XScale-like pipelined machine model is a WEB-refinement of an instruction set architecture model, which implies that the machines satisfy the same safety and liveness properties. Automation is achieved by reducing the WEB-refinement proof obligation to a formula in the logic of Counter arithmetic with Lambda expressions and Uninterpreted functions (CLU). We use UCLID to transform the resulting CLU formula into a CNF formula, which is then checked with a SAT solver. We define several XScale-like models with out of order completion, including models with precise exceptions, branch prediction, and interrupts. We use two types of refinement maps. In one, flushing is used to map pipelined machine states to instruction set architecture states; in the other, we use the commitment approach, which is the dual of flushing, since partially completed instructions are invalidated. We present experimental results for all the machines modeled, including verification times. For our application, we found that the SAT solver Siege provides superior performance over Chaff and that the amount of time spent proving liveness when using the commitment approach is less than 1% of the overall verification time, whereas when flushing is employed, the liveness proof accounts for about 10% of the verification time. Gzipped Postscript (56K) PDF (57K) Postscript (185K)
{"url":"http://www.ccs.neu.edu/home/pete/research/safety-liveness-xscale-web.html","timestamp":"2014-04-21T05:04:27Z","content_type":null,"content_length":"3960","record_id":"<urn:uuid:99bf2476-b864-45fa-9983-0957409379d3>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Equation for tangent line of inverse function. September 15th 2011, 11:12 AM #1 Sep 2011 Equation for tangent line of inverse function. Let f(x)=(x^4)+(x^3)+1 Let g(x) be the inverse of f(x) and define F(x)=f(2g(x)). Find an equation for the tangent line to y=F(x) at x=3. The answer the book gives is y=(88x-89)/7 I just have no clue how to get there! Please help! Re: Equation for tangent line of inverse function. If $y= f(2f^{-1}(x))$, by the chain rule, the derivative is $y'= f'(2f^{-1}(x))(2(f^{-1}(x)'$. Further, ( $(f^{-1})'= \frac{1}{f'(y)}$ where y is such that f(y)= x. Here, $f(x)= x^4+ x^3+ 1$ so $f'(x)= 4x^3+ 3x^2$. g(x) will be the value, y, such that f(y)= $y^4+ y^3+ 1= 3$. Fortunately, it is clear that y= 1 satisfies that equation so $g(3)= f^{-1}(3)= 1$ September 15th 2011, 11:25 AM #2 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/188050-equation-tangent-line-inverse-function.html","timestamp":"2014-04-17T09:50:16Z","content_type":null,"content_length":"34098","record_id":"<urn:uuid:e5c0e98c-6cb7-4d88-a256-23b9d08fcd0e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
The π-calculus as a theory in linear logic: Preliminary results Results 1 - 10 of 79 - In Proceedings of 9th Annual IEEE Symposium On Logic In Computer Science , 1994 "... The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [12], provide data types, higher-order programming) but lack primitives for concurrency. The logic pro ..." Cited by 86 (7 self) Add to MetaCart The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [12], provide data types, higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects) [2] provides for concurrency but lacks abstraction mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends the languages λProlog, Lolli, and LO. Forum, therefore, allows specifications to incorporate both abstractions and concurrency. As a meta-language, Forum greatly extends the expressiveness of these other logic programming languages. To illustrate its expressive strength, we specify in Forum a sequent calculus proof system and the operational semantics of a functional programming language that incorporates such nonfunctional features as counters and references. 1 - Theoretical Computer Science , 1996 "... The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [15], provide for various forms of abstraction (modules, abstract data types, and higher-order program ..." Cited by 85 (11 self) Add to MetaCart The theory of cut-free sequent proofs has been used to motivate and justify the design of a number of logic programming languages. Two such languages, λProlog and its linear logic refinement, Lolli [15], provide for various forms of abstraction (modules, abstract data types, and higher-order programming) but lack primitives for concurrency. The logic programming language, LO (Linear Objects) [2] provides some primitives for concurrency but lacks abstraction mechanisms. In this paper we present Forum, a logic programming presentation of all of linear logic that modularly extends λProlog, Lolli, and LO. Forum, therefore, allows specifications to incorporate both abstractions and concurrency. To illustrate the new expressive strengths of Forum, we specify in it a sequent calculus proof system and the operational semantics of a programming language that incorporates references and concurrency. We also show that the meta theory of linear logic can be used to prove properties of the objectlanguages specified in Forum. , 2003 "... The Concurrent Logical Framework, or CLF, is a new logical framework in which concurrent computations can be represented as monadic objects, for which there is an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous con ..." Cited by 73 (25 self) Add to MetaCart The Concurrent Logical Framework, or CLF, is a new logical framework in which concurrent computations can be represented as monadic objects, for which there is an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives# of intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the asynchronous connectives -#, & and #. , 2003 "... this paper, we do this by adding the #-quantifier: its role will be to declare variables to be new and of local scope. The syntax of the formula # x.B is like that for the universal and existential quantifiers. Following Church's Simple Theory of Types [Church 1940], formulas are given the type ..." Cited by 61 (14 self) Add to MetaCart this paper, we do this by adding the #-quantifier: its role will be to declare variables to be new and of local scope. The syntax of the formula # x.B is like that for the universal and existential quantifiers. Following Church's Simple Theory of Types [Church 1940], formulas are given the type o, and for all types # not containing o, # is a constant of type (# o) o. The expression # #x.B is ACM Transactions on Computational Logic, Vol. V, No. N, October 2003. 4 usually abbreviated as simply # x.B or as if the type information is either simple to infer or not important - IN JOINT INTL. CONFERENCE AND SYMPOSIUM ON LOGIC PROGRAMMING , 1996 "... In previous work, we developed Transaction Logic (or T R), which deals with state changes in deductive databases. T R provides a logical framework in which elementary database updates and queries can be combined into complex database transactions. T R accounts not only for the updates themselves, bu ..." Cited by 58 (15 self) Add to MetaCart In previous work, we developed Transaction Logic (or T R), which deals with state changes in deductive databases. T R provides a logical framework in which elementary database updates and queries can be combined into complex database transactions. T R accounts not only for the updates themselves, but also for important related problems, such as the order of update operations, non-determinism, and transaction failure and rollback. In the present paper, we propose Concurrent Transaction Logic (or CT R), which extends Transaction Logic with connectives for modeling the concurrent execution of complex processes. Concurrent processes in CT R execute in an interleaved fashion and can communicate and synchronize themselves. Like classical logic, CT R has a "Horn" fragment that has both a procedural and a declarative semantics, in which users can program and execute database transactions. CT R is thus a deductive database language that integrates concurrency, communication, and updates. All th... - OF LECTURE NOTES IN COMPUTER SCIENCE , 2001 "... We introduce the calculus of structures: it is more general than the sequent calculus and it allows for cut elimination and the subformula property. We show a simple extension of multiplicative linear logic, by a self-dual non-commutative operator inspired by CCS, that seems not to be expressible in ..." Cited by 58 (22 self) Add to MetaCart We introduce the calculus of structures: it is more general than the sequent calculus and it allows for cut elimination and the subformula property. We show a simple extension of multiplicative linear logic, by a self-dual non-commutative operator inspired by CCS, that seems not to be expressible in the sequent calculus. Then we show that multiplicative exponential linear logic benefits from its presentation in the calculus of structures, especially because we can replace the ordinary, global promotion rule by a local version. These formal systems, for which we prove cut elimination, outline a range of techniques and properties that were not previously available. Contrarily to what happens in the sequent calculus, the cut elimination proof is modular. , 1993 "... An overview of linear logic is given, including an extensive bibliography and a simple example of the close relationship between linear logic and computation. ..." Cited by 53 (8 self) Add to MetaCart An overview of linear logic is given, including an extensive bibliography and a simple example of the close relationship between linear logic and computation. - Formal Aspects of Computing , 1995 "... We propose a new framework called ACL for concurrent computation based on linear logic. ACL is a kind of linear logic programming framework, where its operational semantics is described in terms of proof construction in linear logic. We also give a model-theoretic semantics as a natural extension of ..." Cited by 47 (6 self) Add to MetaCart We propose a new framework called ACL for concurrent computation based on linear logic. ACL is a kind of linear logic programming framework, where its operational semantics is described in terms of proof construction in linear logic. We also give a model-theoretic semantics as a natural extension of phase semantics, a model of linear logic. Our framework well captures concurrent computation based on asynchronous communication. It will, therefore, provide us with a new insight into other models of concurrent computation from a logical point of view. We also expect ACL to become a formal framework for verification, reasoning, and transformation of concurrent programs by the use of techniques for traditional logic programming. ACL's attractive features for concurrent programming paradigms are also discussed. 1 Introduction For future massively parallel processing environments, concurrent programming languages based on asynchronous communication would become more and more important. Due ... - Proceedings of the 1993 International Logic Programming Symposium , 1993 "... We propose a novel concurrent programming framework called ACL. ACL is a variant of linear logic programming, where computation is described in terms of bottom-up proof search of some formula in linear logic. The whole linear sequent calculus is too non-deterministic to be interpreted as an operatio ..." Cited by 46 (4 self) Add to MetaCart We propose a novel concurrent programming framework called ACL. ACL is a variant of linear logic programming, where computation is described in terms of bottom-up proof search of some formula in linear logic. The whole linear sequent calculus is too non-deterministic to be interpreted as an operational semantics for a realistic programming language. We restrict formulas and accordingly refine inference rules for those formulas, hence overcoming this problem. Don't care interpretation of non-determinism in the resulting system yields a very clean and powerful concurrent programming paradigm based on message-passing style communication. It is remarkable that each ACL inference rule has an exact correspondence to some operation in concurrent computation and that non-determinism in proof search just corresponds to an inherent non-determinism in concurrent computation, namely, non-determinism on message arrival order. We demonstrate the power of our ACL framework by showing several programm... , 2002 "... CLF is a new logical framework with an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives # of intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the ..." Cited by 46 (30 self) Add to MetaCart CLF is a new logical framework with an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives # of intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the asynchronous connectives #.
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.160.8309","timestamp":"2014-04-24T09:21:45Z","content_type":null,"content_length":"37656","record_id":"<urn:uuid:30320082-9bef-46d6-b165-07fe8c35340c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
arithmetic exception Author arithmetic exception 1) ArithmeticException will occur after execution of the following code Joined: Aug 14, int i = 10; 2001 float f = 0.0; Posts: 23 double d = i/f; this is the answer and i can't understand it .i think that float f is 0 so divide by zero give arithmetic exception. may be this answer is becoz it give runtime error not exception am i right? ronak please help. integers give an ArithmeticException when division by zero occurs. A float divided by zero will give positive infinity, negative infinity or NaN. Joined: Jul 22, 2000 This question does not belong in the JavaRanch forum. The JavaRanch forum is for discussion of the JavaRanch site, not questions about Java. I'm moving this to Java in General Posts: 9043 (beginner). 10 JavaBeginnersFaq "Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt "The Hood" In theory 0.0f is a rounding of some number with a zillion decimal places but that is not necessarily zero. Therefore the division is legal, just results in a VERY SMALL or VERY Joined: Sep 29, LARGE numbers. That is why the answer comes back as POSITIVE_INFINITY. Posts: 8521 "JavaRanch, where the deer and the Certified play" - David O'Meara subject: arithmetic exception
{"url":"http://www.coderanch.com/t/389500/java/java/arithmetic-exception","timestamp":"2014-04-20T13:38:06Z","content_type":null,"content_length":"22256","record_id":"<urn:uuid:cd1cce72-fc70-4c60-ab98-b7098dd3dc51>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Anthony Henderson I am an Associate Professor in the School of Mathematics and Statistics at the University of Sydney, and currently (Jan-May 2014) a Visiting Fellow of the Mathematical Sciences Institute at the Australian National University. Here is my curriculum vitae (pdf, 8 pages). Postal address: A/Prof Anthony Henderson School of Mathematics and Statistics F07 University of Sydney NSW 2006 Office: Room 805 Carslaw Building Email: anthony.henderson[at]sydney.edu.au I am a member of the Algebra research group. Broadly speaking, I am interested in geometric and combinatorial aspects of representation theory. The papers and talks listed below can be categorized into three main themes. • I have a long-standing interest in the geometry of nilpotent varieties and Springer fibres, and their generalizations such as orbit closures in Hilbert nullcones and Nakajima's quiver varieties. It is often the case that the (intersection) cohomology of these varieties controls decomposition numbers in representation theory. See preprints 1-3, research publications 1,3,10,13,15-17,19,21-22, expository publication 1 and talks 1-4. Current funding for research in this area: In November 2011, I was awarded a four-year Future Fellowship by the Australian Research Council for the project `Springer fibres, nilpotent cones, and representation theory'. • I have also worked in more combinatorial areas related to hyperplane arrangements and wonderful compactifications. My interest has been in computing the characters of the representations of symmetric groups and wreath products on the cohomology of various varieties on which they act. See research publications 5-8,11,12,14,18,20. • The subject of my PhD thesis and subsequent research was the extension of Lusztig's results on reductive groups over a finite field to the case of symmetric spaces. See research publications In September 2012, I was one of two recipients of the Australian Mathematical Society Medal, awarded annually to a member of the Society under the age of 40 for distinguished research in the mathematical sciences. I thereby became a Fellow of the Australian Mathematical Society. In 2011, I was the inaugural winner of the Christopher Heyde Medal of the Australian Academy of Science. This medal is for an outstanding researcher in the mathematical sciences under the age of 40, working in Australia: it is awarded annually on a three-year rotation between pure mathematics, applied mathematics, and statistics. I am currently the Elected Vice-President of the Australian Mathematical Society and an Associate Editor of the Journal of the Australian Mathematical Society. Research Publications Expository Publications 1. Enhancing the Jordan canonical form, Austral. Math. Soc. Gaz. 38 (2011), no. 4, 206-211. 2. Representations of Lie Algebras: An Introduction Through gl[n], Australian Mathematical Society Lecture Series, no. 22, Cambridge University Press, Cambridge, 2012, available from CUP America, Amazon, Amazon UK and Book Depository. See the reviews on MathSciNet, zbMATH, Choice (subscriptions required). Selected Recent Talks Postdoctoral Supervision Since July 2013, I supervise the University of Sydney Postdoctoral Fellowship of Alan Stapledon. Postgraduate Supervision I supervise or have supervised the following postgraduate students. I have also supervised six Honours projects and eight Vacation Scholarships. For information about available PhD projects, see my page in Research Supervisor Connect. Undergraduate Teaching In May 2009, I was awarded a Faculty of Science Citation for Excellence in Teaching. I have lectured the following units of study. • Semester 1, 2007-08: MATH1901 Differential Calculus (Advanced). • Semester 1, 2005-06,08-10: Special Studies Program unit MATH1906. • Semester 1, 2007-09: MATH2069 Discrete Mathematics and Graph Theory (joint with the Advanced version MATH2969). • Semester 1, 2011 and Semester 2, 2013: Special Studies Program unit MATH2916/MATH2917. • Semester 2, 2011: MATH2968 Algebra (Advanced). • Semester 2, 2006-10: MATH3966 Modules and Group Representations (Advanced). • Semester 1, 2003-04: MATH3002 Rings and Fields. • Semester 2, 2005 and Semester 1, 2011: Pure Mathematics Honours unit Lie Algebras. • Semester 1, 2002: Pure Mathematics Honours unit Introduction to Lie Theory. • Summer 2004 and 2007: Lie Algebras in the AMSI Summer School. I also run the SUMS Problem Competition for undergraduates from around Australia. Some Links
{"url":"http://www.maths.usyd.edu.au/u/anthonyh/","timestamp":"2014-04-16T22:25:54Z","content_type":null,"content_length":"21762","record_id":"<urn:uuid:bf11f43b-63c3-45b3-8616-58f1cae512f5>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
John Tukey and the Beginning of Interactive Graphics To fully appreciate the revolutionary nature of PRIM-9 one has to view it against the backdrop of its time. When Statistics was widely taken to be synonymous with inference and hypotheses testing, PRIM-9 was a purely descriptive instrument designed for data exploration. When statistics research meant research in statistical theory, employing the tools of mathematics, the research content of PRIM-9 was in the area of computer-human interfaces, drawing on tools from computer science. When the product of statistical research was theorems published in journals, PRIM-9 was a program documented in a movie. John W. Tukey's Work on Interactive Graphics. The Annals of Statistics, Vol. 30 No. 6. 2002. Luckily, you can appreciate Tukey's work here at the ASA video library. It's even more amazing when you consider where computers and technology were at back then. Who knows where Statistics would be if it weren't for Tukey and his brilliance and creativity. I can't imagine, or maybe I just don't want to. Tukey was someone who truly understood data -- structure, patterns, and what to look for -- and because of that, he was able to create something amazing. 3 Comments That is an astonishing video. Thank you for pointing it out! This system anticipates the next 25 years of work in visualization. (To give one example, Spotfire seems just a footnote to PRIM-9). And Tukey’s summary at the end shows that he learned lessons about the importance of UI and iterative development that it took everyone else decades to figure out. Thank you VERY VERY much for this fantastic Video. I am a french Engineer trying to teach concepts of Statistics by means of EDA with interactive Graphics, and to promote Tukey’s Ideas in Data This video will help me for this promotion.
{"url":"http://flowingdata.com/2008/01/01/john-tukey-and-the-beginning-of-interactive-graphics/","timestamp":"2014-04-18T23:39:37Z","content_type":null,"content_length":"19950","record_id":"<urn:uuid:fcf17e0a-386d-4785-a7f3-8209832d097e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilistic Combinatorics 21-737 Probabilistic Combinatorics This course covers the probabilistic method for combinatorics in detail and introduces randomized algorithms and the theory of random graphs. Methods covered include the second moment method, the R\"odl nibble, the Lov\'asz local lemma, correlation inequalities, martingale's and tight concentration, Janson's inequality, branching processes, coupling and the differential equations method for discrete random processes. Objects studied include the configuration model for random regular graphs, Markov chains, the phase transition in the Erd\H{o}s-R\'enyi random graph, and the Barab\'asi-Albert preferential attachment model. Course Instructor: Wean Hall 6105 Office Hours: Thursday 2:00-3:30 or by appointment A sample final: Postscript PDF Homework 2: Postscript PDF Hints for Homework 2: Postscript PDF Homework 3: Postscript PDF Hints for Homework 3: Postscript PDF Homework 4: Postscript PDF Hints for Homework 4: Postscript PDF Schedule of paper presentations: Wednesday April 24 Brian Kell, Balanced allocations. Friday, April 26 Jenny Iglesias, The probability that a random multigraph is simple. Monday, April 29 Michael Nugget, The Asymptotic Order of the k-SAT Threshold. Wednesday, May 1 Nate Ince, Asymptotically the list colouring constants are 1. Friday, May 3 Misha Lavrov, Almost all cubic graphs are Hamiltonian. Possible presentation papers: D. Achlioptas and C. Moore, The Asymptotic Order of the k-SAT Threshold, Proc. Foundations of Computer Science (FOCS) 2002. Y. Azar, A. Broder, A. Karlin, and E. Upfal, Balanced allocations, SIAM J. on Computing, 29, (2000), 180-200. T. Bohman, A. Frieze and E. Lubetzky, A note on the random greedy triangle-packing algorithm , Journal of Combinatorics, 1 (2010), 477-488. Svante Janson, The probability that a random multigraph is simple. Combin. Probab. Comput. 18 (2009), 205-225. M. Krivelevich, Bounding Ramsey numbers through large deviation inequalities, Random Structures and Algorithms 7 (1995), 145-155. A Nachmias, Y. Peres, The critical random graph, with martingales, Israel Journal of Math, 176 (2010) 29-43. A Nachmias, Y. Peres, Component sizes of the random graph outside the scaling window , Latin American Journal of Probability and Mathematical Statistics (ALEA), 3, 133-142 B. Reed and B. Sudakov, Asymptotically the list colouring constants are 1, J. Combinatorial Theory Ser. B 86 (2002), 27-37. A. Rucinski and N. Wormald, Random graph processes with degree restrictions, Combinatorics, Probability and Computing 1 (1992) 169-180. J. Spencer Asymptotic Packing via A Branching Process Random Structures and Algorithms, 7 (1995,) 167-172 J. Spencer and N. Wormald, Birth control for giants , Combinatorica 27 (2007), 587-628.
{"url":"http://www.math.cmu.edu/~tbohman/21-737/pc.html","timestamp":"2014-04-21T00:14:35Z","content_type":null,"content_length":"9612","record_id":"<urn:uuid:9f9145c2-f25f-4c86-a798-bff534d50d69>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
[racket] arity of + versus <= From: Carl Eastlund (cce at ccs.neu.edu) Date: Fri Oct 28 14:36:56 EDT 2011 On Fri, Oct 28, 2011 at 2:28 PM, Joe Marshall <jmarshall at alum.mit.edu> wrote: > On Fri, Oct 28, 2011 at 11:08 AM, Carl Eastlund <cce at ccs.neu.edu> wrote: >> You seem to be assuming that we have to pick one binary->nary for all >> binary operators. > That is the nature of `generalization'. If I have to discriminate, it isn't > general. Only if our job is to generalize binary operators as a class to n-ary operators. This thread is about generalizing <= (and a few related operators) to n-ary operators. We can do the latter without doing the >> I would choose this one for relations and the other >> one for associative operators with identities. > And you thus answer the original poster's question. > `` is there a rationale beyond historical precedent > for + and * to allow any number of arguments but, =, <=, <, >, >= to > require at least two arguments?'' > Yes. The two generalizations are different. How is that a rationale? I don't see why this kind of difference is in any way an argument against generalization. It may be a reason that the designers thought of one kind of generalization but not the other, but that's in the category of historical precedent. > I made a clumsy argument to this effect by showing that the natural > generalization > for add and multiply do not extend to relational operators. Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2011-October/048839.html","timestamp":"2014-04-16T16:10:32Z","content_type":null,"content_length":"6894","record_id":"<urn:uuid:13a2f477-2367-496f-8ad2-e5d28e5bf18b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Primelimit Karim BELABAS on Mon, 1 Feb 1999 18:30:39 +0100 (MET) [Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index] > > I don't understand what you want to achieve by artificially lowering the > > number of primes available to some functions. Could you give more details ? > If I understand Gerhard correct, some functions take primelimit not as > an advice about the number of available primes, but as a directive to > loop over these primes. > Suppose I want to have a table of primes up to 1e9 (very easy with the > current PARI), say, to use forprime() with big primes. This may > significantly slow down the functions which loop over *all* existing > primes. I don't think there's any function like that. I'll check that. The only exception I can think of is factor(x, 0), and you can use factor(x, bound) to achieve exactly what you want [the default factor will only trial divides by the first 1000 primes or so before trying more elaborate stuff]. Karim Belabas email: Karim.Belabas@math.u-psud.fr Dep. de Mathematiques, Bat. 425 Universite Paris-Sud Tel: (00 33) 1 69 15 57 48 F-91405 Orsay (France) Fax: (00 33) 1 69 15 60 19 PARI/GP Home Page: http://hasse.mathematik.tu-muenchen.de/ntsw/pari/
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-9902/msg00002.html","timestamp":"2014-04-16T10:30:22Z","content_type":null,"content_length":"5054","record_id":"<urn:uuid:a4343156-ca2f-47f0-829d-4ebfc7b3348f>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: i'm stuck i need help factoring out this> 7k^2+9k • one year ago • one year ago Best Response You've already chosen the best response. each has a common factor of \(k\) so you can "factor it out" Best Response You've already chosen the best response. so i keep getting 16 but i'm not sure if thats right Best Response You've already chosen the best response. It's not correct. When you take out the common factor k, what is left for each term? Best Response You've already chosen the best response. 7 and 9 Best Response You've already chosen the best response. remember.. \[7k^2 = 7k \times k\] Best Response You've already chosen the best response. Best Response You've already chosen the best response. so i have 63 Best Response You've already chosen the best response. You cannot multiply the terms... One example: factor x^2 + 2x In this case, x is the common factor. Take it out and group the rest of the terms, that is x^2 + 2x = x (x) + 2(x) = x ( x+2) Can you try again for your question? Best Response You've already chosen the best response. so is k(7k+9) correct Best Response You've already chosen the best response. Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50245de2e4b09c3cae9ddca7","timestamp":"2014-04-21T16:05:29Z","content_type":null,"content_length":"51669","record_id":"<urn:uuid:5bc6c1dc-ad3a-4149-a346-bf9c28024e07>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Bertrand, Joseph Born: Mar 11, 1822 AD Died: 1900 AD, at 78 years of age. 1822 - Born on the 11th of March in Paris, France. French mathematician and educator remembered for his elegant applications of differential equations to analytical mechanics, particularly in thermodynamics, and for his work on statistical probability and the theory of curves and surfaces. 1836 - He received his doctorate, he officially entered the École Polytechnique and also published his first paper on the mathematical theory of electricity. - Bertrand was a professor at the École Polytechnique and Collège de France. 1839 - He received his doctorate, he officially entered the École Polytechnique and also published his first paper on the mathematical theory of electricity. 1843 - Bertrand published the memoirs on Surfaces isothermes orthogonale. 1844 - He was appointed répétiteur d'analyse at the École Polytechnique. 1845 - He conjectured that there is at least one prime between n and 2n-2 for every n > 3. 1850 - Chebyshev proved this conjecture, which now called Bertrand's postulate. - He is famous for a paradox in the field of probability, now known as Bertrand's Paradox. 1855 - He translated Gauss's work on the theory of errors and the method of least squares into French. 1862 - He held a chair at the prestigious Collège de France. - He was also a member of the influential Académie des Sciences. 1883 - Bertrand decided to publish a review of Walras's book in the Journal des Savants. 1888 - His book Calcul des probabilitiés contains a paradox on continuous probabilities now known as Bertrand's paradox. 1900 - Joseph Louis François Bertrand died on 5th of April in Paris, France. Page last updated: 10:59pm, 07th Aug '07
{"url":"http://www.s9.com/Biography/Print/Bertrand-Joseph","timestamp":"2014-04-16T07:41:23Z","content_type":null,"content_length":"4520","record_id":"<urn:uuid:d7297543-a9eb-4dfd-b501-6100fe94c850>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
IBM Claims Breakthrough In Analysis of Encrypted Data 4984873 story Posted by from the scrambled-in-the-shell dept. An anonymous reader writes "An IBM researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called 'privacy homomorphism,' or 'fully homomorphic encryption,' makes possible the deep and unlimited analysis of encrypted information — data that has been intentionally scrambled — without sacrificing writes that the solution IBM claims "might better enable a cloud computing vendor to perform computations on clients' data at their request, such as analyzing sales patterns, without exposing the original data. Other potential applications include enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records." This discussion has been archived. No new comments can be posted. • First post! (Score:5, Funny) by fenring (1582541) on Thursday June 25, 2009 @02:16PM (#28469471) Have you seen the new neighbours. I think they're homomorphic. • Yeah (Score:3, Insightful) by rodrigoandrade (713371) on Thursday June 25, 2009 @02:18PM (#28469507) "might better enable a cloud computing vendor to perform computations on clients' data at their request, such as analyzing sales patterns, without exposing the original data. Other potential applications include enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records." Right, because we've already figured out everything about cloud computing and it's a totally stable environment ready to be deployed in every company around the globe. Time to take it to the next □ But what if it took... a TRILLION times longer? (Score:3, Insightful) by spun (1352) Yeah, you can perform calculations on encrypted data without unencrypting it. But it's just a LITTLE slow. The first step is just showing it can be done, but it's a very long way from useful. ☆ Re:But what if it took... a TRILLION times longer? (Score:4, Informative) by SiliconEntity (448450) on Thursday June 25, 2009 @05:54PM (#28473133) I read the paper and my guess is that a TRILLION is actually an understatement. It looks to me like it might be almost INFINITELY slower. In other words, completely impractical and only of theoretical value. However, now that the first step has been taken, it's possible that someone will come up with an improvement that makes the idea practical someday. ○ by js_sebastian (946118) Actually, the presentation (http://www.fields.utoronto.ca/audio/08-09/crypto/gentry/index.html) claims that evaluating one logic gate takes in the order of k to the 7th operations, where k is the size of the key. For 128-bit keys that's around 10 to the 15th power. Which is most definitely not infinitely slower (whatever this "informative" sentence is supposed to mean), but also not exactly practical. • No More Privacy (Score:5, Insightful) by basementman (1475159) on Thursday June 25, 2009 @02:22PM (#28469563) Homepage "perform computations on clients' data at their request, such as analyzing sales patterns" Or without their request. □ Since it's close to being slashdotted... (Score:5, Informative) by Magic5Ball (188725) on Thursday June 25, 2009 @02:25PM (#28469615) IBM researcher solves longstanding cryptographic challenge Posted on 25 June 2009. An IBM researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called "privacy homomorphism," or "fully homomorphic encryption," makes possible the deep and unlimited analysis of encrypted information - data that has been intentionally scrambled - without sacrificing confidentiality. IBM's solution, formulated by IBM Researcher Craig Gentry, uses a mathematical object called an "ideal lattice," and allows people to fully interact with encrypted data in ways previously thought impossible. With the breakthrough, computer vendors storing the confidential, electronic data of others will be able to fully analyze data on their clients' behalf without expensive interaction with the client, and without seeing any of the private data. With Gentry's technique, the analysis of encrypted information can yield the same detailed results as if the original data was fully visible to all. Using the solution could help strengthen the business model of "cloud computing," where a computer vendor is entrusted to host the confidential data of others in a ubiquitous Internet presence. It might better enable a cloud computing vendor to perform computations on clients' data at their request, such as analyzing sales patterns, without exposing the original data. Other potential applications include enabling filters to identify spam, even in encrypted email, or protecting information contained in electronic medical records. The breakthrough might also one day enable computer users to retrieve information from a search engine with more confidentiality. "At IBM, as we aim to help businesses and governments operate in more intelligent ways, we are also pursuing the future of privacy and security," said Charles Lickel, vice president of Software Research at IBM. "Fully homomorphic encryption is a bit like enabling a layperson to perform flawless neurosurgery while blindfolded, and without later remembering the episode. We believe this breakthrough will enable businesses to make more informed decisions, based on more studied analysis, without compromising privacy. We also think that the lattice approach holds potential for helping to solve additional cryptography challenges in the future." Two fathers of modern encryption - Ron Rivest and Leonard Adleman - together with Michael Dertouzos, introduced and struggled with the notion of fully homomorphic encryption approximately 30 years ago. Although advances through the years offered partial solutions to this problem, a full solution that achieves all the desired properties of homomorphic encryption did not exist until now. IBM enjoys a tradition of making major cryptography breakthroughs, such as the design of the Data Encryption Standard (DES); Hash Message Authentication Code (HMAC); the first lattice-based encryption with a rigorous proof-of-security; and numerous other solutions that have helped advance Internet security. Craig Gentry conducted research on privacy homomorphism while he was a summer student at IBM Research and while working on his PhD at Stanford University. □ Re: (Score:3, Insightful) by megamerican (1073936) Or without their request. The NSA figured that out a long time ago. □ by Anonymous Coward Or without their request. If they really figured it out, then sure they can analyze without your request, but they can't decrypt the results without your key. So you still have the same privacy. BTW, this is the entire point of this process. ☆ by GargamelSpaceman (992546) I wonder if this would make possible the following: Here is Encrypt(www.slashdot.org), please compute Encrypt(DNSLookup(www.slashdot.org)) so that I can then Decrypt(Encrypt(DNSLookup(www.slashdot.org))) to produce 216.34.181.48. □ Re:No More Privacy (Score:4, Interesting) by mea37 (1201159) on Thursday June 25, 2009 @02:53PM (#28470073) TFA doesn't seem clear on this point, but what the name of the technique implies is that you can perform the operation, but neither the inputs nor the outputs are ever decrypted. So if you can't see the question, and you can't see the answer, then why would you perform the operation other than at the request of someone who can (i.e. the client)? That said, I'd like to know a lot more about this before I'd want to trust it. For this to work, I'd think a lot of the data's structure must be preserved. Maybe you can't detect that structure from the encrypted data, but you can probably infer a lot about it by analyzing the algorithms your clients ask you to apply (especially if they're your algorithms - i.e. software-as-a-service type stuff). I'm impressed if this doesn't create vulnerabilities. Also I suspect this is fundamentally divorced from public key techniques. If I'm able to encrypt values of my choosing and perform operations of my choosing on encrypted values, I'm pretty sure I can work backward to extract the cleartext from the encrypted data the client provides... ☆ here's why this is important. (Score:3, Informative) by goombah99 (560566) TFA doesn't seem clear on this point, but what the name of the technique implies is that you can perform the operation, but neither the inputs nor the outputs are ever decrypted. So if you can't see the question, and you can't see the answer, then why would you perform the operation other than at the request of someone who can (i.e. the client)? Example, I want the total sales figures for all the left handed employees. I cobble together the appropriate SQL processing request send it to my cloud server which rummages throught the data base summing up the figures for some subset of the fields. It sends me back just the sum, encypted. It never knows which employees it was selecting nor any of their sales figures or even the sum. It just has the encrypted result that it sends to me all processed. otherwise I'd have to pull every encrypted record of ○ by TheLink (130905) Why would you encrypt ballots? You should allow observers and party representatives to watch the counting of the votes. Requirement #0 of democratic elections: elections do not just have to be fair, they have to be seen as fair. Electronic voting systems fail that requirement. You can have simple scalable solutions like paper based voting that are easily understandable (esp on how easy and hard it is to cheat) and thus satisfy requirement #0. So it makes no sense to me to use electronic voting systems unless you □ Re:No More Privacy (Score:5, Informative) by John Hasler (414242) on Thursday June 25, 2009 @02:56PM (#28470133) Homepage Everything remains encrypted throughout the process, including the output. Only the client can read the results. That is the point. □ by Mashiara (5631) TFA is skimp on this but after bit of Googling around I understand a little more, see also http://en.wikipedia.org/wiki/Homomorphic_encryption [wikipedia.org]. The point being that those who provide the encrypted data must encrypt it in a special way to allow the homomorphic properties to be taken advantage of. ☆ by caramelcarrot (778148) Also, as someone points out below, http://science.slashdot.org/comments.pl?sid=1282009&cid=28470091 [slashdot.org], you don't use the same algorithm as before - but you instead "encrypt" the algorithm so it's working in the same space as the encrypted data. I'm sort of imagining some sort of encrypted virtual machine. Otherwise some of the flaws being talked about would be an issue. □ Re:No, misleading headline (Score:3, Informative) by Taxman415a (863020) No, the Slashdot headline is, as usual, misleading. The article didn't really help explain the distinction either. This breakthrough doesn't help anybody break otherwise secure, non homomorphic cryptosystems and suddenly make them insecure. What the researcher did was be the first to create a fully homomorphic cryptosystem that allows the types of things described in the article, while still keeping certain desired information secure. This Wikipedia article [wikipedia.org] gives a much better description of the issue, and □ by Chris Kamel (813292) Doesn't matter, you'll need to decrypt the output anyway so the analyzer won't be able to benefit from the analysis result, only the client can: http://en.wikipedia.org/wiki/Homomorphic_encryption [wikipedia.org] Using such a scheme, one could homomorphically evaluate any circuit, effectively allowing the construction of programs which may be run on encryptions of their inputs to produce an encryption of their output. Since such a program never decrypts its input, it could be run by an untrusted party without revealing it □ by GargamelSpaceman (992546) But the output is encrypted. So basically you give them your sales data ( encrypted ) and they compute the results ( encrypted ). They can't understand the results they produce. Only you can decrypt the results and make use of them. • Fully homomorphic encryption using ideal lattices (Score:5, Informative) by grshutt (985889) on Thursday June 25, 2009 @02:22PM (#28469575) Homepage The abstract for Gentry's article can be found at: http://doi.acm.org/10.1145/1536414.1536440 [acm.org] • If they can analyze the data... (Score:2, Insightful) by Lord Juan (1280214) then that form of encryption is useless for highly sensitive information. It's as simple as that. □ BAD summary (Score:5, Informative) by spun (1352) <loverevolutionary@@@yahoo...com> on Thursday June 25, 2009 @02:43PM (#28469889) Journal You can not analyze the data. You can perform calculations on it without knowing what it is. So, for instance, you could encrypt all your tax info, send it to a company that processes the encrypted data without decrypting it, and sends you back your encrypted tax return, without ever having seen any of your financial detail. ☆ by master_p (608214) How is it possible for them to calculate the tax return if they do not analyze the data? ○ by spun (1352) That's the breakthrough. They add (as a made up example) E47F109A and FA619B05, coming up with 191AA7FC. They have no idea that, when decrypted with your key, those values are 51, 49, and 100 respectively. How is that possible? You'll have to read the paper, because I can't explain it :) □ Re: (Score:2, Interesting) by Isarian (929683) So I may have missed something from the article, but are all forms of public-key encryption vulnerable or just certain algorithms? ☆ by Magic5Ball (188725) This isn't a vulnerability with existing encryption systems, it's a scheme for a different way to structure and encrypt the data to explicitly allow calculations on the data without exposing the original values. □ by Chris Mattern (191822) then that form of encryption is useless for highly sensitive information. Unless the analysis is also encrypted. □ Re: (Score:2, Interesting) by Anonymous Coward They can perform computations on the data, but the answer is still encrypted. □ by John Hasler (414242) All the data and all the results remain encrypted so that only the client can read the results. That is the point. Read about homomorphic encryption here [wikipedia.org] • Wait, what? (Score:2, Interesting) Okay, maybe I'm a noob when it comes to encryption, but I was under the impression that if you were able to read the encrypted email, you were probably able to read the encrypted recipient address too. Is there something I'm missing here? □ Re:Wait, what? (Score:5, Informative) by moogied (1175879) on Thursday June 25, 2009 @02:37PM (#28469809) Yes, yes you are. The point is not to read the content, but to enable a computer to analyze the content in such a way that they can deduce statistics and patterns from it. FTFA: computer vendors storing the confidential, electronic data of others will be able to fully analyze data on their clients' behalf without expensive interaction with the client, and without seeing any of the private data I don't need to know that you love apples to know you definitely love the same thing as 14 other people. Lets assume that we have 20 encrypted sets of data. Lets also assume the 20 sets say basically the same thing but because of the encyrption method look nothing a like from the raw data perspective. If you go ahead and find a way to analyze the encryption enough to know that the 20 emails all contain a similar message, but not enough to actually know what the message is... well then! You could go ahead and store all of ebay's customer information and do massive amounts of data crunching for them, without ever actually seeing any data. This is a huge problem in IT, where admins need access to the databases in order to see how the data is being stored, how the tables are working, etc etc.. but can't actually have access to the database because then they might see customer information. So you either let joe-bob admin in there and let him see all the data, or you don't. Now you can let the admin in there, they can determine anything they might want to know, but they never actually see any exact data. No, I don't know anything about the math portion.. but thats basically what they are trying to say in the article. I think. :) ☆ by geminidomino (614729) * Yes, yes you are. The point is not to read the content, but to enable a computer to analyze the content in such a way that they can deduce statistics and patterns from it. I'm not crypto-geek, but aren't patterns generally the bane of encryption? ☆ by Cyberax (705495) Implausible. Changing just one bit results in an 'avalanche effect' in good ciphers, so quite a lot of bits will be changed. You won't be able to derive any useful information from that. Fair enough, but how is that better than just "anonymizing" data from a database through a one-way hash and then removing all directly identifiable info (client ID, etc)? ☆ by e-scetic (1003976) If I encrypt my data, and I like apples, and I can now use this new technique to determine that 20 other people like apples too, don't I have an essential piece of information I can use to decrypt the encrypted data of those 20 other people? ○ by John Hasler (414242) Homomorphic encryption does not give you any such ability. Okay... So what if I like apples. And I have a username that starts with S. Now we've already established that I can see how many other people like apples. Can I see how many other people like apples that have usernames that start with S? And then can I see how many other people like apples, and have usernames that start with 'Sp'? I'm sure you see where I'm going with this. I may just be a cynical bastard with a math education insufficient to understand the technique by which this works, but it soun ★ by mhall119 (1035984) You can only see if 20 other people like apples if that plaintext data was encrypted with the same key as the plaintext data that says you like apples. Suppose Coca-Cola and Pepsi Cola both use the same Market Research firm, which we'll call StatisticsInc. Now, companies are very jealous of market insight data, most will not work with a firm that also works with a competitor, lest someone get bribed into sharing trade secrets. What this allows if for Coca-Cola to sent a bunch of demographic data to Statist ☆ by DragonWriter (970822) Now you can let the admin in there, they can determine anything they might want to know, but they never actually see any exact data. If they can determine "anything they might want to know" about the data, that is exactly equivalent to having full access to the data. So if that's what this offers, for a 12 order of magnitude performance hit, I'm not impressed. • from the horses mouth (Score:5, Informative) by Anonymous Coward on Thursday June 25, 2009 @02:31PM (#28469701) Just FYI this site is whole sale cut and paste ripping IBM press off. □ Re: (Score:2, Informative) by NoCowardsHere (1278858) Uhh... I'm not sure how to break this to you, but WHOLE POINT of a PRESS RELEASE is that it gets sent out to the press, in the hopes that websites and newspapers will reprint it. That's why IBM published it in the first place. So, yeah, it's not plagiarism, sorry. ☆ by RegularFry (137639) Yeah, but it's kind of nice to hope that the news vendor will add some of their own analysis rather than simply regurgitating a press release. Foolishly optimistic, in most cases, but nice nonetheless. • by 0xABADC0DA (867955) I bet multi-modal reflection sorting can determine what the confidential info is. • Wikipedia to the rescue (Score:5, Informative) by Dr. Manhattan (29720) <.moc.liamg. .ta. .171rorecros.> on Thursday June 25, 2009 @02:41PM (#28469871) Homepage With fully homomorphic encryption [wikipedia.org], you can perform operations on the encrypted data, in encrypted form, that produces encrypted output. Sort of like doing a database query on encrypted data, that produces an encrypted result. So you could store your data somewhere in encrypted form, ask the host to perform some operations using their CPU cycles, and send you the result. You decrypt the result yourself, the host never sees unencrypted data at any point. Cool, but I'm half-convinced that holes will be found. The first time a new encryption scheme is put to the test, it usually fails. Still, hopefully, it'll lead to a truly secure scheme. □ Re:Wikipedia to the rescue (Score:5, Insightful) by bobdehnhardt (18286) on Thursday June 25, 2009 @02:46PM (#28469953) Holes are always found - no method is 100% foolproof. The question is will the holes be usable? If the level of effort to exploit the holes is high enough, we may not see them exploited for some time. But the holes are there, and they will be found. ☆ by wren337 (182018) Holes are always found - no method is 100% foolproof. http://en.wikipedia.org/wiki/One-time_pad [wikipedia.org] ○ by Anti_Climax (447121) While I love the elegance of good OTP encryption, it is only as good as the security during the key exchange which is not 100% foolproof. ■ by Abcd1234 (188840) Nevertheless, OTP itself *is* foolproof. Key exchange is a whole other ball of wax. □ Re: (Score:2, Funny) by jgbishop (861610) Of course holes will be found. It's made out of a lattice! • f(x) = x □ by SUB7IME (604466) Yes, you may, until 1==0. □ Re:Can I run this homomorphism on your data? (Score:4, Informative) by Simetrical (1047518) <Simetrical+sd@gmail.com> on Thursday June 25, 2009 @09:33PM (#28476011) Homepage f(x) = x No. The operations you get are addition and multiplication, that's it. Given E(x) and E(y), you can compute E(x + y) or E(xy), nothing else. And you do this without ever learning x or y. RTFWA [wikipedia.org]. The reason for the terminology is that the encryption function E is a ring homomorphism [wikipedia.org] between plaintext and ciphertext. Some operation of addition is defined on both plaintext and ciphertext such that if x and y are plaintext, f(x + y) = f(x) + f(y). (The "+" on the left is addition of plaintext, the "+" on the right is addition of ciphertext: two totally different operations.) Multiplication is similar. You don't get to apply arbitrary homomorphisms to the data, it's the (predetermined) encryption function that's the homomorphism. Actually, I don't see any mention of subtraction -- so maybe it's really a semiring homomorphism. With an actual ring homomorphism you'd also have f(x - y) = f(x) - f(y), and some 0 element with f(0) = 0. And maybe f(1) = 1, depending on definition. • simple explanation (Score:5, Informative) by Anonymous Coward on Thursday June 25, 2009 @02:54PM (#28470091) OK, it looks like a lot of people are missing the point. What Gentry figured out was a scheme for carrying out arbitrary computations on encrypted data, producing an encrypted result. That way, you can do your computation on encrypted data in the "cloud", but only you can view the results. If E() is your encryption function, x is your data, and f() is the function you'd like to compute, homomorphic encryption gives you a function f'() such that f'(E(x)) = E(f(x)). But at no point does it actually decrypt your data. This could be huge for secure computing. □ by TypoNAM (695420) If E() is your encryption function, x is your data, and f() is the function you'd like to compute, homomorphic encryption gives you a function f'() such that f'(E(x)) = E(f(x)). But at no point does it actually decrypt your data. Got an example in C language instead? ☆ by Lunzo (1065904) Replace the = with ==. You now have it in C. Joking aside GP was talking mathematical functions, which is quite appropriate given the context - theory underpinning cryptography. □ by cenc (1310167) Perhaps I am a bit slow and stupid, but is this not like running an encrypted virtual machine or at least could be done in some sort of encrypted virtual machine? Something where the underlying hardware and OS does not know what the processes and data are at the higher level. □ by harlows_monkeys (106428) What Gentry figured out was a scheme for carrying out arbitrary computations on encrypted data, producing an encrypted result. That way, you can do your computation on encrypted data in the "cloud", but only you can view the results The other direction--letting the server do secure computation on the client, is also very interesting. Consider an MMORPG. One of the problems in MMORPG is cheat programs. These can be particularly troublesome in a PvP game. For example, there were programs for Dark Age of Camelot that would show you every enemy player in a large bubble around you, regardless of any obstacles blocking line of sight or the use of stealth abilities. The obvious solution for this is that the server should only send player posit □ by mdmkolbe (944892) Isn't there some restriction on your "f" function? For example, it might be nice to compute a diff between two encrypted files, but the resulting size of the diff could reveal a lot of information and thus make the system insecure. • At first... (Score:4, Funny) by curtix7 (1429475) on Thursday June 25, 2009 @03:00PM (#28470195) I thought I was being childish when i thought to myself "tehee homo-morphic," but after RTFA my suspicions may be justified: Two fathers of modern encryption... • Not really a threat to privacy (Score:2, Interesting) by bk2204 (310841) Basically, IBM has created a set of cryptographic algorithms that allow fully homomorphic encryption. If you don't want your data to be analyzed, all you have to do is use an algorithm that doesn't support it. You'd want to do that anyway, since you'd want to use algorithms that are already considered strong, such as RSA and AES. Although RSA is homomorphic in theory, in practice it is not, since padding is used to prevent other weaknesses. • by Muad'Dave (255648) Fully homomorphic encryption is a bit like enabling a layperson to perform flawless neurosurgery while blindfolded, and without later remembering the episode. Oh, I get it! It's like when Dr. McCoy reinstalled Spock's brain. McCoy was an idiot before, got the 1337 skillz, and then forgot it all. □ by 14erCleaner (745600) Fully homomorphic encryption is a bit like enabling a layperson to perform flawless neurosurgery while blindfolded, and without later remembering the episode. I remember the episode: Spock's Brain [memory-alpha.org]. • by tekrat (242117) I don't suppose the researcher's name was Janik? • Homomorphism (Score:5, Insightful) by NAR8789 (894504) on Thursday June 25, 2009 @03:51PM (#28470903) This article needs some clarification. In particular, a lot of the worried comments here show a lack of understanding of the word "homomorphic". Here's a very simplified example of a homomorphism. I define a function f(x) = 3x This function is a homomorphism on numbers under addition. Its image "preserves" the addition operation. What I mean more precisely is f(a) + f(b) = f(a + b) That's pretty easy to verify for the function I've given. Homomorphic encryption is interested in an encryption function f() that preserves useful computational operations. If we take my example as a very very simplified encryption then, say I have two numbers, 6, and 15, and I lack the computational power to do addtion, but I can encrypt my data with my key--3. (I'm generalizing my function to be multiplication by a key. And yes, for some reason I have the computational power to do multiplication. Humor me). I can encrypt my data, f(6) = 18 and f(15) = 45, and pass these to you, and ask you do do addtion for me. You'll do the addition, get 63, and pass this result to me, which I can then decrypt, which yields 21. Now, my encryption here is very simple and very, very weak, but if you're willing to suspend disbelief, you'll note that the information I've allowed you to handle does not reveal either my inputs or my outputs. (In fact, with the particular numbers I've chosen, you might guess that my key is 9 instead of 3, (though relying on lucky choices or constraining myself to choices which have this property make my scheme rather useless)) If you generalize this to strong encryption and more useful computational operations, you begin to see how homomorphic encryption can be useful. One should note that, no, homomorphic encryption will not be a drop-in replacement for other forms of encryption. (Sending encrypted emails with homormorphic encryption would be unwise. An attacker can modify the data (though, if my understanding is correct, only with other data encrypted with the same key)) Homomorphic encryption simply fills a need that the other forms do not serve. Hopefully you now also see how the article's use of the word "analysis" can be rather misleading. In particular, one of the earlier comments notes that it might be useful in allowing you to determine if different people's encrypted information is identical. By my understanding, homomorphic encryption would not allow this. In any case, if my explanation is not enough, here [wikipedia.org]'s the wikipedia article. □ Re:Homomorphism (Score:4, Informative) by Simetrical (1047518) <Simetrical+sd@gmail.com> on Thursday June 25, 2009 @09:50PM (#28476175) Homepage Here's a very simplified example of a homomorphism. I define a function f(x) = 3x This function is a homomorphism on numbers under addition. Its image "preserves" the addition operation. What I mean more precisely is f(a) + f(b) = f(a + b) That's pretty easy to verify for the function I've given. But examples like you gave (semigroup homomorphisms) have existed for a long time. Basic RSA has that property. The key advance here is that you have a semiring homomorphism, where it preserves two operations, one of which distributes over the other. Like multiplication and addition, or bitwise and and xor. (For those who don't follow: x*(y + z) = x*y + x*z, x & (y ^ z) = (x & y) ^ (x & z). If you don't believe the second identity, try all possibilities.) An example of a semiring homomorphism on the reals is f(x) = -x. Then f(x + y) = -(x + y) = f(x) + f(y), and f(xy) = (-x)(-y) = xy = f(x)f(y). (Unless you believe in Time Cube.) It seems distributivity is enough to do complicated calculations. You could simulate and and xor gates, I guess. Then you could get ~x = x ^ -1, x | y = ~(~x & ~y), etc.: all possible binary operations. That's enough to build a virtual computer right there, all operating on encrypted data. Of course, the one running the code would be able to figure out exactly what algorithm you're using. So it's not perfect. But it's pretty cool regardless. □ by master_p (608214) Thank you for the explanation. Here is a shorter explanation: using homomorphic encryption, mathematical operations on encrypted data can produce results which are themselves encrypted by the same encryption code. • What are the operations for which this is homomorphic? It has to be quite limited. Otherwise for example, lets suppose I have an integer (encrypted of course) and I have comparison and addition/subtraction and multiply/divide. I can very easily find the encrypted values of both 0 (a-a for any a) and 1 (a/a) I can now decrypt the data with repeated additions (or subtractions) of 1 and equality comparisons. And, I don't see how you can prevent equality tests in the encrypted domain. You might have to calc □ by SiliconEntity (448450) What are the operations for which this is homomorphic? It has to be quite limited. Otherwise for example, lets suppose I have an integer (encrypted of course) and I have comparison and addition/subtraction and multiply/divide. I can very easily find the encrypted values of both 0 (a-a for any a) and 1 (a/a) The article neglected to mention that the underlying encryption system is randomized public key encryption. This means (A) you can easily discover encryptions of 0, encryptions of 1, and encryptions of anyt □ by Simetrical (1047518) So I don't see how the operations available can be as much as the usual operators on reals. The idea seems to make the operations map to something like & and ^, so that you can recover all logical operators and make a virtual computer using them. & and ^ on the integers may not seem as powerful as * and + on integers/floating points/etc., but you can easily encode the latter as the former. • Is the plaintext needed post-encryption? (Score:3, Informative) by rlseaman (1420667) on Thursday June 25, 2009 @05:05PM (#28472339) A lot of respondents seem to have seized on a spurious notion of what this is all about. That isn't surprising since the Slashdot article and the press release and even the abstract are rather obscure. No sign of a preprint, but the same abstract shows up for a number of colloquiums in the last couple of months. The paper is from a proceedings, so it may itself not be especially The abstract says: "We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable." The encryption and compression literature tends to use the word "scheme" where others might say algorithm or transform. "Circuits" here is a term of art (maybe arising originally from actual physical circuits, as in the Enigma machine?) "An encryption scheme that permits evaluation of arbitrary circuits" suggests only that the possessor of the private key can generate these arbitrary queries, not that anybody and their brother can scavenge the encrypted data. It isn't stated whether such a query also requires the plaintext. It would be pretty cool if one feature were to be able to discard the plaintext post-encryption. The gimmick appears to be that the arbitrary circuit can include the decryption itself (the bootstrap part). Note that this feature is far more cool (assuming it works) than all the nonsense about cloud computing. Somehow the data are *arbitrarily* available to properly encoded queries without ever being exposed - even to the CPU performing the operations. This processor could be on the same machine, on some remote server, in the cloud or across the galaxy. How cool is that? • by DragonWriter (970822) makes possible the deep and unlimited analysis of encrypted information -- data that has been intentionally scrambled -- without sacrificing confidentiality." This is nonsense: unlimited analysis being possible is the same thing as confidentiality being sacrificed. Maybe there is something significant and important here, but TFA doesn't provide a clue as to what it is. • Doh! (Score:2) by itsybitsy (149808) * I downloaded the PDF paper and it says "We omit full details due to lack of space...". Doh!!! What use is an ACM account when white papers "omit full details"? Well I suppose they don't have to kill me as they omitted the full details... that's something at least. • Ummm... (Score:2) Wouldn't this sort of analysis of the encrypted data potentially provide clues of its nature that would help in the decryption of the data? Seems to me that this new analysis method weakens ALL POSSIBLE encryption techniques... • Clarification on the technology (Score:5, Informative) by SiliconEntity (448450) on Thursday June 25, 2009 @07:47PM (#28474789) A few misconceptions continue to circulate here; let me try to shed some light. First, the encryption system is apparently not practical in its current form. Maybe improvements will occur some day to make it practical, maybe not. It is still a major theoretical breakthrough because fully homomorphic encryption had often been thought to be impossible in the past. It has been a long sought goal in cryptography and it is remarkable to see it finally achieved. So in practice nobody is going to be doing spam filtering, income tax returns, or anonymous google searches any time soon. Second, several people have gotten tripped up over an apparent weakness: if you can calculate E(X-Y) you can get an encryption of 0; if you can calculate E(X/Y) you can get an encryption of 1; and from these you could get other encryptions and potentially break the system. This idea fails for two reasons: first, it is a public-key system, so you don't need to go through all this rigamarole to get encryptions of 0, 1, or anything. In public key cryptography, anyone can encrypt data under a given key, without knowing any secrets. So it is already possible to get encryptions of known values, even without the special homomorphic properties. Second, in order for public key systems to be secure, they need to have a randomization property. In randomized encryption, there are multiple ciphertext values that encrypt the same plaintext. Basically, the encryption algorithm takes both the plaintext and a random value, and produces the ciphertext. Each different possible random value causes the same plaintext to go to a different ciphertext. The decryption algorithm nevertheless can take any of these different ciphertext values and produce the same plaintext. This may be confusing because the most well known public key encryption system, RSA is not randomized. At the time it was invented, this aspect was not well understood. Shortly afterwards it became clear how important randomization is. Other encryption systems like ElGamal do use randomization, and RSA was adapted to allow randomization via what is called a "random padding" layer, known by the technical name PKCS-1. This adds the randomness which allows RSA to be used securely. One other point is that people are getting hung up about what "fully" homomorphic encryption covers. Exactly what operations can you do? I think the best way to think of it is to go down to the binary level. We know that in our computers, at the lowest level everything is 1's and 0's. These get combined with elementary logical operations like AND, OR, NOT, XOR, and so on. Using these primitive operations, all the complexity of modern programs can be built up. In the case of the homomorphic encryption, it is probably best to think of the values being encrypted in binary form, as encryptions of 1's and 0's. Keep in mind the point above about randomized encryption: all the encryptions of 1 look different, as do all the encryptions of 0. You can't tell whether a given value encrypts a 1 or a 0. Given these encrypted values, you can compute AND, OR, XOR, NOT and so on with these values, and get new encrypted values as the answers. You don't know the value of the outputs, they are encrypted. Only the holder of the private key, who originally encrypted the data, could decrypt the output. But you can continue to work with these output values, do more calculations with them, and so on. Let me give an example of how you could do an equality comparison. Suppose you have two encrypted values and want to determine if they are the same. Recall that we are working in binary, so you actually have two sequences of encrypted bits; some are encrypted 1's and some are encrypted 0's, but you can't tell which. So the first thing you compute is the XOR of corresponding bits in the two values: XOR the 1st bits of each value; XOR the 2nd bits of each value, and so on. Now if the values are equal, the results are all encryptions of 0's. If the values are different, some of the results will be encryptions of 1's. But aga Related Links Top of the: day, week, month.
{"url":"http://science.slashdot.org/story/09/06/25/1736230/ibm-claims-breakthrough-in-analysis-of-encrypted-data","timestamp":"2014-04-20T08:56:07Z","content_type":null,"content_length":"306501","record_id":"<urn:uuid:4537a015-8057-45cd-ba3a-05a898409224>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Subaru Legacy Forums - View Single Post - Gear Ratios (USDM Turbo Models, Also JDM Spec B & FXT ST Nearly all tire manufacturers list tires in diameter. This is true, but they also list static loaded radius, and revolutions per mile. Radius is different pending tire pressure, where you measure, and other factors. Diameter measurements also vary with pressure and where you measure. There is a given distance of tread on the tire and one rotation it covers that distance. Plain and simple. This is only true for a solid tire with a fixed circumference. Not true for an inflatable tire. A 215/45x17 tire inflated to 10psi will travel a shorter distance in one rotation than the same tire inflated to 40psi, especially when the tire is under a load (i.e.mounted on a vehicle). At 10psi the tire has a smaller rolling radius (and smaller effective circumference) than at 40psi. Plain and If you want to measure the distance an unloaded tire travels, then use the diameter in your calculations. If you want to measure the distance a loaded tire will travel, then use the static loaded radius in your calculations. Since the tires, when mounted on the car, are in a loaded condition, logic would dictate that the static loaded radius will yield a more accurate calculation.
{"url":"http://legacygt.com/forums/showpost.php?p=18765&postcount=26","timestamp":"2014-04-17T06:59:04Z","content_type":null,"content_length":"18833","record_id":"<urn:uuid:a899eeee-8e0d-4d18-a3fe-a0a6c48761ca>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: MathML-Presentation questions From: Robert Miner <RobertM@dessci.com> Date: Tue, 24 Apr 2001 10:09:23 -0500 Message-Id: <200104241509.KAA25644@wisdom.geomtech.com> To: habelg@micro-intel.com CC: www-math@w3.org, www-math@w3.org Dear Genevieve, > The company I work for uses MathML (and XML) for encoding educational > contents. We also have the goal of translating our XML contents in many > differents cultural domains. We have encountered some problems that we > don't know how to resolve in MathML. Boy, you came up with a tough list. Let me try to give some answers, though in many cases, I'm not sure there are any good answers. > 1. How do you propose to markup the association of units to quantities? > (Ex.: $5, 100 miles/h, 37.5 ºC) > i. Which markup is appropriate? > ii. How do you specify the spacing before or after the quantity? > iii.How do you manage $5 which is translated 5$ in french-canadian, > for example? A similar question was asked privately a while ago. I am posting a digest of the answers to the list separately, so it will show up in the subject line. > 2. mn. How do you propose to markup periodic numbers? There is no special way to do this in MathML. All I can think of really is to mark it up to look right. For example: > 3. mn. How do you propose to markup the ellipsis expressing the infinite > non-repeating decimals of a number? > (Ex.: 3.14159...) > Is it a postfix operator? We are looking for a form that does not > transgress the "number" meaning. The idea of a postfix operator is interesting. That would probably work, though you might have to tweak the spacing. However, I think a more straightforward approach is just to include it in the MN > 4. What is the best way to markup words in mathematical expressions that > represent units, identifiers, operators,...? > With mi, mo or mtext? mtext is not rendered in a math font. > Ex.: > i. Area = 1/2 (base x height) > ??? > <math> > <mi>Area</mi> > <mo>=</mo> > <mfrac> > <mn>1</mn> > <mn>2</mn> > </mfrac> > <mo>&it;</mo> > <mfenced> > <mrow> > <mi>base</mi> > <mo>&Cross;</mo> > <mi>height</mi> > </mrow> > </mfenced> > </math> I would do it just as you have with <mi>'s. In your example, base and height are really variable names, and thus should be mi's. Since they both have multiple characters, they should be typeset in a normal font > ii. 150 m of fencing = 3 widths and 2 lengths > 150 = 3w + 2L > ... > ??? > <math> > <mtable groupalign="right center left"> > <mtr><mtd> > <maligngroup/><mn>150</mn> > <m?>m of fencing</m?> > <maligngroup/><mo>=</mo> > <maligngroup/><mn>3</mn> > <mo>&it;</mo> > <mi>widths</mi> > <mo>and</mo> > <maligngroup/><mn>2</mn> > <mo>&it;</mo> > <mi>lengths</mi> > </mtd></mtr> > ... > </mtable> > </math> This works for me too. I guess I might put the "and" in an <mtext>, since it is sort of in the gray zone between a genuine mathematical "and" operator and the ordinary word. Also, the typesetting for an <mtext> is likely to be better. > 5. mo: Invisible operators. Will you define an <mo>&InvisiblePlus;</mo> > to markup a fraction like 1 1/2 (one and an half)? > If not, how do you propose to markup the implied plus in this > fraction? You wouldn't ask it you knew the battle we've had over InvisibleTimes... I think the only way to mark this up for presentation is a <mn> next to a <mfrac>. Not very satisfactory, I > 6. When explaining mathematics, it is often useful to present the > different notations used around the world. Is there an attribute or a > markup to specify the cultural domain notation we wish to render? > Ex.<p>The decimal system of numbers, based on 10, is used in most > countries in the world. The notation for decimal points is not the same > in all countries, however. In Canada and the USA, the point is placed on > the line. In the United Kingdom, it is placed above the line. In France, > a comma is used on the line instead of a point.</p> > p>As a student in France, you would convert the decimal number > 7,12 > to a fraction.</p> > <p>In the UK, you would convert the decimal number > 7&middot;12 > to a fraction.</p> No, not really. As you obviously know, there is a lot of stuff to do in this area, and MML 2 didn't take it on. For the time being, using a private attribute or maybe the 'class' attribute on the <math> tag is probably the best work around. > 7. math. In the MathML2.0, there is a section for all markups, but I > didn't find the one which explains the root of mathematical expressions: > the math markup. Is there a specific documentation that could tell me > more about it (attributes, etc.)? Yeah, the <math> element is documented separately in Chapter 7 of the spec. Chapter 7 deals with the issues of embedding MathML in other markup, so the <math> element is described there. Robert Miner RobertM@dessci.com MathML 2.0 Specification Co-editor 651-223-2883 Design Science, Inc. "How Science Communicates" www.dessci.com Received on Tuesday, 24 April 2001 11:09:57 GMT This archive was generated by hypermail 2.2.0+W3C-0.50 : Saturday, 20 February 2010 06:12:50 GMT
{"url":"http://lists.w3.org/Archives/Public/www-math/2001Apr/0012.html","timestamp":"2014-04-16T19:28:46Z","content_type":null,"content_length":"14575","record_id":"<urn:uuid:4d01b3dc-5650-4228-b271-0e75ffc8e52d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Electric Force January 19th 2009, 10:37 AM #1 Electric Force Two tiny conducting balls of identical mass m and identical charge q hang from non-conducting threads of length L. Assume that $\theta$ is so small that $\tan{\theta}$ can be replaced by its approximate equal, $\sin{\theta}$. (a) Show that $x = \left(\frac{q^2L}{2\pi\epsilon_0 mg}\right)^{\frac{1}{3}}$ gives the equilibrium separation x of the balls. (b) If L = 120cm, m = 10g, and x =5.0, what is |q|? forces acting on a single ball ... T = tension in the string mg = weight F = electrostatic force let $\theta$ = angle the string makes with the vertical equilibrium ... $T\cos{\theta} = mg$ $T = \frac{mg}{\cos{\theta}}$ $T\sin{\theta} = F$ $mg\tan{\theta} = F$ since $\tan{\theta} \approx \sin{\theta}$ ... $mg\sin{\theta} = F$ $mg\sin{\theta} = \frac{q^2}{4\pi \epsilon_0 \cdot x^2}$ since $\sin{\theta} = \frac{x}{2L}$ ... $\frac{mgx}{2L} = \frac{q^2}{4\pi \epsilon_0 \cdot x^2}$ $x^3 = \frac{q^2 L}{2\pi \epsilon_0 mg}$ January 19th 2009, 10:52 AM #2
{"url":"http://mathhelpforum.com/advanced-applied-math/68880-electric-force.html","timestamp":"2014-04-18T10:09:38Z","content_type":null,"content_length":"36995","record_id":"<urn:uuid:850ce17c-41a3-46e5-adb5-30ef7763da45>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring specialization in species interaction networks Network analyses of plant-animal interactions hold valuable biological information. They are often used to quantify the degree of specialization between partners, but usually based on qualitative indices such as 'connectance' or number of links. These measures ignore interaction frequencies or sampling intensity, and strongly depend on network size. Here we introduce two quantitative indices using interaction frequencies to describe the degree of specialization, based on information theory. The first measure (d') describes the degree of interaction specialization at the species level, while the second measure (H[2]') characterizes the degree of specialization or partitioning among two parties in the entire network. Both indices are mathematically related and derived from Shannon entropy. The species-level index d' can be used to analyze variation within networks, while H[2]' as a network-level index is useful for comparisons across different interaction webs. Analyses of two published pollinator networks identified differences and features that have not been detected with previous approaches. For instance, plants and pollinators within a network differed in their average degree of specialization (weighted mean d'), and the correlation between specialization of pollinators and their relative abundance also differed between the webs. Rarefied sampling effort in both networks and null model simulations suggest that H[2]' is not affected by network size or sampling intensity. Quantitative analyses reflect properties of interaction networks more appropriately than previous qualitative attempts, and are robust against variation in sampling intensity, network size and symmetry. These measures will improve our understanding of patterns of specialization within and across networks from a broad spectrum of biological interactions. The degree of specialization of plants or animals has been studied and debated extensively, and a continuum from complete specialization to full generalization can be found in various systems [1-6]. In general, two levels of specialization measures may be distinguished: first, the characterization of focal species and, second, the degree of specialization of an entire interaction network, representing an assemblage of species and their interaction partners (e.g. food webs, mutualistic networks, predator-prey relationships). When interactions are considered as ecological niche, the first level describes the niche breadth of a species and the second level the degree of niche partitioning across species. While the species level is more straightforward in its biological interpretation, analyses at the network level can be useful for comparisons across different types of networks. Such analyses have been performed to compare plant-pollinator webs versus plant-seed disperser webs [4,5], different plant-pollinator networks along geographic gradients [1,7,8], or food webs of variable size [9,10]. Entire network analyses are also used to study patterns on a community level such as coevolutionary adaptations [3], ecosystem stability or resilience [11-14]. Quantifying specialization at the species level Specialization or generalization of interactions are most commonly characterized as the number of partners (or 'links'), e.g. the number of pollinator species visiting a flowering plant species or the number of food plant families a herbivore feeds upon. In this qualitative approach, interactions between a consumer and a resource species are only scored in a binary way as 'present' or 'absent', ignoring any distinction between strong interactions and weak or occasional ones. For example, binary representation of interactions do not distinguish a scenario where 99% of the individuals of a herbivore species feed on a single plant species only, but occasionally an individual is found on another plant, from a different scenario where a herbivore regularly feeds on both food plants. The problem is analogous to the measurement of biodiversity either as a crude species richness versus as a more elaborate diversity index including relative abundances [15]. Several approaches have thus been used to directly include variation in interaction frequencies (i.e., their evenness) in characterizing the diversity of partners, e.g. Simpson's diversity index for pollinators [16,17] or Lloyd's index for host specificity [18]. Alternatively, other studies indirectly controlled for abundance or sampling intensity using rarefaction methods [13,19]. Correspondingly, Bersier and coworkers [20] have suggested to quantify the diversity of biomass flows in food webs using a Shannon diversity measure. Niche breadth theory provides several additional indices that include some measure of resource frequency or resource use intensity [21], which can be viewed in analogy to 'partner diversity' in the context of association networks. However, Hurlbert [22] emphasized that not only proportional utilization, but also the proportional availability of each niche should be taken into account. A species that uses all niches in the same proportion as their availability in the environment should be considered more opportunistic than a species that uses rare resources disproportionately more. If variation in resource availability is large, diversity-based measures that ignore this availability may be highly misleading [22,23]. Several niche breadth measures thus combine proportional resource utilization with proportional resource availability [22-24]. These concepts have been rarely applied in the context of species interaction networks, e.g. plant-pollinator webs where binary data are more common than quantitative webs. Quantifying specialization at the community level The measurement used most commonly to characterize community-wide specialization is the 'connectance' index (C) [1,4,8-10,25-27]. C is defined as the proportion of the actually observed interactions to all possible interactions. Consider a contingency table showing the association between two parties, with r rows (e.g., plant species) and c columns (e.g., pollinators). Connectance is defined as C = I/(r·c), with I being the total number of non-zero elements in the matrix. Therefore, like the number of partners or links (L) described above, C uses only binary information and ignores interaction strength. C is directly related to the mean number of links ( ) of plant species or pollinator species as C = [plants]/c = [poll]/r. This measure, , has also been used to compare networks [1,3,7,8,28]. Recently, it has been suggested to use instead of C to characterize networks [29]. However, note that comparisons across networks of different size (number of species) are problematic, since , unlike C, is not scaled according to the number of available partners (see also [2,10]). in a small network may represent a larger proportion of available partners compared to the same value of in a large network. Analyses based on binary data – both at the species and the community level – have obvious shortcomings, since they are highly dependent on sampling effort, decisions which species to include or not, and the size of investigated networks. Several authors thus emphasized the need to move beyond binary representations of interactions to quantitative measures involving some measure of interaction strength [4,20,27,29-32]. A way to at least partly overcome these deficiencies is to cut off all rare species or weak interactions below a frequency threshold [3,9,33,34] or to control for sampling effort in null models [7,8,13,19,25,35]. However, for interaction webs where a more detailed information is available, simplification to binary data as in C or remains unsatisfactory. Conveniently, the observed interaction frequency may represent a meaningful surrogate for interaction strength, at least in pollination and seed-dispersal systems as shown by Vázquez et al. [30] (see also [16]). Incorporating interaction frequency or even a direct measure of interaction strength in a network measure of specialization would thus provide an important progress frequently called for. A severe additional problem of connectance is that its lower and upper constraints are not scale-invariant [25], which limits its use for comparisons across networks. The minimum possible value (C [min]) to maintain at least one link per species declines in a hyperbolic function with the number of interacting species, since C[min ]= max(r, c)/(r·c), and an upper limit (C[max]) may be constrained by, or a function of, total sampling effort. Across networks, C decays strongly with network size, which has been debated in detail in the context of food web analysis [9,10,26,27,36,37]. The strong relationship between C and network size generates a problem for disentangling any biologically meaningful effect from this mathematically inherent scale dependence. For instance, network comparisons may focus on residual variation in C after an average effect of network size has been controlled for [1,4], or C could be rescaled to account for this size effect (see [25,36]). For natural networks of similar size, the range of actual C values is typically very narrow [4], thus other structural forces may be poorly detectable. The objective of this paper is to develop and discuss specialization measures that are based on frequency data and thus account for sampling intensity, and that overcome the problem of scale dependence. We then test these approaches by evaluating the effect of sampling effort and scale dependence on a published natural pollination network, and on randomly generated associations as a null model. We differentiate between species-level measures of specialization, useful to investigate variability among species within a web, and a single network-wide measure that can be used for comparisons across networks. Patterns in two pollinator networks Two selected plant-pollinator networks (British meadows studied by Memmott [32], Argentinean forests studied by Vázquez and Simberloff [33]) differ markedly in their degree of specialization when quantitative analyses are applied. The qualitative network index, connectance, is similar in both interaction webs (British web: C = 0.15, Argentinean web: C = 0.13). However, frequencies of pollinator visits are much more evenly distributed in the British community than in the Argentinean example. In the British web, the interaction between a dipteran species and Leontodon hispidus was the most frequent one, representing 6% of the total 2183 interactions observed. In the Argentinean network, visits of Aristotelia chilensis by a colletid bee species represented 20% of the 5285 interactions alone. Interactions between the top five plant and top five pollinator species made up 44% of the interactions in the British web, but 74% in the Argentinean web. This difference in the heterogeneity of interaction frequencies is not evident in measures based on binary information such as number of links (L) or connectance (C). In contrast, the degree of specialization shown by the frequency-based index H[2]' (standardized two-dimensional Shannon entropy, see Methods: Network-level index) is much lower in the British community (H[2]' = 0.24) compared to the Argentinean community (H[2]' = 0.63). The variation of species-level specialization measures (standardized Kullback-Leibler distance, d') holds valuable information for the structural properties of a network (see Methods: Species-level index). The British pollination web is dominated by highly generalized pollinators (low d', both in terms of individuals as well as species), while putative specialists are represented by very few individuals and species (Fig. 1A). In contrast, most pollinators in the Argentinean web are moderately generalized to specialized, with the second highest level of specialization found in the most common species (Fig. 1B). Consequently, the weighted mean degree of specialization is much lower in the former web (<d'[poll]> = 0.16) than in the latter (<d'[poll]> = 0.54). The relationship between specialization of species i (d'[i]) and its interaction frequency (A[i]) across the pollinator species differs between the two webs. In the British web, d'[i ]and A[i ]were not correlated significantly (Spearman's r[s ]= -0.08, p = 0.46), while a highly positive correlation was found in the Argentinean web (r[s ]= 0.65, p < 0.0001). Note that designation of any specialization index to a species i that is only represented by a single individual may be critical. However, significances in the above correlations remain unaffected when pollinators with one single interaction are excluded. From the plants' point of view, the species in Memmott's web are also more generalized in terms of their pollinator spectrum (Fig. 1C) than the plants studied by Vázquez and Simberloff (Fig. 1D). The respective weighted means are <d'[plants]> = 0.27 and <d'[plants]> = 0.53. No significant correlation was found between the plants' frequency and specialization in either web (both p ≥ 0.16). Interestingly, plants were on average more specialized than pollinators in the British web (<d'[plants]> > <d'[poll.]>), but not in the Argentinean web. This distinction is not found when only the weighted mean number of links (L) are examined, since <L[plants]> is much greater than <L[poll.]> in both networks. The difference in <L> may be driven by the highly asymmetrical matrix architecture in both webs, where the number of pollinator species greatly exceeds the number of plant species. The unweighted mean is even directly linked to the matrix architecture (i.e., number of rows and columns, r and c) by a constant (connectance C), since [r ]= c·C and [c ]= r·C. In contrast, the matrix asymmetry does not affect d' (see also below, Null model patterns). Figure 1. Patterns within pollinator networks. Frequency distribution of the species-level specialization index (d') for pollinators and plants from two published networks, one from Britain [32] and one from Argentina [33]. Bars show the number of individuals in each category (label '0' defines 0.00 ≤ d' < 0.05, etc.). Bars are separated for different species, and total number of species in each category is given on top. Arrows indicate cases where bars are invisible due to low numbers of individuals. Simulation of sampling effort In order to test whether specialization estimates are dependent on sampling and scale effects, we simulated a decreased sampling intensity in both networks using rarefaction (see Methods: Simulation of sampling effort and matrix architecture). In both networks, H[2]' is robust and already very well estimated by a small fraction of the interactions sampled (Fig. 2). The coefficient of variance of H[2]' remains below 5% from about half of the total number of visits onwards in the British web and even at one-tenth of the total sampling effort of the Argentinean web. The estimation of connectance (C) is also relatively stable at least in the Argentinean web, although it shows a positive trend across sampling effort in the British web (Fig. 2). These findings suggest that network-wide measures of specialization, particularly H[2]', do not necessarily require a very large or even complete association matrix, but can also be very well estimated from a smaller representative subset as long as there is no systematic sampling bias. Figure 2. Sampling effect in pollinator networks. Rarefaction of sampling effort in a British and an Argentinean pollination web [32,33]. Two network-level measures of specialization – the frequency-based specialization index (H[2]') and the 'connectance' index (C) – are shown for networks in which the total number of interactions (m) has been reduced by randomly deleting interactions. Black dots show the effect of sampling effort for the original association matrix, gray dots the effect for a null model, i.e. five networks in which partners were randomly associated (same row and column totals as in the original matrix). Null model patterns The degree of specialization can be further characterized by comparison with a null model. The null model used here is that each species has a fixed total number of interactions (given by the observed association matrix), but interactions are assigned randomly. In the above pollinator networks, random associations yield a specialization index H[2]' that remains close to zero for almost the entire range of sampling intensity, while connectance (C) shows a positive trend over the total number of interactions (m) (Fig. 2). Therefore, H[2]' derived from real networks may typically be clearly distinguished from this null model, while the comparison of C is complicated by scale dependence and the relatively large values yielded by the null model. Simulations of artificially generated random associations (see Methods: Simulation of sampling effort and matrix architecture) confirm that the network-level specialization index H[2]' is largely unaffected by network size (Fig. 3A), network architecture (Fig. 3B) or total number of interactions (m) for a fixed matrix size (Fig. 3C). For random associations as shown here, H[2]' is usually close to zero. Connectance values (C) of random matrices show the known hyperbolic function over the number of associated species (Fig. 3A), changes with matrix asymmetry (Fig. 3B) and increase strongly with increasing m (Fig. 3C). For specialization measures at the species level, the average number of links per species ( ) increases strongly with network size, number of available partners, and m (Fig. 3). While other niche breadth measures may also show some variation across different network scales (not shown), the weighted mean Kullback-Leibler distance <d'> is poorly affected by network size, network asymmetry, and number of interactions (Fig. 3). Both H[2]' and d' may thus be appropriate for comparisons across matrices of different scale. Figure 3. Simulated random networks. Behavior of specialization measures in simulated random networks. Each point represents one matrix with random associations, based on specific row and column totals that follow a lognormal distribution. The size of squared matrices in (A) increased from 2 × 2 to 200 × 200. In (B), only the number of rows changed, while the number of columns was fixed at 20, rectangular matrices thus increased from 2 × 20 to 200 × 20. In (C), the network size was fixed at 20 × 20. The total number of interactions (m) increased with matrix size in (A), where each species had on average 20 individuals. In (B), m was fixed at 4000, resulting in a reduced interaction density for larger matrices. In (C), m increased from 20 to 4000. The index H[2]' and connectance C are specialization measures of the whole matrix and thus reciprocal, while the average number of links ( ), and weighted mean standardized Kullback-Leibler distance (< '>) are given for all columns (rows give a similar pattern). Properties of specialization measures The suggested indices, d' and H[2]', quantify the degree of specialisation of elements within an interaction network and of the entire network, respectively. While the number of links (L) and connectance (C) represent species-level and community-level measures of interactions based on binary data, respectively, d' and H[2]' represent corresponding measures for frequency-based data. The need to include information on interaction strength or interaction frequency into network analyses has been announced by various authors [4,20,27,30,31,38]. Parallel to earlier advances in diversity measures compared to species richness, quantitative network measures account for the heterogeneity in link strength rather than assigning equal weights to every link. Moreover, we have shown that d' and H[2]' are largely robust against variation in matrix size, shape, and sampling effort. In several cases, C may be strongly affected by sampling effort [25,27], while H[2]' remained largely unchanged in simulations of random associations over a range of network sizes, variable network asymmetries, and number of interactions. This scale invariance suggests that both d' and H[2]' can be used directly for comparisons across different networks, while comparisons of L and C are more problematic [1,35]. Qualitative methods like the indices suggested here also allow a more detailed analysis of interaction patterns within and across networks. Fruitful areas include comparisons of networks across different interaction types [4], biogeographical gradients [1], biodiversity and land use gradients [13], robustness of networks against extinction risks [39], asymmetries between plants and animals [38], and relationships between specialisation and abundance [35]. While a comparison of the average number of partners between plants versus animals is solely dependent on the matrix architecture ( i.e., the number of rows r versus columns c, since [plants ]= c·C and [poll ]= r·C), this limitation does not apply to d'. In the two selected pollinator webs, plants are either similarly or more specialised than pollinators in regard to weighted mean d'. This allows an scale-independent evaluation of asymmetries in the degree of specialization between partners (see also [38]). Moreover, Vázquez and Aizen [35] noted that the number of links of a species (L[i]) is strongly positively correlated with its overall frequency (A[i]) in five pollination networks including the datasets analyzed above. They argued that this apparent higher generalization of common plants and common pollinators may be largely explained by null models, calling for an improved measurement of specialization. Our results for the correlation between d'[i ]and A[i ]in two pollinator webs suggest that the relationship between specialization and abundance may be more variable, and even positive as in the Argentinean network. Some problems apply to any measure of network analyses including the proposed indices. Measures of specialization mostly ignore phylogenetic relationships or ecological similarity within an association matrix. For example, a plant species that is pollinated by multiple moth species may be unsuitably regarded as more generalized than a plant pollinated by few insect species comprising several different orders [40]. In addition, the fact that herbivores are commonly specialized on host plant families rather than species may skew network patterns if not carefully accounted for. A first approach to investigate such effects may be to compare the level of specialization after a stepwise reduction of the matrix by pooling species to higher taxonomic units, such as genera, families, and orders. For known phylogenies, more advanced techniques for analyses with a particular evolutionary focus are available [41-43]. Another deficiency may be that species or their partners are all given the same individual 'weight' in the analyses, whether they may be small bees or large bats visiting a small herb with little nectar or a mass flowering tree. Null models as in the calculation for both C and H[2].' imply that all individuals can be shifted around between resources in the same way, irrespective of their size or non-fitting parameters. The role of 'forbidden links' as constraints to network analyses has been discussed elsewhere [44,45]. Similarly, calculations of d' or other niche breadth measures are based on the implicit assumption that each species adjusts its interactions according to the availability of partners (niches), irrespective of morphological or behavioral constraints. Moreover, if data are collected from a large heterogeneous habitat or over a prolonged time period, calculations of the degree of specialization may be severely constrained by the spatiotemporal overlap or non-overlap between partners for other reasons than resource preferences, e.g. when not all species are able to reach all sites in the same way, or when some resources and consumers have asynchronous phenologies. Consequently, network analyses as suggested here will be most useful to study resource-consumer partitioning within a short time frame and limited spatial scale. For both indices d' and H[2]', we proposed above to use the total number of interactions for each species as a measure of partner availability (q[j]) and as constraint for standardization (fixed row and column totals). It may be debated whether independent measures of plant and animal abundances could be more appropriate than using interaction frequency data as such. However, despite the fact that such abundance data barely exist for most networks, note that the actual number of interactions often more suitably reflects resource availability and consumer activity than an independent measure of species abundance. For instance, a flower of one species may have a much higher nectar production than another and consequently receive a higher number of visitors, while the local abundance of the plant species does not reflect such differences in resource quality and/or quantity. Both d' and H[2]' thus focus on the actual partitioning between the interacting species. In studies where detailed knowledge or theoretical assumptions about resources (availability and quality) or consumers (activity density and consumption rate) are available or under experimental control, such data may be incorporated into the analysis (defining q[j ]and constraints) instead of interaction frequencies. The constraint of fixed row and column totals has been debated elsewhere in the context of species co-occurrence patterns, where it was found to be most appropriate in null model comparisons, although critics have argued earlier that these marginals themselves may already reflect competitive interactions ([46] and references therein). Any approach to compare networks based on fixed marginals for standardization will fail to detect potentially meaningful patterns displayed by these architectural features, namely the number of resource and consumer species and the heterogeneity of total interaction frequencies. This network architecture may already be shaped by past competitive interactions or indicate fundamental constraints, a largely unexplored hypothesis that merits additional investigations. It should also be emphasized that analyses of frequency data may be susceptible for pseudoreplication of repeated associations of the same individuals or close associations derived from a single dispersal event (e.g. a social insect colony, aggregating individuals, multiple offspring from a single egg cluster, or monospecific plant clusters). These may lead to an overestimation of specialization. To be more meaningful on a population level, frequency analyses should thus be based on spatially independent association replicates. Note that all species-wise specialization measures such as d' are sensitive to the behavior of the other species. Any systematic sampling bias (e.g. a taxonomic focus within a guild) will therefore affect the conclusions of comparisons within or across networks. In accordance with previous calls [4,20,27,30,31,38], we suggest that the explicit inclusion of frequency data reflects an important step forward in network analyses, as too many assumptions are implicit in any measure based on binary representation. Most notably, connectance and 'number of partners' imply an equal availability of all partners – an unlikely scenario. Qualitative indices are not robust against sampling effort. On the contrary, the proposed quantitative measures based on interaction frequencies explicitly account for this source of variation. Our study suggests that d' and H[2]' represent scale-independent and meaningful indices to characterize specialization on the level of single species and the entire network, respectively. These novel indices allow us to investigate patterns within and across networks that have not been detected with qualitative measures such as correlations with species frequencies, network size and asymmetries in specialization between partners. Recently, Bascompte et al. [38] showed that the incorporation of frequency data may unveil pervasive asymmetries within networks. Particularly since Vázquez et al. [30] demonstrated that interaction frequencies in plant-pollinator and plant-seed disperser systems often correlate with the magnitude of mutualistic services for the plant (although variation in pollinator effectiveness can be important, see [47]), an increased collection of frequency data and appropriate quantitative analyses would greatly benefit future network studies. Species-level index As species-level measure of 'partner diversity', we propose the Kullback-Leibler distance (or Kullback-Leibler divergence, relative entropy) in a standardized form (d'). Coming from information theory, this index quantifies the difference between two probability distributions [48]. While the standardized Hurlbert's and Smith's measure of niche breadth could be used alternatively [21,22,24], d' has some advantages in the context of networks. While all three indices regard an exclusive pairing between two species as high degree of specialization as long as interactions between the two partners are infrequent, Hurlbert's and Smith's indices show a undesired trend towards full generalization when the number of interactions between the two partners increase, although this should be considered a stronger indication of specialization (see below, Properties of alternative niche breadth measures). The interaction between two parties is commonly displayed in a r × c contingency table, with r rows representing one party such as flowering plant species, and c columns representing the other party such as pollinator species. In each cell, the frequency of interaction between plant species i and pollinator species j (or another useful measure of interaction strength) is given as a[ij], (Table 1). Instead of frequencies (a[ij]), each interaction can be assigned a proportion of the total (m) as Table 1. Elements in a species association matrix. Interaction frequencies (a[ij]) between c animal and r plant species and their respective totals (rows:A[i], columns: A[j], total elements: m). Let p'[ij ]be the proportion of the number of interactions (a[ij]) in relation to the respective row total (A[i]), and q[j ]the proportion of all interactions by partner j in relation to the total number of interactions (m). Thus, To quantify the specialization of a species i, the following index d[i ]is suggested. This d[i ]is related to Shannon diversity, similar to an index recently suggested to characterize biomass flow diversity in food webs [20]. However, an appropriate index in this context should not only consider the diversity of partners, but also their respective availability (see [22]). Consequently, the following index compares the distribution of the interactions with each partner (p'[j]) to the overall partner availability (q[j]). The Kullback-Leibler distance for species i is denoted as which can be normalized as The theoretical maximum is given by d[max ]= ln (m/A[i]), and the theoretical minimum (d[min]) is zero for the special case where all p'[ij ]= q[j]. However, a realistic d[min ]may be constrained at some value above zero given that p'[ij ]and q[j ]are calculated from discrete integer values (a[ij]). To take this into account, d[min ]is more suitably computed algorithmically as in a program available from the authors and online [49], providing all d' for a given matrix. This standardized Kullback-Leibler distance (d') ranges from 0 for the most generalized to 1.0 for the most specialized case. Thus, d' can be interpreted as deviation of the actual interaction frequencies from a null model which assumes that all partners are used in proportion to their availability. An average degree of specialization among the species of a party can be presented as a weighted mean of the standardized index, e.g. <d'[i]> for pollinators as While <d'[i]> usually differs from <d'[j]>, the weighted means of the non-standardized Kullback-Leibler distances are the same for both parties, hence <d[i]> = <d[j]>. Network-level index The following network-wide measure is based on the bipartite representation of a two mode network of interactions such as plant-animal or other resource-consumer interactions where members of each party interact with members of the other party but not among themselves (unlike many food webs). The two-dimensional Shannon entropy (termed H[2 ]in order to avoid confusion with the common one-dimensional H) is obtained as H[2 ]decreases with higher specialization. This measure is closely related to the weighted mean of the non-standardized Kullback-Leibler distance of all species, since <d[i]> = <d[j]> = H[2max ]- H[2] (see below, Relationship between d[i ]and H[2]). H[2 ]can be standardized between 0 and 1.0 for extreme specialization versus extreme generalization, respectively, when its minimum and maximum values (H[2min ]and H[2max]) are known. H[2min ]and H[2max ]can be calculated for given constraints. The constraints used here are the maintenance of the total number of interactions of each species, thus all row and column totals, A[i ]and A[j], being fixed (see also [46]). Alternative constraints may be defined depending on the knowledge of the system studied. H[2 ]reaches its theoretical maximum where each p[ij ]equals its expected value from a random interaction matrix (q[i]·q[j]), such that while its theoretical minimum (H[2min]) may be close to zero depending on the matrix architecture. Like for d[min] above, H[2max ]and H[2min ]are constrained by the fact that they are derived from integer values. A program implementing a heuristic solution to obtain H[2max ]and H[2min], and to perform the entire analysis is available from the authors or online [49]. The degree of specialization is obtained as a standardized entropy on a scale between H[2min ]and H[2max ]as Consequently, H[2]' ranges between 0 and 1.0 for extreme generalization and specialization, respectively. Comparison with random associations H[2 ]can be tested against a null model of random associations (H[2ran]). A number of random permutations of the matrix can be performed using a r × c randomization algorithm (also available at [49 ]). The probability (p-value) that the observed H[2 ]is more specialized than expected by random associations is simply given as the proportion of values obtained for H[2ran ]that are equal or larger than H[2], a common procedure in randomization statistics [25,50]. H[2ran ]is usually only slightly larger than H[2min].[]Previously, permutations of r × c contingency tables often used a different test statistics instead of H[2 ][25,51,52]: The relationship between T and H[2 ]is described by a constant, the total number of interactions (m), as T = m·ln m - m·H[2]. Consequently, both methods yield exactly the same p-values. Relationship between d[i ]and H[2] In the following we derive the relationship between the individual levels of specialization (d[i]) and the community level (H[2]). The non-standardized Kullback-Leibler distance for row i can be rewritten as The weighted mean of d[i ]for all i rows (each row weighted by q[i]) yields While the first summand in the final equation for <d[i]> equals -H[2], the remaining two summands correspond to the maximum entropy H[2max], because <d[i]> = H[2max ]-H[2]. The same calculation applies for <d[j]>, thus <d[i]> = <d[j]>. Consequently, the degree of specialization of the entire network (corresponding to the deviation of the network-wide entropy from its maximum value) equals the weighted sum of the specialization of its elements (species). Properties of alternative niche breadth measures The standardized Hurlbert's (B') and Smith's (FT) measure can be applied widely for niche breadth analysis [21,22,24]. In this context, the Kullback-Leibler distance (d) can be viewed as a modified Shannon-Wiener measure of niche breadth that accounts for niche availabilities. Like the Kullback-Leibler distance, both B' and FT compare the proportional distribution of individuals (p) to the proportional resource availability (q) (here: partner availability). For a certain species i, the two measures are in our notation: Each p'[ij ]is the proportion of the number of interactions in relation to the respective row total, and q[j ]is the proportion of all interactions by partner j in relation to the total number of interactions. Thus, Both the standardized Hurlbert's (B') and Smith's (FT) measure range between 0 for the most specialized case to 1.0 for extreme generalization (broadest niche). In the context of niche breadth, it has been shown that the Shannon-Wiener measure is most sensitive, while Hurlbert's and particularly Smith's measure are less sensitive for the selection of rare resources [21] (see also [20]). For the application in network analyses, however, both B' and FT may show some undesired properties. Generally, B', FT and d' are reasonably well correlated with each other across the species within a network (e.g., r[s ]= -0.49 between d' and B', and r[s ]= -0.36 between d' and FT for the 90 pollinators in the network of Vázquez and Simberloff [33], both p < 0.001). However, differences with d' are substantial when a highly specialized species interacts largely exclusively with a specialized partner, e.g. a specialized pollinator with a plant that is almost exclusively pollinated by this one. Imagine a scenario where one exclusive interaction occurs between a plant species and a pollinator species in a 3 × 3 matrix (Table 2). If the interaction between pollinator sp. 3 and plant sp. 3 is only infrequent (e.g. a[33 ]= 1), all indices show a high degree of specialization (d' = 1.0, B' = 0, FT = 0.14) for both partners. However, as the number of exclusive interactions (a[33]) increases, the values for both B' and FT of pollinator sp. 3 and plant sp. 3 show a highly undesired change towards generalization, although a higher a[33 ]is intuitively considered as extreme specialization (e.g., for a[33 ]= 50 the values for pollinator sp. 3 are B' = 0.31 and FT = 0.70), while only d' remains unaffected (d' = 1.0). FT is always larger than zero, and B' becomes larger than zero when the specialists interact more frequently than one of the other partners, thus when q[j ]> min(q[1], q[2], ... q[c]). Both FT and B' approach a value of 1.0 (maximum generalization) for very large a[33]. This undesired effect of FT and B' is not restricted to completely exclusive interactions between two partners. Table 2. Association matrix example. Fictive association matrix between three pollinator species and three plant species. Numbers in each cell are counts of interaction frequencies. Simulation of sampling effort and matrix architecture Two published plant-pollinator networks were selected to investigate the behavior of different specialization measures [32,33]. Both articles use their observed interaction matrices as a model to discuss network properties based on the number of links per pollinator or plant species, allowing a comparison of conclusions drawn. Both networks may be compared as they comprise relatively large datasets from temperate ecosystems, reporting interaction frequencies between plants and their floral visitors: the British meadow community studied by Memmott [32] involved 79 pollinator and 25 plant species (2183 pollinator visits observed), the forests in Argentina studied by Vázquez and Simberloff [33] involved 90 pollinator and 14 plant species (5285 visits). The datasets can be obtained from the Interaction Web Database [53]. We simulated a decreased sampling intensity in both networks using a rarefaction method in order to investigate how sampling effort affects the estimation of specialization indices. Real association matrices were reduced by randomly extracting interactions, e.g. from the total of m = 2183 visits in Memmott's web down to m = 5 visits (in steps of five, repeated ten times for each m). In order to compare the null model characteristics of the specialization measures, we simulated artificial matrices with randomly associated partners and plotted the indices against an increasing number of partners and/or total number of interactions. We assumed that the total frequency of participating species approximates a lognormal distribution, which is typical for biological communities [21,22,24]. All row and column totals were randomly generated from a lognormal distribution (μ = 50, ∑= 1) that was scaled to the desired total number of interactions. Ten different combinations of row and column totals were obtained for each matrix size and taken as template to randomly associate the partners five times, thus each matrix size was represented by 50 random associations. Authors' contributions NB1 conceived of the study and all authors (NB1, FM, NB2) were involved in designing the methods, analyses, interpretation and drafting the manuscript. We thank Diego Vázquez, Pedro Jordano, Thomas Hovestadt, and Michel Loreau for helpful comments and valuable discussion on earlier versions of this manuscript and the Interaction Web Database [53] for providing the datasets used here. 1. Olesen JM, Jordano P: Geographic patterns in plant-pollinator mutualistic networks. 2. Novotny V, Basset Y: Host specificity of insect herbivores in tropical forests. Proc R Soc London Ser B 2005, 272:1083-1090. Publisher Full Text 3. Waser NM, Chittka L, Price MV, Williams NM, Ollerton J: Generalization in pollination systems, and why it matters. Ecology 1996, 77:1043-1060. Publisher Full Text 4. Jordano P: Patterns of mutualistic interactions in pollination and seed dispersal: connectance, dependence asymmetries, and coevolution. Am Nat 1987, 129:657-677. Publisher Full Text 5. Bascompte J, Jordano P, Melian CJ, Olesen JM: The nested assembly of plant-animal mutualistic networks. Proc Natl Acad Sci USA 2003, 100:9383-9387. PubMed Abstract | Publisher Full Text | PubMed Central Full Text 6. Waser NM, Ollerton J, Eds: Plant-pollinator interactions: from specialization to generalization. Chicago: University of Chicago Press; 2006. 7. Ollerton J, Cranmer L: Latitudinal trends in plant-pollinator interactions: are tropical plants more specialised? Oikos 2002, 98:340-350. Publisher Full Text 8. Devoto M, Medan D, Montaldo NH: Patterns of interaction between plants and pollinators along an environmental gradient. Oikos 2005, 109:461-472. Publisher Full Text 9. Winemiller KO: Must connectance decrease with species richness? Am Nat 1989, 134:960-968. Publisher Full Text 10. Martinez ND: Constant connectance in community food webs. Am Nat 1992, 139:1208-1218. Publisher Full Text 11. May RM: Will a large complex system be stable? Nature 1972, 238:413-414. PubMed Abstract | Publisher Full Text 12. Rejmánek M, Starý P: Connectance in real biotic communities and critical values for stability of model ecosystems. Nature 1979, 280:311-313. Publisher Full Text 13. Vázquez DP, Simberloff D: Ecological specialization and susceptibility to disturbance: conjectures and refutations. Am Nat 2002, 159:606-623. Publisher Full Text 14. Dunne JA, Williams RJ, Martinez ND: Network structure and biodiversity loss in food webs: robustness increases with connectance. Ecol Lett 2002, 5:558-567. Publisher Full Text 15. Sahli HF, Conner JK: Characterizing ecological generalization in plant-pollination systems. Oecologia 2006, 148:365-372. PubMed Abstract | Publisher Full Text 16. Parrish JAD, Bazzaz FA: Difference in pollination niche relationships in early and late successional plant communities. Ecology 1979, 60:597-610. Publisher Full Text 17. Basset Y: Diversity and abundance of insect herbivores foraging on seedlings in a rainforest in Guyana. Ecol Entomol 1999, 24:245-259. Publisher Full Text 18. Herrera CM: Plant generalization on pollinators: species property or local phenomenon? 19. Bersier LF, Banasek-Richter C, Cattin MF: Quantitative descriptors of food-web matrices. 20. Hurlbert SH: Measurement of niche overlap and some relatives. Ecology 1978, 59:67-77. Publisher Full Text 21. Feinsinger P, Spears EE, Poole RW: A simple measure of niche breadth. Ecology 1981, 61:27-32. Publisher Full Text 22. Smith EP: Niche breadth, resource availability, and inference. Ecology 1982, 63:1675-1681. Publisher Full Text 23. Fonseca CR, Ganade G: Asymmetries, compartments and null interactions in an Amazonian ant-plant community. J Anim Ecol 1996, 65:339-347. Publisher Full Text 24. Kenny D, Loehle C: Are food webs randomly connected? Ecology 1991, 72:1794-1799. Publisher Full Text 25. Goldwasser L, Roughgarden J: Sampling effects and the estimation of food-web properties. Ecology 1997, 78:41-54. Publisher Full Text 26. Vázquez DP, Aizen MA: Asymmetric specialization: a pervasive feature of plant-pollinator interactions. 27. Kay KM, Schemske DW: Geographic patterns in plant-pollinator mutualistic networks: comment. 28. Vázquez DP, Morris WF, Jordano P: Interaction frequency as a surrogate for the total effect of animal mutualists on plants. Ecol Lett 2005, 8:1088-1094. Publisher Full Text 29. Borer ET, Anderson K, Blanchette CA, Broitman B, Cooper SD, Halpern BS, Seabloom EW, Shurin JB: Topological approaches to food web analyses: a few modifications may improve our insights. Oikos 2002, 99:397-401. Publisher Full Text 30. Memmott J: The structure of a plant-pollinator food web. Ecol Lett 1999, 2:276-280. Publisher Full Text 31. Vázquez DP, Simberloff D: Changes in interaction biodiversity induced by an introduced ungulate. Ecol Lett 2003, 6:1077-1083. Publisher Full Text 32. Dicks LV, Corbet SA, Pywell RF: Compartmentalization in plant-insect flower visitor webs. J Anim Ecol 2002, 71:32-43. Publisher Full Text 33. Vázquez DP, Aizen MA: Null model analyzes of specialization in plant-pollinator interactions. 34. Auerbach MJ: Stability, probability, and the topology of food webs. In Ecological communities: conceptual issues and the evidence. Edited by Strong DR, Simberloff D, Abele LG, Thistle AB. Princeton: Princeton University Press; 1984:413-436. 35. Bascompte J, Jordano P, Olesen JM: Asymmetric coevolutionary networks facilitate biodiversity maintenance. Science 2006, 312:431-433. PubMed Abstract | Publisher Full Text 36. Memmott J, Waser NM, Price MV: Tolerance of pollination networks to species extinctions. Proc R Soc London Ser B 2004, 271:2605-2611. Publisher Full Text 37. Johnson SD, Steiner KE: Generalization versus specialization in plant pollination systems. Trends Ecol Evol 2000, 15:140-143. PubMed Abstract | Publisher Full Text 38. Symons FB, Beccaloni GW: Phylogenetic indices for measuring the diet breadths of phytophagous insects. Oecologia 1999, 119:427-434. Publisher Full Text 39. Webb CO, Ackerly DD, McPeek MA, Donoghue MJ: Phylogenies and community ecology. Annu Rev Ecol Syst 2002, 33:475-505. Publisher Full Text 40. Novotny V, Basset Y, Miller SE, Weiblen GD, Bremer B, Cizek L, Drozd P: Low host specificity of herbivorous insects in a tropical forest. Nature 2002, 416:841-844. PubMed Abstract | Publisher Full Text 41. Jordano P, Bascompte J, Olesen JM: Invariant properties in coevolutionary networks of plant-animal interactions. Ecol Lett 2003, 6:69-81. Publisher Full Text 42. Vázquez DP: Degree distribution in plant-animal mutualistic networks: forbidden links or random interactions? Oikos 2005, 108:421-426. Publisher Full Text 43. Gotelli NJ: Null model analysis of species co-occurrence patterns. Ecology 2000, 81:2606-2621. Publisher Full Text 44. Fenster CB, Armbruster WS, Wilson P, Dudash MR, Thomson JD: Pollination syndromes and floral specialization. Annu Rev Ecol Evol Syst 2004, 35:375-403. Publisher Full Text 45. Montecarlo statistics on RxC matrices [http://itb.biologie.hu-berlin.de/~nils/stat/] webcite 46. Manly B: Randomization bootstrap and Monte Carlo methods in biology. London: Chapman and Hall; 1997. 47. Blüthgen N, Verhaagh M, Goitía W, Blüthgen N: Ant nests in tank bromeliads – an example of non-specific interaction. Insect Soc 2000, 47:313-316. Publisher Full Text 48. Patefield WM: An efficient method of generating random RxC tables with given row and column totals. Appl Stat 1981, 30:91-97. Publisher Full Text 49. Interaction web database [http://www.nceas.ucsb.edu/interactionweb/] webcite Sign up to receive new article alerts from BMC Ecology
{"url":"http://www.biomedcentral.com/1472-6785/6/9","timestamp":"2014-04-23T13:42:23Z","content_type":null,"content_length":"175222","record_id":"<urn:uuid:30e1a2b8-4c0d-4023-bdfe-c1119d829c0d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Resources Physics 401: Newton's 1st and 2nd Laws Physics 401: Newton's 1st and 2nd Laws. GPB Television Podcast for physics and physical science. Newton's First Law of Motion Khan Academy Youtube podcast on Newton's First Law of Motion (Part 2) presents the concepts of Newton's First Law as an explanation of a quiz. Physics 402: Newton's 2nd Law Physics 402: Newton's 2nd Law. GPB Television Podcast for physics and physical science. Physics 403: More of Newton's 2nd Law Physics 403: More of Newton's 2nd Law. GPB Television Podcast for physics and physical science. Physics 404: Newton's 3rd Law and Projectile Motion Physics 404: Newton's 3rd Law and Projectile Motion. GBP Television Podcast for Physics and Physical Science. Physics of Projectile Motion Physics 405: Projectile Motion. GPB Television Podcast for physics and physical science. Includes an embedded 30 minute video lesson,two note-taking guides and a problem worksheet (PDF). Physics 301: Analysis of Motion This is a GPB Television Podcast for Physics and Physical Science. This podcast talks about the motion of science. Physics 302: Motion Math This is a GPB Television Podcast on the science of motion. This podcast will explain how to calculate motion. Physics 303: Motion of Falling Objects This is a GBP Television Podcast on the science of motion. The podcast describes the math and science of falling object. Calculating Average Velocity or Speed Khan Academy, Khan works problems for velocity and speed Cacluating Velocity; Solving for time Khan Academy Podcast. Khan works problems in velocity in which he solves for time. Displacement from Time and Velocity Example This Khan Academy podcast works problems to calculation displacement from time and velocity. Newton's First Law of Motion, Part 3 This a podcast from Khan Academy. This is the third part of a four-part series on Newton's first law of motion. Math in Basketball Basketball player Elton Brand describes how he became an NBA star, then presents a challenge about the math behind the perfect free throw shot. Friction Quiz The concepts of frictions placed in a 10 question quiz to assess mastery.Quiz is computer graded. Informational Materials Physics 401: Newton's 1st and 2nd Laws Physics 401: Newton's 1st and 2nd Laws. GPB Television Podcast for physics and physical science. Physics 402: Newton's 2nd Law Physics 402: Newton's 2nd Law. GPB Television Podcast for physics and physical science. Physics 403: More of Newton's 2nd Law Physics 403: More of Newton's 2nd Law. GPB Television Podcast for physics and physical science. Physics 404: Newton's 3rd Law and Projectile Motion Physics 404: Newton's 3rd Law and Projectile Motion. GBP Television Podcast for Physics and Physical Science. Physics of Projectile Motion Physics 405: Projectile Motion. GPB Television Podcast for physics and physical science. Includes an embedded 30 minute video lesson,two note-taking guides and a problem worksheet (PDF). Calculating Average Velocity or Speed Khan Academy, Khan works problems for velocity and speed Rubber Band Airplane Lab You would not think that the rubber band planes that kids play with as toys today had any relevance in our history. But, in fact, the commercial airplanes that you ride on every day originated from the simple idea of the rubber band airplane. Many scientists’ ideas on aerodynamics started with a plane as simple as one powered by a rubber band. This website allows students to identify the best model of a rubber airplane by manipulating its weight and propeller speed. Physics Projectile Motion, Interactive Simulation This is an interactive simulation used to test the students calculations and predictions for projectile motion. Gravity and Orbits, Physics Interactive Simulation This is an interactive simulation for the effects of Gravity on orbiting bodies in space. Lunar Landing Can you avoid the boulder field and land safely, just before your fuel runs out, as Neil Armstrong did in 1969? Our version of this classic video game accurately simulates the real motion of the lunar lander with the correct mass, thrust, fuel consumption rate, and lunar gravity. The real lunar lander is very hard to control Motion in 2D Students will learn about position, velocity, and acceleration vectors. Move the ball with the mouse or let the simulation move the ball in four types of motion. Motion in 1D Explore the forces at work when you try to push a filing cabinet. Create an applied force and see the resulting friction force and total force acting on the cabinet. Charts show the forces, position, velocity, and acceleration vs. time. View a Free Body Diagram of all the forces (including gravitational and normal forces). Investigate how torque causes an object to rotate. Discover the relationships between angular acceleration, moment of inertia, angular momentum and torque Newton's Laws of Motion An interactive discovery.com website that allows a student to learn Newton's 3 Laws of Motion When Pigs Fly! Make Your Own Plane Students use this website to construct and fly their own plane. They can compete to see whose plane flies the furthest and longest. Projectile Motion This interactive makes learning fun either by teachers demonstrating projectile motion objects or students independently practicing by setting the angle, initial speed, and mass by firing various. Teachers and students can add air resistance to make a game out of this simulation by trying to hit a target. Forces and Motion The interactive tool allows students to explore the forces at work when pushing a filing cabinet. Students create an applied force to determine the resulting friction force and total force acting on the cabinet. Charts show the forces, position, velocity, and acceleration vs. time. View a Free Body Diagram of all the forces (including gravitational and normal forces). Interactive Projectile Motion Blast a Buick out of a cannon! Learn about projectile motion by firing various objects. Set the angle, initial speed, and mass. Add air resistance. Make a game out of this simulation by trying to hit a target. Gravity Launch This interactive game challenges students to launch their ship to dock with 1 or more stations in outer space. To do this, the student must select the thrust and angle for their ship to land successfully. THRUST: How much force your rocket ship will use to fly. ANGLE: Set the path you'll take off on your mission. LAUNCH: Blast off. Learning Activities Introduction to Vectors and Scalars This is a Khan Academy video lesson introducing Vector and Scalars. **First video in a four part series on displacement, velocity and time. Investigate how torque causes an object to rotate. Discover the relationships between angular acceleration, moment of inertia, angular momentum and torque Water Bottle Rocket Competition Participants will design, construct, and test a rocket made from a 2 liter plastic soda bottle. The goal is to design a rocket that will remain aloft for a maximum period of time. Every group will launch their rocket 3 times. Do not use the same rocket body (2 liter bottle) for more than two launches. The first 2 launches will be test launches. Your group must choose one or more aspects of the rocket to test. After each test launch you must analyze what hurt your rocket’s performance the most and redesign it before the next launch day. Your goal is to have the best possible rocket by the final launch day. Rubber Band Airplane Lab You would not think that the rubber band planes that kids play with as toys today had any relevance in our history. But, in fact, the commercial airplanes that you ride on every day originated from the simple idea of the rubber band airplane. Many scientists’ ideas on aerodynamics started with a plane as simple as one powered by a rubber band. This website allows students to identify the best model of a rubber airplane by manipulating its weight and propeller speed.
{"url":"http://alex.state.al.us/weblinks_category.php?stdID=41186","timestamp":"2014-04-21T04:55:29Z","content_type":null,"content_length":"124010","record_id":"<urn:uuid:6289183b-226e-4318-821f-8fdf2ed205dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Working with Fractional Powers May 23rd 2008, 10:20 AM Working with Fractional Powers Not sure how to solve the last part of this problem. The original equation reads: $\sqrt{xy^3} (\sqrt{x^5y} - \sqrt{xy^7})$ So then you raise the powers to get some nicer exponents to work which looks like this $x^\frac{1}{2}y^\frac{3}{2}(x^\frac{5}{2}y^\frac{1} {2}) - x^\frac{1}{2}y^\frac{3}{2}(x^\frac{1}{2}y^\frac{7} {2})$ and here is where I am lost, the author adds the exponents of the variables and ends up with How is the author getting x^6/2, y^4/2 when you distribute for the first part of the equation? I am completely clueless as to how she did this could someone please fill me in. Thanks! May 23rd 2008, 10:37 AM Not sure how to solve the last part of this problem. The original equation reads: $\sqrt{xy^3} (\sqrt{x^5y} - \sqrt{xy^7})$ So then you raise the powers to get some nicer exponents to work which looks like this $x^\frac{1}{2}y^\frac{3}{2}(x^\frac{5}{2}y^\frac{1} {2}) - x^\frac{1}{2}y^\frac{3}{2}(x^\frac{1}{2}y^\frac{7} {2})$ and here is where I am lost, the author adds the exponents of the variables and ends up with How is the author getting x^6/2, y^4/2 when you distribute for the first part of the equation? I am completely clueless as to how she did this could someone please fill me in. Thanks! The 1st term looks like this: $x^\frac{1}{2}y^\frac{3}{2}(x^\frac{5}{2}y^\frac{1} {2})$ To find the new exponent of x, add the two different exponents because this property holds: $x^a * x^b = x^{a+b}$ Therefore: $\frac{1}{2}+\frac{5}{2}=3$, and the new exponent for x is 3. Likewise, y now has an exponent of 2. May 23rd 2008, 10:41 AM Not sure how to solve the last part of this problem. The original equation reads: $\sqrt{xy^3} (\sqrt{x^5y} - \sqrt{xy^7})$ So then you raise the powers to get some nicer exponents to work which looks like this $x^\frac{1}{2}y^\frac{3}{2}(x^\frac{5}{2}y^\frac{1} {2}) - x^\frac{1}{2}y^\frac{3}{2}(x^\frac{1}{2}y^\frac{7} {2})$ and here is where I am lost, the author adds the exponents of the variables and ends up with How is the author getting x^6/2, y^4/2 when you distribute for the first part of the equation? I am completely clueless as to how she did this could someone please fill me in. Thanks! The rule states : $a^b a^c=a^{b+c}$ $x^{\frac 12}y^{\frac 32}(x^{\frac 52}y^{\frac 12})=x^{\frac 12}y^{\frac 32}x^{\frac 52}y^{\frac 12}=x^{\frac 12}x^{\frac 52}y^{\frac 32}y^{\frac 12}=\dots$ May 23rd 2008, 10:52 AM Thanks for the input everyone! I just realized I made a mistake. At times I forgot the author assumes that one is not utilizing a calculator. 1/2+5/2 she had 6/2 I just punched it in the calculator and got 3 and skipped one of her steps. Very stupid of me for not seeing this earlier, thanks!
{"url":"http://mathhelpforum.com/algebra/39409-working-fractional-powers-print.html","timestamp":"2014-04-17T06:12:33Z","content_type":null,"content_length":"10993","record_id":"<urn:uuid:1f6a62d8-42dd-4847-a6dc-308c0e394d33>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Circle pdf' printed from http://nrich.maths.org/ A random variable $X$ has a zero probability of taking non-positive values but has a non-zero probability of taking values in any range $[0, x]$ for any $x> 0$. The curve describing the probability density function forms an arc of a circle. Which of these are possible shapes (ignoring the scale) for the probability density function $f(x)$? Identify clearly the mathematical reasons, using the correct terminology, for your answers. If the radius of the circle forming the arc of the pdf is $1$, what is the maximum value that the random variable could possibly take? Which of the other arcs are possible candidates for probability density functions? Can you invent mathematical scenarios which would lead to these pdfs?
{"url":"http://nrich.maths.org/6420/index?nomenu=1","timestamp":"2014-04-17T12:52:01Z","content_type":null,"content_length":"3688","record_id":"<urn:uuid:ded710e1-eb94-4458-9489-ca7d8a74c835>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Regression: reversing the roles of X and Y I'm a little confused about your question? Are you asking whether regressing X on Y will always give the same coefficients, or whether it is ever possible to get the same ones? (X_i, Y_i), i=1,2,...n Y hat is a fitted (predicted) value of Y based on fixed values of X. Y hat = b0 + b1 *X with b0 and b1 being the least-square estimates. For X hat, we are predicting the value of X from values of Y which would produce a different set of parameters, b0' and b1'. Is there any general mathematical relationship linking b0', b1' and b0, Thanks for answering!
{"url":"http://www.physicsforums.com/showpost.php?p=2212306&postcount=4","timestamp":"2014-04-20T21:20:48Z","content_type":null,"content_length":"8322","record_id":"<urn:uuid:2d87e391-8612-445a-9444-bf69d6cfa4e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Question in matlab January 6th 2011, 01:48 AM #1 Sep 2009 Question in matlab Hi All I want to ask question on matlab if we write the following statement in M-file for i=1:10 for j=1:10 so the program print the following now the question How can create a program print the following Best regard Hi All I want to ask question on matlab if we write the following statement in M-file for i=1:10 for j=1:10 so the program print the following now the question How can create a program print the following Best regard for i=1:10 for j=1:10 Look at the code, see what it does and why, then make the change that is needed to achieve the effect that you want. It summarise, you have enough information to do what you want if you make the least bit of effort to understand what you have been shown. January 6th 2011, 02:03 AM #2 January 6th 2011, 02:31 AM #3 Sep 2009 January 6th 2011, 05:42 AM #4 Grand Panjandrum Nov 2005 January 8th 2011, 01:21 AM #5 Sep 2009 January 8th 2011, 04:07 AM #6 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/math-software/167580-question-matlab.html","timestamp":"2014-04-16T16:11:19Z","content_type":null,"content_length":"46527","record_id":"<urn:uuid:a1c0b707-2069-452c-bafe-286a8a85dd58>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Smoothsort Demystified Last Major Update: January 7, 2011 A few years ago I heard about an interesting sorting algorithm (invented by the legendary Edsger Dijkstra) called smoothsort with great memory and runtime guarantees. Although it is a comparison sort and thus on average cannot run faster than Ω(n lg n), smoothsort is an adaptive sort, meaning that if the input is somewhat sorted, smoothsort will run in time closer to O(n) than O(n lg n). In the best case, when the input is sorted, smoothsort will run in linear time. Moreover, smoothsort is an in-place sorting algorithm, and requires O(1) auxiliary storage space. Compare this to mergesort, which needs O(n) auxiliary space, or quicksort, which needs O(lg n) space on average and O(n) space in the worst case. In the worst case, smoothsort is an asymptotically optimal O(n lg n) sort. With all these advantages, smoothsort seemed like an excellent sort to code up and add to my archive of interesting code project, and I set out to learn enough about it to build my own implementation. Unfortunately, I quickly found out that smoothsort is one of the least-documented sorts on the web. Sure, you can find many sites that mention smoothsort or its runtime characteristics, and in a few cases sites that provide an implementation, but hardly any sites explained the full intuition behind how the sort worked or where the runtime guarantees came from. Moreover, Dijkstra's original paper on smoothsort is extremely difficult to read and gives little intuition behind some of the trickier optimizations. After spending a few days reading over existing sites and doing a bit of my own work, I finally managed to figure out the intuition behind smoothsort, as well as the source of many of the complex optimizations necessary to get smoothsort working in constant space. It turns out that smoothsort is actually a generalization of heapsort using a novel heap data structure. Surprisingly, I haven't found this structure mentioned anywhere on the web, and this page may be the first time it's been mentioned online. This page is my attempt to transfer the algorithm's intution so that the beauty of this algorithm isn't lost in the details of a heavily-optimized sorting routine. Although there is a (fairly important!) proof in the middle of this writeup, most of the intution should be immediate once you see the high-level structure. It's a great algorithm, and I hope you find it as interesting as I do. Background review: Heapsort Before I actually go over the intuition behind smoothsort, let's take a few minutes to review a similar algorithm, heapsort. This may seem like an unusual first step, but as you see the evolution of the smoothsort algorithm it will become apparent why I've chosen to present things this way. The heapsort algorithm is based on the binary heap data structure. (If you aren't familiar with binary heaps, please read over the link at Wikipedia to get a better sense of how it works before proceeding. It's well-worth your time and I guarantee that you'll like where we're going with this). Conceptually, a binary heap is a complete binary tree where each node is larger than its children. (We could equivalently use min-heaps, where each node is smaller than its children, but the convention in heapsort is to build a max-heap). Because binary heaps are complete, the height of a binary heap over n elements is O(lg n). Moreover, it's possible to build a binary heap in linear time. Finally, given a binary heap, one can remove the maximum element from the heap and rearrange the nodes to restore the heap property in O(lg n) time. This gives a naive implementation of heapsort, which works as follows: 1. Construct a max-heap from the input sequence in O(n) time. 2. Set x to the index of the last spot in the array. 3. While the heap is not yet empty: 1. Remove the maximum element from the heap. 2. Place that element at position x, then move x to the previous position. 3. Rebalance the max-heap. A sketch of the proof of correctness for this algorithm is reasonably straightforward. We begin by putting all of the elements in the range into the max-heap, and as we dequeue the elements we end up getting everything in the original sequence in descending order. We then place each of these elements at the last unused spot in the array, essentially placing the ascending sequence in reverse order. Since we can build the max-heap in O(n) time and rebalance the heap in O(lg n) time, the overall runtime is O(n + n lg n) = O(n lg n), which is asymptotically optimal. In practice, though, no one actually implements heapsort this way. The problem is that the above algorithm ends up using O(n) extra memory to hold the nodes in the max-heap. There is a much better way of implementing the heap implicitly in the array to be sorted. If the array is zero-indexed, then the parent of a node at position i is at position ⌊(i + 1) / 2⌋ - 1; conversely, a node's children (if they exist) are at positions 2i + 1 and 2i + 2. Because the computations necessary to determine a node's parent and children can be done in a handful of assembly instructions, there's no need to explicitly store it anywhere, and the O(n) memory that was dedicated to storing links can instead be recycled from the O(n) memory of the input itself. For example, here's a max-heap and its corresponding implicit representation: The in-place heapsort works by rearranging the array elements into an implicit binary heap with its root in the first position and an arbitrary leaf in the last position. The top element of the heap is then dequeued and swapped into its final position, placing a random element at the top of the binary heap. The heap is then rebalanced and this process repeated until the elements are sorted. Because the heap is represented implicitly using the array itself, only a constant amount of memory must be used beyond the array itself to store the state of the algorithm (in particular, things like the next position to consider, data for the rebalance heap step, etc.) Consequently, this version of heapsort runs in O(n lg n) time and O(1) auxiliary space. This low memory usage is very attractive, and is one of the reasons that heapsort is so prevalent as a sorting algorithm. A High-Level Overview of Smoothsort The one major shortcoming of heapsort is that it always runs in Θ(n lg n). The reason for this has to do with the structure of the max-heap. When we build up the implicit binary heap in heapsort, the largest element of the heap is always on the leftmost side of the array, but ultimately the element must be on the right side. To move the max element to its proper location, we have to swap it with an essentially random element of the heap, then do a bubble-down operation to rebalance the heap. This bubble-down operation takes Ω(lg n) time, contributing to the overall Ω(n lg n) runtime. One obvious insight to have is this: what if we build a max-heap but store the maximum element at the right of the sequence? That way, whenever we want to move the maximum element into the next open spot at the end of the array, we're already done - the maximum element will now be in the correct position. All we need to do now is rebalance the rest of the elements. Here, we encounter a somewhat cool property. When we take off the root of the heap and "break the heap open" to expose the two max-heaps living under the root. Because each of these are roots of max-heaps, they're bigger than all of the elements of their respective max-heaps, and so one of these two elements is the largest element of what remains. The problem now, though, is that we've fractured our nice max-heap into two distinct max-heaps, and the only way we know how to rebalance them is to swap up a leaf and bubble it down. This is the killer step, since it's almost certainly going to take Ω(lg n) time, forcing our runtime to be Θ(n lg n). We're aiming to get O(n) in the best case, and so this isn't going to work. Here is the key idea that makes smoothsort possible - what if instead of having a single max-heap, we have a forest of max-heaps? That is, rather than having a single max-heap, we'll maintain a sequence of max-heaps embedded into the array. That way, it's not a problem if we end up breaking apart one heap into multiple parts without putting it back together. Provided that we don't end up having too many heaps at any one time (say, O(lg n) of them), we can efficiently find the largest element of what remains. At a high level, smoothsort works as follows. First, we make a linear scan over the input sequence, converting it into a sequence of implicit max-heaps. These heaps will not be binary heaps, but rather an unusual type of heap described below called a Leonardo heap. In the course of doing so, we maintain the property that the heaps' top elements are in ascending order, forcing the rightmost heap to hold the maximum of the remaining elements. Once we've done this, we'll continously dequeue the top element of the rightmost max-heap, which is in the correct location since it's in the rightmost unfilled spot. We'll then do some manipulations to reestablish the heaps and the sorted property. These guarantees, plus a bit of clever mathematics, guarantee that the algorithm runs quickly on sorted sequences. The initial implementation of smoothsort that we'll do will end up having excellent runtime guarantees, but high memory usage (O(n)). We'll then see an optimization that compresses this down to O(lg n), and finally a theoretically dubious trick that ends up reducing the space requirement to O(1). Leonardo Trees Before we can get to the actual smoothsort implementation, we need to discuss the structure of the heaps we'll be building up. These heaps are based on the Leonardo numbers, a sequence of numbers similar in spirit to the better-known Fibonacci numbers. I have actually not seen these heaps used anywhere other than smoothsort, and so for lack of a better name I'll refer to them as Leonardo The Leonardo numbers (denoted L(0), L(1), L(2), ...) are given by the following recursive formulation: • L(0) = 1 • L(1) = 1 • L(n + 2) = L(n) + L(n + 1) + 1 For reference, the first few Leonardo numbers are 1, 1, 3, 5, 9, 15, 25, 41, 67, and 109. A Leonardo tree of order k (denoted Lt[k]) is a binary tree with a rigidly-defined shape. Leonardo trees are defined recursively as follows: • Lt[0] is a singleton node. • Lt[1] is a singleton node. • Lt[n + 2] is a node with two children, Lt[n] and Lt[n+1] (in that order) You can show with a fairly simple inductive proof that the number of nodes in Lt[k] is L(k), hence the name. To make it a bit easier to intuit the structure of these trees, here are some pictures of the first few Leonardo trees: This seems like a pretty random data structure choice... why is it at all useful? And where does it come from? It turns out that these heaps are not at all chosen randomly. In particular, there is a useful result about Leonardo numbers (and consequently Leonardo trees) that makes them invaluable in the smoothsort algorithm. This is the only proof in this entire writeup, but I strongly encourage you to read it. The proof of this result will be adapted into the main loop of the algorithm, so a good intuitive understanding of what's going here might be helpful later on. Lemma: Any positive integer can be written as the sum of O(lg n) distinct Leonardo numbers. Proof: We'll begin by proving the first half of this claim by proving a much stronger claim: for any positive integer n, there is a sequence x[0], x[1], ..., x[k] such that: 1. ∑[i]L(x[i]) = n 2. x[0] < x[1] < ... < x[k]. 3. If x[0] = 0, then x[1] = 1. 4. For any i > 0, x[i] + 1 < x[i+1] That's a lot to process, so let's try to give an intuitive feel for what's happening. We're claiming that there is some sequence of Leonardo numbers (indexed by the ascending sequence x[0], x[1], x [2], etc.) that sums up to the number n. Furthermore, the sequence doesn't use the Leonardo number L(0) until first using L(1) (since L(0) = L(1), this makes the proof a lot easier). Finally, if the sum contains two consecutive Leonardo numbers, then those are the smallest two Leonardo numbers in the sequence. Recall that the Leonardo numbers are defined as L(n + 2) = L(n + 1) + L(n) + 1. This last claim states that whenever there are two adjacent Leonardo numbers in the sequence, they must be the smallest numbers in the sequence so that all merges of Leonardo numbers happen at the end. This claim isn't strictly necessary for the correctness proof, but will show up in the smoothsort algorithm and I've included it here. The proof of this claim is by induction on n. As a base case, if n = 0, then take the x's to be the empty sequence and all four claims are satisfied. For the inductive step, assume that for some number n the claim holds and consider the number n + 1. Start off by writing n = L(x[0]) + L(x[1]) + L(x[2]) + ... + L(x[k]) for some sequence of x's meeting the above criteria. There are then three cases to consider: Case 1: x[0] + 1 = x[1]. In this case, note that L(x[0]) + L(x[1]) + 1 = L(x[0]) + L(x[0] + 1) + 1 = L(x[0] + 2). Next, note that x[0] + 2 = x[1] + 1 &lt x[2]. If we let y[0] = x[0] + 2 and then let y[i] = x[i+1] for i > 0, then ∑[i]L(y[i]) = y[0] + ∑[i = 1]y[i] = L(x[0] + 2) + ∑[i=2]L(x[i]) = 1 + L(x[0]) + L(x[1]) + ∑[i=2]L(x[i]) = 1 + n, so the first claim holds. The second claim holds for the y's by the above logic relating x[0] + 2 to x[2]. Claim (3) holds since y[0] ≠ 0, and claim (4) because for any i > 0, y[i] + 1 = x[i+1] + 1 < x[i+2] = y[i+1]. Case 2: x[0] = 1, and case 1 does not apply. Since case 1 does not apply, we know that x[0] + 1 < x[1]. Let y[0] = 0, y[i] = x[i - 1] for i > 0. Then ∑[i]y[i] = L(0) + ∑[i]x[i] = 1 + n, so the first claim holds. The second claim holds because it held for the x's initially, x[0] = 1, and the new first element (y[0]) is 0. The third claim holds because y[0] = 0 and y[1] = x[0] = 1. Finally, because we aren't in case 1, x[0] + 1 < x[1], so for any i > 1, y[i] + 1 = x[i - 1] + 1 < x[i] = y[i+1], so the claim holds for n + 1. Case 3: x[0] ≠ 1, and case 1 does not apply. We know that x[0] ≠ 0, since if it were so, by (3) x[1] = 1, and so case 1 would apply, a contradiction. Thus x[0] > 1. Then let y[0] = 1, y[i] = x[i - 1] for i > 0. Using similar logic to the above (and the fact that L(0) = L(1) = 1), the sum of these y's is equal to n + 1. Since x[0] > 1, the y's are in ascending order. x[0] ≠ 0, so the third claim does not apply. Finally, since case one did not apply, x[0] + 1 < x[1], and so the final claim holds, and the claim holds for n + 1 These three cases are exhaustive and mututally exclusive, and so the induction is complete. Finally, we need to show that each of the sequences described above uses at most O(lg n) Leonardo numbers. To do this, we show that L(k) = 2 F(k + 1) - 1, where F(k + 1) is the (k+1)st Fibonacci number. From there, we have that L(k) > (2 / √5)φ^(k + 1), with φ = (1 + √ 5) / 2 by the closed-form equation for Fibonacci numbers.. We then have that for any n, if we let k = ⌈ log[φ]((√5/2) n) ⌉ = O(lg n), we have that L(k) > n. Since n can be written as the sum of unique Leonardo numbers, none of which can be bigger than n (and thus no greater than L(k)), this means that n can be written as the sum of some subset of the first k Leonardo numbers, of which there are only O(lg n). The proof that L(k) = 2F(k + 1) - 1 is by induction on k. For k = 0, 2F(1) - 1 = 2 - 1 = 1 = L(0). For k = 1, 2F(2) - 1 = 2 - 1 = 1 = L(1). Now assume that for all k' < k, the claim holds. Then L(k) = L(k - 2) + L(k - 1) + 1 = 2F(k - 1) - 1 + 2F(k) - 1 + 1 = 2(F(k - 1) + F(k)) - 1 = 2F(k + 1) - 1. Leonardo Heaps Much in the same way that you can build a binomial heap using a collection of binomial trees (not required reading, but highly recommended!), you can build a "Leonardo heap" out of a collection of Leonardo trees. A Leonardo heap is an ordered collection of Leonardo trees such that: 1. The sizes of the trees is strictly decreasing. As an important consequence, no two trees have the same size. 2. Each tree obeys the max-heap property (i.e. each node is at least as large as its children) 3. The roots of the trees are in ascending order from left to right. Here is a sample Leonardo heap: Notice that properties (1) and (3) of Leonardo heaps mean that the smallest heap has the largest root and the largest heap has the smallest root. The roots increase from left to right. In order for Leonardo heaps to qualify as max-heaps, we'll need to implement some basic functionality on them. In particular, we'll show how to implement heap insert and dequeue-max. Inserting into a Leonardo heap. There are three steps to inserting into a Leonardo heap. First, we need to ensure that the resulting heap has the correct shape; that it's a collection of Leonardo trees of unique size stored in descending order of size. Next, we need to ensure that the tops of the heaps are sorted in ascending order from left to right. Finally, we'll ensure that each of the Leonardo trees obeys the max-heap property. Let's start by seeing how to get the shape right. Earlier we proved that each number can be partioned into a sum of descending, unique Leonardo numbers obeying certain properties, and the algorithm for inserting into a Leonardo heap is based on the three cases of the proof. We begin by checking whether the two smallest Leonardo trees correspond to consecutive Leonardo numbers L(k) and L(k + 1). If so, we create a new Leonardo tree of type Lt [(k+2)] with the inserted element as the root. Otherwise, if the smallest Leonardo tree is of size L(1), we insert the new element as a singleton Leonardo tree of size L(0). Finally, if neither other case applies, we insert the new element as a singleton Leonardo tree of size L(1). The proof that this ends up producing a sequence of Leonardo trees of decreasing size is almost identical to the earlier proof, and so I omit it. Here are a few pictures of different insertions into a Leonardo heap: A simple merge of Lt[2] and Lt[1] with a new node to form Lt[3] Creating a new Lt[1] in addition to the existing Lt[3]. Merging together the Lt[2] and Lt[1] trees into an Lt[3] tree, ignoring the Lt[4] tree as it isn't one of the two smallest. Now, let's see how to guarantee that the topmost elements of each heap are sorted in ascending order from left to right. Essentially, this step does an insertion sort of the new value into the list defined by the roots of all of the Leonardo trees, though it's a bit more complex than that. In particular, because the new element is not necessarily the largest element of the tree containing it (because we haven't yet restored the heap property), if we naively swap the new element down until it comes to rest, we can't be guaranteed that the heap property will hold for any of the heaps that were modified. For example, consider this setup, which corresponds to the state of the Leonardo heap in the third of the above examples: An erroneous swap Here, the root of the rightmost heap (the number 54) was just added. If we naively insertion sort it down to its proper resting place, then in the end we might need to restore the heap property for each of the heaps we swapped roots with, as seen by the fact that the new root of the rightmost heap is smaller than any of its elements. The problem with this is that what we really want to do is insertion sort on the values that will ultimately be at the roots of the trees. Furthermore, we'd like to do this as efficiently as possible; that is, without reheapifying each tree at every step. Fortunately, we can do this fairly easily. Given a "mostly-heapified" Leonardo tree (one where the root may be out of place, but the rest of the structure is valid), we can guarantee that the node that ultimately ends up being the root of the tree is either the root or the roots of one of its two children. Consequently, we do a modified insertion sort, swapping the root of the preceding tree with the current one only if its root is bigger than the new element and the roots of its child nodes. Note that if this is true, after the swap the tree that used to contain the new element is now a correctly-balanced heap, since the new root is bigger than either of the roots of the subtrees. Finally, once we end up in a situation where the new root is atop the correct heap, we can use the heapify operation originally developed for binary heaps to restore the heap property to that tree. At this point, as mentioned above, all of the trees to the right are valid max-heap Leonardo trees, the current tree is valid, and the trees to the left are all unchanged. Moreover, the roots of the trees are in descending order, since we used an (albeit modified) insertion sort to rearrange them. Let's consider the runtime of this operation. Creating a new Leonardo tree from the new element and (possibly) the two preceding trees can be done in O(1). The insertion sort step might move the new element across the tops of at most O(lg n) trees (since, as mentioned before, the partitioning of the n elements into distinct Leonardo numbers uses at most O(lg n) such numbers), and when it finally comes to rest, it's inserted into a Leonardo tree of order at most O(lg n). A quick inductive argument can be shown that the height of a Leonardo tree of size k is O(k), and so this bubble-down step takes at most O(lg n) time, netting an insert time of O(lg n). However, what if the element we're inserting is the largest element in the heap? In that case, the heap-building runtime is the same, but the time to insertion-sort the element into place is now O(1) instead of O(lg n), and the time to heapify the tree containing the new element is also O(1) since no rearrangements are made. In other words, inserting a new max element into a Leonardo heap takes time O(1). This is crucial to getting smoothsort to run in O(n) time on already-sorted inputs. Dequeuing elements from a Leonardo heap.. This process is similar to the process for building a Leonardo heap, though with a bit more bookkeeping. We know that the largest element is atop the smallest heap, and so we can dequeue it quite easily. There are now two cases to consider, depending on what kind of heap the last element was in. If it was a Lt[0] or Lt[1] heap, then all of the guarantees we had about the heap structure still hold, since all we did was remove a heap from the front of the list. Otherwise, the heap has two child heaps which have just been "exposed" to the rest of the trees. To rebalance the heap, we apply the modified insertion sort algorithm to reposition the root of the leftmost tree, then heapify whichever tree the root ends up in. We then do the same for the rightmost tree. Once this step is complete, all of the heap properties are satisfied and we are done. Here are some examples of Leonardo heap dequeues: Dequeuing from this Leonardo heap splits the Lt[3] heap in two and forces a rebalance. Dequeuing from this heap deletes the last element and does not necessitate a rebalance. What is the runtime of this algorithm? As mentioned earlier, the insertion-sort-and-heapify operation runs in O(lg n) time in the worst case. However, if the roots of whatever heaps were just exposed are already in the correct position (i.e. neither one needs to be moved)? In this case, the dequeue operation is O(1). This happens if the input is already sorted to some extent, and in particular if the elements fed in to the Leonardo heap were already sorted. Consequently, using a Leonardo heap to sort a range of already-sorted elements takes time O(n). A First Attempt at Smoothsort From this definition of Leonardo heaps alone, we can get a first approximation of the smoothsort algorithm. This algorithm, which looks surprisingly similar to regular heapsort, is as follows: 1. Construct a Leonardo max-heap from the input sequence in O(n) time. 2. Set x to the index of the last spot in the array. 3. While the heap is not yet empty: 1. Remove the maximum element from the heap. 2. Place that element at position x, then move x to the previous position. 3. Rebalance the max-heap. In the worst case, this algorithm runs in O(n lg n), since each insert or removal could run in O(lg n) time. If the input sequence is already sorted, though, the algorithm will run in O(n) time. This current version of smoothsort uses O(n) memory and is not in-place, but is still pretty elegant nonetheless. The rest of this page deals with how to whittle down the memory usage to O(1). "Mostly Implicit" Leonardo Heaps in O(lg n) Space The naive heapsort algorithm runs in O(n lg n) and uses O(n) memory to maintain the explicit binary heap. Switching from an explicit representation of the max heap to an implicit representation cuts the memory usage down to O(1) without sacrificing any performance. Can we do the same to the Leonardo heap? The answer is yes, but the method is somewhat indirect. We can move from an explicit Leonardo heap that uses O(n) memory to a "mostly implicit" Leonardo heap that requires only O(lg n) extra space by using the input array to be sorted to encode the heap. From there, only a bit of hacky mathematics stops us from fitting things into O(1). We'll begin our discussion by talking about a way of implicitly representing a single Leonardo tree using O(1) auxiliary storage space. This ends up not being particularly difficult and can be done inductively. For starters, Lt[0] and Lt[1], the first two Leonardo trees, are both a single node and can easily be represented implicitly in an array. Then, given a Leonardo tree of any other order k > 1, we can represent it as the concatenation of its child of size k - 1, then its child of size k - 2, and then its root node. For example, here is a Leonardo tree of order 4 and its corresponding Given such a representation, how do we navigate around in it to get from the root to its subtrees? Well, we know that the root is the rightmost element. If we take one step to the right, we're looking at the root element of the smaller subtree. If we then jump backwards by the size of that tree, we're looking at the root element of the larger of the two subtrees. This gives us an easy procedure for navigating around implicit Leonardo trees. Assuming we are looking at the encoding of a Leonardo tree of order k in a zero-indexed array of size L(k): • The root of the tree is at position L(k) - 1. • The root of the Lt[k-1] subtree is at position L(k - 1) - 1. • The root of the Lt[k-2] subtree is at position L(k) - 2. Assuming we have O(1) access to each Leonardo number at a given position, we can descend one level in the tree in constant time. We can guarantee this by memoizing the result of each computation of a value L(k), or by precomputing every single L(k) less than the maximum sequence length representable on the given machine. However, this discussion only talks about how to represent a single Leonardo tree implicitly, not a forest of them as we've done in a Leonardo heap. Fortunately, with a little extra overhead to track where each representation starts and ends, we can easily adapt it to represent entire Leonardo heaps. The idea is simple - we encode the Leonardo heap implicitly as the concatenation of all of the representations of all of its trees (in descending order of size), along with an auxiliary list storing the sizes of each of the heaps. Because each individual implicit Leonardo tree has its root at the rightmost element, the rightmost element of the entire array must be the root of the smallest heap. The information in the auxiliary list then lets us locate any tree in the heap in O(k) time, where k is the length of the list, by starting at the leftmost tree, then skipping backwards past the lengths of each intermediary tree until we arrive at our destination. For example, here is a Leonardo heap and its corresponding implicit representation: As mentioned earlier, any Leonardo heap of size n has at most O(lg n) trees in it. This means that the list of tree sizes is therefore of size O(lg n), and we can look up the location of any tree in the heap in O(lg n) time. One major advantage of this representation is that it naturally supports Leonardo heap insertion and removal. When inserting a new element into the Leonardo heap, we can check in O(1) time whether the two rightmost heaps are mergable by checking whether the last two elements of the size list differ by one. If so, we can merge them implicitly in constant time by adding the new element to the end of the array, then replacing the last two entries of the size list with the size of the resulting heap, as shown here: From here, we can apply the pseudo-insertion sort and heap rebalance operations with only a constant factor more work to look up the position of the previous heaps and the values of the roots of their child heaps. The cases where we insert a new tree of type Lt[0] or Lt[1] are also easily accomplished. We can check which type to insert by looking at the last element of the heap list in O(1), then appending the (singleton) representation of these trees to the array. In either case, we add the proper size information to the end of the heap list in O(1). Perhaps the biggest advantage of this mostly-implicit representation is that it allows for an efficient dequeue max that leaves the largest element of the heap in its proper place in the sorted array. Given a mostly-implicit representation of a Leonardo heap, the maximum element is always in the rightmost spot in the array. To dequeue it, we simply leave it in place, treat the rest of the elements as the remaining Leonardo heap, then do the rebalance operation. Rebalancing is similar to the original case, though we need to make corresponding changes to the size list in addition to everything else. In particular, on dequeuing an element in a tree of type Lt[0] or Lt[1], we simply discard the last entry from the size list. On dequeuing an element of a tree of order k > 1, we represent the newly-exposed trees by replacing the last entry of the size list with two new entries k - 1 and k - 2 (in that order). In short, this "mostly-implicit" representation allows us to perform all of the normal operations on a Leonardo heap while reducing the memory usage from O(n) to O(lg n). Given this mostly-implicit Leonardo heap implementation, we can rewrite our smoothsort implementation accordingly: 1. Construct a mostly-implicit Leonardo max-heap from the input sequence. 2. While the heap is not yet empty: 1. Remove the maximum element from the heap. 2. Place that element at the back of the sequence. 3. Rebalance the max-heap. The beauty of this sorting algorithm is that, at a high level, it's identical to what we had before. There is a strong connection between priority queues and sorting algorithms at work here - the more we refine our priority queues, the better our sorting algorithms get. Implicit Leonardo Heaps in O(1) Space At this point we have an extremely good sorting algorithm: it's adaptive and uses only O(lg n) memory. But to truly round out the algorithm, we need to further cut down on its space usage. Our goal will be to get this entire algorithm working with only O(1) auxiliary storage space. This step is going to be extremely difficult, and will require a combination of clever bitwise hackery and amortized analyses. The basic idea of this next step is to take the size list and compress it down from using O(lg n) space to using O(1) space by encoding the size list in a specially-modified bitvector. Rather than diving in headfirst and looking at the final result (which is, by the way, fairly terrifying), let's ease into it by reviewing a few simple properties of Leonardo numbers that we've talked about If you'll recall, we proved that when partitioning an integer into a sequence of Leonardo numbers, we can do so such that the numbers have unique order (i.e. we don't use the same Leonardo number twice). This means that for each Leonardo number, either the number is in the size list or it isn't. This means that if all we care about is whether a tree of a particular order exists in the size list, we can store the answer using a single bit of information Moreover, we know that when splitting a sequence apart into Leonardo trees, those trees always appear in descending order. Consequently, if we knew which trees were in the Leonardo heap, we could recover their order implicitly by simply finding which tree was smallest. This suggests an entirely different approach to storing the size list - a bitvector with enough entries to hold all the Leonardo numbers that might reasonably come up during the algorithm's execution. Before we go into some of the subtleties or complexities involved with using a bitvector, we should first ask an important question - how many bits are we going to need for this vector? We know that for any sequence of length n, there can be at most O(lg n) Leonardo trees in the heap, and so we'll need O(lg n) bits. Amazingly, we can encode all of these O(lg n) bits using O(1) machine words! To see this, we'll first make an assumption that the computer we're on has transdichotomous memory. Informally, a machine is transdichotomous if each machine word has size at least Ω(lg n). The logic behind this idea is that each machine word is large enough to store a pointer to any other location in memory. Virtually all computers have this property - on a 32-bit machine, there are 2^32 addressable bytes, and four bytes collectively can store a pointer anywhere in memory. A similar claim holds for 64-bit machines. Note, however, that this is not the same thing as claiming that lg n = O(1). That would be tantamount to saying that the input never gets larger than some size. Rather, the idea is that when we have the input to our problem, we can only run the sorting algorithm on it if we go to a machine that has sufficient space for it, and on that machine we assume that the word size is at least Ω(lg n). In other words, as our problem size grows, so does the size of each word of memory we're using. The fact that we only have O(lg n) bits in our bitvector, coupled with the fact that each machine's word size is Ω(lg n), means that while the number of bits necessary goes up as the problem size increases, the number of words needed to encode those bits is a constant. In fact, it's actually a fairly small constant! Rearranging our above math for the index k of the smallest Leonardo number bigger than some n, if we let k = ⌈ log[φ]((√5/2)n) ⌉, then L(k) > n. Now suppose that we are working on a machine whose address space is of size 2^i; then if we pick k = ⌈ log[φ]((√5/2) 2^i) ⌉ = ⌈ i log[φ]2 + log[φ](√5 / 2)⌉ ≈ ⌈ 1.44042009 i + 0.231855848 ⌉ ≤ 1.7i, we only need at most 1.7i bits to encode all the Leonardo numbers that can fit on that machine. If we assume that there there are 2^ i words on the machine, each word of which has i bits, then 1.7 machine words suffices! Rounding up, we need only two machine words to store a bit for each possible Leonardo number! But let's not get ahead of ourselves... we still have a long way to go before we'll get the bitvector working the way we want it to. In particular, we need to look at exactly how we're using the size list in our Leonardo heap to see if we can adapt the operations from an explicit list of sizes to a highly compressed bitvector. When inserting into a Leonardo heap, we need to perform several key steps. The first step is deciding what the insertion will do - will it merge two old trees into a new tree, or insert a new tree of order one or zero? In order to answer this question, we need to know the order of the smallest tree in the heap. With the original, uncompressed size list representation, this was easily accomplished in O(1); we just looked at the first entry of the list. But with our new bitvector representation, this isn't going to work. In fact, without some sort of optimization, we might have to look at all of the O(lg n) bits to decide which one is the smallest (for example, using a linear search over the bits). Since we're doing n insertions, if we're not careful, this could take time Ω(n lg n), eating up our O(n) best-case behavior. One idea that might come to mind as an easy way to fix this problem would be to keep a pointer into our bitvector indicating what the smallest bit is that's currently set. This then gives us O(1) lookup of the smallest tree, fixing the above problem. For reasons that will become a bit clearer later on, we'll instead opt to use another strategy that will make the analysis easier. If we have a bitvector with a pointer into it indicating where the first non-zero value is, then you can think of the pointer as splitting the bitvector into two parts - a high-order part containing the trees in use, and a low-order part consisting solely of zeros. For example, the bitvector 101011000 gets split as 101011||000, with the trees in use in the upper bits and unused trees in the lower bits. Of course, encoding these zero bits explicitly is a bit redundant; rather than encoding these zeros explicitly, we'll just keep track of how many of them there are. This means that we could encode the bitvector 101011000 as (101011, 3), for example. Notice that this second number can also be interpreted as the smallest tree currently in the heap, which is exactly what we set out to do. For notational purposes, I will write out these tuples using ω to mean "some bitstring" and n to mean "some number." For example, when talking about an encoding with a bitstring ending with 1, I might write (ω1, n). Here are a few examples of implicit Leonardo heaps that use these modified bitvectors to encode their sizes: In the first of these pictures, the trees have order 4, 2, and 1, which would yield a naive bitvector 10110. However, since we do not allow trailing zeros, it is encoded as (1011, 1). The second picture has a heap with trees of order 4 and 3, whose naive bitvector would be 11000, but is encoded as (11, 3) using our notation. Now, suppose that we have a bitvector keeping track of the existing tree sizes and suppose that we want to do an insertion. In order for this step to work, we need to be able to discern which of the three cases we're in. This, fortunately, is not particularly difficult. • Suppose that the last two trees in the heap have indices that differ by one. This means that the encoding of the tree structure must look like (ω011, n) for some ω and n, since trees of adjacent index can only appear at the end, and there's at most one pair. We can detect this case very easily by just testing the last two bits. If we find that this is the case, we can represent the merged encoding by rewriting it as (ω1, n + 2), since we merged the last two entries together and made the smallest tree two orders bigger. • Otherwise, if the last tree in the heap has order one, we can detect this because we'll have an encoding of the form (ω1, 1). We need to add a tree of order zero, which can be done easily by changing this representation to (ω11, 0). • Otherwise, we need to add a tree of order one. Given (ω1, n) for some n, we change this to be (ω100...01, 1) by adding enough zeros to the bitstring to correctly encode the data after saying that the smallest tree has size one. It shouldn't be too hard to see that each of these operations can be implemented in O(1) using simple shifts and arithmetic. After we've inserted the node into the Leonardo heap, we need to ensure that its two heap properties hold (that each heap is internally balanced and that the string of heaps is in sorted order). With an explicit size list, we could easily walk across the tops of the heaps since we could, in O(1), look up the size of each of the heaps. However, with our new bitvector approach, we can no longer claim that it takes O(1) to scan across the sizes of the heaps. In particular, suppose that our bitvector is (10101000000001, 1). Even though we've cached the size of the first tree, we can't necessarily find the next tree without repeatedly shifting the bitvector over until we encounter a 1. (Some machines might have special hardware to support this, but we can't necessarily assume this). This means that every time we try looking up a bit, it might take time O(lg n), and since there's O(lg n) bits, it seems initially like this might take O(lg^2 n) time per element, making the runtime O(n lg^2 n) in the worst case! This analysis, while correct, is not tight. It's true that any individual "shift to find the next tree" might take time O(lg n), but collectively all of the shifts we would make while inserting a single element into the heap can't take more than O(lg n) time because once we've shifted past an element, we never shift past it again. However, there's one more thing that we need to worry about. The whole point of developing this smoothsort algorithm was to get a sorting algorithm with best-case O(n) runtime. This means that when building up the Leonardo heap for a sorted list, the runtime must be O(n). If we have to do a potentially large number of shifts every time we try to check whether the heap is balanced, this guarantee may be compromised. Fortunately, though, we don't need to worry about this. We can always compare the root of the current heap to the root of the previous heap by skipping backwards a number of elements equal to L(k), where k is the order of the current heap. Since we cache this k, if the elements are already sorted, no shifting is necessary. The runtime guarantees are unchanged in this step. The runtime analysis for the dequeue step is significantly more involved. Every time that we dequeue from this new Leonardo heap, we need to be able to check the size of the rightmost tree (so we know what children to expose, if any) and then need to run up to two rebalance passes. The runtime analysis for the rebalances is identical to the insertion case - each rebalance takes worst-case O (lg n) time and O(1) best-case time - but the logic required to delete the root of the rightmost tree and expose its children is a bit more complicated and the runtime analysis more involved. There are three cases to consider during deletion: • Case 1: The root being deleted is of a tree of order at least two. Then our encoding looks like (ω1, n + 2) for some ω, n. Exposing the two heaps then converts this to (ω011, n). • Case 2: The root being deleted is of a tree of order zero. Then our encoding must look like (ω11, 0) and we convert it to (ω1, 1) to expose the tree of order 1 that must be right behind this one. • Case 3: The root being deleted is of a tree of order one. Then our encoding is of the form (ω100...001, 1). After deleting this one from the encoding, we need to shift past all of the zeros to get to the next tree root. This yields an encoding of the form (ω1, 1 + n), where n is the number of zeros we shifted past. In the original algorithm all of these steps ran in O(1). Now, the first two steps clearly run in O(1), but that last step takes a variable amount of time; in particular, it needs to perform one shift for each of the zeros before the next tree. Since there are O(lg n) bits in the representation, initially it might seem like this would mean that the best-case runtime for this algorithm is Ω(n lg n), but it turns out that this is not the case. If we use an amortized analysis, we can show that each operation runs in amortized O(1), giving a total runtime of O(n). This amortized analysis is a bit tricky because the structure of encodings changes so wildly during each of these steps, especially during deletion of a tree of order 1. To prove the time bound, we'll therefore adopt an alternate approach actually suggested by Dijkstra in his original paper. We know that there will be a total of n deletions from the heap, and each one of those deletions is essentially an insertion step run backwards. Thus if we can bound the total number of shifts done as we insert all n elements, we have a bound for the total number of shifts done during deletion. Let's define a potential function on the encoding of our heap sizes as Φ(ω, n) = n. We will count the number of one-position shifts performed on the encoding during insertion, even though in some cases during insertion we can group these shifts together into one bulk shift operation. The reason for not batching shifts together is that it's unclear whether we'll be able to perform those same shifts in reverse during the delete step (in fact, we can't, or we wouldn't need this analysis!) • Case one: The last two trees have adjacent order, and we transform the encoding from (ω011, n) to (ω1, n + 2). To do this transformation, we need to do two shifts to drop off the last two ones from the representation. Moreover, ΔΦ in this case is 2, and so the amortized cost of this step is four. • Case two: The last tree has order one, and we transform the encoding from (ω1, 1) to (ω11, 0). This requires one shift to make space for the new 1 bit, and ΔΦ = -1 for an amortized cost of zero. • Case three: The last two trees do not have adjacent order, nor does the last tree have order one. Then we transform (ω1, n) into (ω1000...001, 1), where there are n zeros inserted. This requires n shifts (n - 1 for the zeros, and 1 for the one bit) and we have ΔΦ = 1 - n for an amortized cost of 1 shift. From this we see that in all three cases, the amortized cost of an insertion is O(1), and by symmetry the amortized cost of a deletion is O(1) as well. This guarantees that our time bound is as it was before, at least in an amortized sense. At this point, we have just proven that if we're willing to use a crazy encoding scheme for our Leonardo heap size list, it's possible to get smoothsort working in O(1) memory. We have just developed an adaptive heapsort variant with O(1) memory usage! A Final Version of Smoothsort Having done all the research necessary to figure out exactly how Dijkstra's mysterious smoothsort algorithm worked, I've put together a smoothsort implementation of my own. It uses O(1) memory via the encoding scheme described above. It also contains a few minor optimizations based on Dijkstra's original paper. I will probably update this writeup to explain them when I recover from fully-detailing the O(1) memory version. :-) Concluding Remarks This minor quest of mine to understand smoothsort ended up being one of the most interesting research projects I've undertaken. I learned a fair amount about data structures and algorithms in the process (in particular, a much more general framework for heapsort than I had known before). To the best of my knowledge, no one has previously described the Leonardo heap structure detailed on this page explicitly as a heap data structure, though undoubtedly Dijkstra knew of them when putting together smoothsort. To this day I have no idea idea how Dijkstra came up with this algorithm. There are some many unintuitive insights necessary to put the whole thing together, and it has taken me the course of two months to completely and fully appreciate all the complexities of the implementation. In fact, my first analysis of the algorithm completely missed the point of the size list, and ended up not correctly using O(1) space! Moreover, in the course of writing all of this up, I've cemented my understanding of the transdichotomous machine model and of amortized analysis. I hope that you found this intro to smoothsort and Leonardo heaps interesting and accessible. I hope that this site increases the profile of this particular sort, since prior to reading up on it myself I had never encountered anyone who had even heard the name of the algorithm before. Ideally, this writeup will make it possible to pick up smoothsort without spending several days of effort doing so. Feel free to email me if you have any comments or questions!
{"url":"http://www.keithschwarz.com/smoothsort/","timestamp":"2014-04-18T18:10:48Z","content_type":null,"content_length":"57965","record_id":"<urn:uuid:b8286a53-8d2d-438b-be1e-66f65cce7f11>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Middleboro Math Tutor Find a Middleboro Math Tutor ...I am certified to teach grades 5 - 12. I have tutored students of all ages from elementary school students to adults who have decided to go back to school. My philosophy is to instill confidence which is a key to a student obtaining success. 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences. 11 Subjects: including algebra 1, algebra 2, Microsoft Excel, general computer ...I have 10 years experience working with D.D., LD, and other Special Needs students. I have 20 years' experience coaching tennis at the high school level. I presently coach at Canton HS. 15 Subjects: including geometry, SAT math, algebra 1, prealgebra ...Because I teach technology I also tutor in Microsoft Office, Google Apps, etc. I am available days, evenings and weekends for tutoring in the summer and on weekends during the school year. I am a Massachusetts licensed teacher in the elementary grades of 1 through 6. 14 Subjects: including prealgebra, reading, ESL/ESOL, grammar ...Since then, I have always been a math tutor. In a way, being a math tutor came naturally to me. What most of my clients say they like most about having me as a tutor is how I help them find their own sense of style when it comes to problem solving skills by showing them a few ways of how to solve them, and then letting them choose which method works best for them. 31 Subjects: including calculus, trigonometry, statistics, ACT Math Nearby Cities With Math Tutor Acushnet Math Tutors Braintree Math Tutors Bridgewater, MA Math Tutors Carver, MA Math Tutors Dartmouth Math Tutors Easton, MA Math Tutors Halifax, MA Math Tutors Lakeville, MA Math Tutors Marshfield, MA Math Tutors Middleborough, MA Math Tutors Norwood, MA Math Tutors Plympton Math Tutors Randolph, MA Math Tutors Somerset, MA Math Tutors Taunton, MA Math Tutors
{"url":"http://www.purplemath.com/middleboro_math_tutors.php","timestamp":"2014-04-20T08:40:18Z","content_type":null,"content_length":"23750","record_id":"<urn:uuid:8954c4ed-a672-42eb-84be-0d9440c3b488>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Particles go with the flow When dropping small particles into a turbulent fluid, we know from daily experience that the particles will be swept up in the swirling eddies and vortices of the fluid motion. But how will the particles be arranged in the flow, and does the addition of these particles smooth out the flow or make it more turbulent? These questions not only have industrial and technological implications, but are at the heart of our understanding of turbulence. It is also a puzzle that has resisted conventional fluid dynamics analysis. Now, in a paper published in Physical Review Letters, Tanaka and Eaton [1] of Stanford University have scrutinized a set of experimental measurements of particle motion in turbulent flows and find that a new dimensionless parameter, the particle momentum number $Pa$, can be used to assess whether particles will enhance or attenuate the turbulence. Understanding turbulence has been a longstanding challenge. The press release accompanying the 1982 Physics Nobel Prize awarded to Kenneth G. Wilson for his theory of critical phenomena [2] cited “fully developed turbulence” as a prime example of an important and yet unsolved problem in classical physics. Mathematically, turbulence is described by the Navier-Stokes equations, and in 2000, the Clay Mathematics Institute called unlocking the secrets of these equations one of seven “Millenium Problems” and offered a $1 million prize for their solution [3]. Turbulence is such a difficult problem because of its multiscale character: Although the large length scales (of the order of some maximal outer length scale $L$, for example, the size of the container) can be many orders of magnitude larger than the inner length scales of order $η$ (where the smoothing effects of viscosity become important), they are nevertheless strongly coupled to each other. The problem becomes worse the larger the Reynolds number Re (the ratio of inertial forces to viscous forces, for which high or low values characterize turbulent vs laminar flow, respectively) because $L/η∼Re3/4$. While the transition to turbulence occurs at a Reynolds number of several thousands (depending on the geometry), typical turbulent flow in the lab would have $Re∼106$ , and in the atmosphere, flows with $Re∼109$ can easily occur. The problem obviously becomes further complicated when the turbulent flow transports particles—a situation that is omnipresent in nature and technology. Examples include aerosols, rain drops, snow flakes or dust particles in the atmosphere (and especially in clouds), plankton in the ocean, or catalytic particles and bubbles in process technology. For these situations it is a priori not clear how the particles distribute in the turbulent flow, which consists of vortices of various sizes—clearly, the distribution will be inhomogeneous (Fig. 1). Particles heavier than the carrier fluid are thrown out of the vortices due to centrifugal forces, whereas light particles accumulate close to the vortex cores—an effect that everybody can easily observe when stirring a glass of bubbly water. Neither is it a priori clear whether the particles enhance or attenuate the turbulence (see Ref. [4] for a classical review article). Take heavy particles in turbulent water: On one hand, one could argue that the particles thrown into still water will sink and thus excite some flow and therefore the flow should also be enhanced when starting with a turbulent flow situation. On the other hand, putting heavy particles in motion and rotation in turbulent flow costs energy and therefore the turbulence intensity should decrease. One would hope to be able to predict the enhancement or attenuation of turbulence in the way common to fluid dynamics—by looking at the appropriate dimensionless numbers, such as the Reynolds number mentioned before. The classical dimensionless parameters for this problem would be the large scale Reynolds number $Re$ of the turbulent flow, the density ratio of the dispersed particles and the carrier fluid, the volume concentration of particles, and the Stokes number $St$, which is the ratio of the particle relaxation time and the intrinsic timescale of the turbulent flow. Tanaka and Eaton [1] have now mapped out 30 experimental data sets taken from literature with different combinations of $Re$ and $St$, arguing that the particle concentration and the density ratio should only lead to a quantitative effect with regard to turbulence enhancement or attenuation. However, they did not find any systematic trend in the turbulence modification in this $Re-St$ plane. Data sets showing either turbulent kinetic-energy attenuation or augmentation were seemingly randomly scattered over the $Re-St$ plane, suggesting that the Stokes number is not the correct control parameter for turbulence modification. This finding and their further dimensional analysis of the underlying Navier-Stokes equations with an extra forcing term due to the dispersed particles, led Tanaka and Eaton to introduce a new type of dimensionless parameter, which they call the particle momentum number $Pa$. One version of this parameter can be written as $Pa=Re2St(η/L)3$, which one would expect should scale as $∼Re-1/4St$, given the aforementioned expression for $L/η$. Now, in the $Re-Pa$ plane the 30 analyzed data sets do fall into different groups: For $Pa<103$ the turbulence is augmented, for $103<Pa<105$ it is attenuated, and for $Pa>105$ it is augmented again. This finding is surprising as (i) the dependence of the turbulent kinetic energy is nonmonotonic as a function of $Pa$, and (ii) one would assume that a simple rescaling of $St$ with $Re-1/4$ would not all of a sudden lead to a grouping of the data sets. Without any doubt this paper will trigger much further analysis. Presently, only data sets that show at least 5% attenuation or augmentation have been included in the study, in order to overcome experimental inaccuracies. I would expect the relative inaccuracies of the turbulent kinetic-energy modification to be smaller in numerical simulations of two-way-coupled point-particles in Navier-Stokes turbulence, such as those done in Refs. [5, 6, 7, 8], where the particles act back on the flow, supplying an additional driving mechanism. Although the Reynolds numbers achieved in such simulations are considerably smaller than those in the data analyzed by Tanaka and Eaton [1], they would allow calculation of three-dimensional plots of the turbulent kinetic-energy modification as a function of $Re$ and $Pa$, from which systematic trends could be derived. The gap between numerical simulations and experiment could be narrowed by extending Tanaka and Eaton’s analysis of experimental data towards smaller Reynolds numbers. Another extension of the parameter space would involve particles lighter than the carrier fluid, such as bubbles in turbulent flow, for which a wealth of numerical [9, 10] and experimental [11, 12] data on the energy modification exist. Tanaka and Eaton’s work thus gives hope that we can finally obtain order from the mist of turbulent data points on dispersed multiphase flow. I thank Enrico Calzavarini (Ecole Normale Supérieure, Lyon, France) for providing Fig. 1 and for many stimulating discussions over the years. Moreover, I would like to thank the Fundamenteel Onderzoek der Materie (FOM) for continuous support.
{"url":"http://physics.aps.org/articles/print/v1/18","timestamp":"2014-04-17T13:25:25Z","content_type":null,"content_length":"19705","record_id":"<urn:uuid:43de5d2b-2895-4d2c-a229-688b52fa3076>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple Linear Regression Simple Linear Regression (SLR) Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation such as Y = b + mX Suppose we believe that the value of y tends to increase or decrease in a linear manner as x increases. Then we could select a model relating y to x by drawing a line which is well fitted to a given data set. Such a deterministic model – one that does not allow for errors of prediction –might be adequate if all of the data points fell on the fitted line. However, you can see that this idealistic situation will not occur for the data of Table 11.1 and 11.2. No matter how you draw a line through the points in Figure 11.2 and Figure 11.3, at least some of points will deviate substantially from the fitted line. The solution to the proceeding problem is to construct a probabilistic model relating y to x- one that knowledge the random variation of the data points about a line. One type of probabilistic model, a simple linear regression model, makes assumption that the mean value of y for a given value of x graphs as straight line and that points deviate about this line of means by a random amount equal to e, i.e. y = A + B x + e, where A and B are unknown parameters of the deterministic (nonrandom ) portion of the model. If we suppose that the points deviate above or below the line of means and with expected value E(e) = 0 then the mean value of y is y = A + B x. Therefore, the mean value of y for a given value of x, represented by the symbol E(y) graphs as straight line with y-intercept A and slope B. All-In-One Multivariate Data Analysis (MVA) and Design of Experiments (DoE) Package with Simple Linear Regression Multiple Linear Regression (MLR) This procedure performs linear regression on the selected dataset. This fits a linear model of the form Y= b 0 + b 1 X 1 + b 2 X 2 + .... + b k X k + e where Y is the dependent variable (response) and X 1 , X 2 ,.. .,X k are the independent variables (predictors) and e is random error. b 0 , b 1 , b 2 , .... b k are known as the regression coefficients, which have to be estimated from the data. The multiple linear regression algorithm in XLMiner chooses regression coefficients so as to minimize the difference between predicted values and actual values. Linear regression is performed either to predict the response variable based on the predictor variables, or to study the relationship between the response variable and predictor variables. For example, using linear regression, the crime rate of a state can be explained as a function of other demographic factors like population, education, male to female ratio etc. Verticals where SLR classifications are applied A Snapshot of Industry Applications of The Unscrambler® Suite of Software Products The Unscrambler® Suite of Software Products (The Unscrambler® X, Unscrambler Predictor & Unscrambler Classifier and Unscrambler Optimizer) are industry leading standards used in a variety of industries. Select an industry from below to read more on how the software products are useful to each industry, with actual case studies included. Tailor-made for advanced multivariate statistical modeling, prediction, and classification, The Unscrambler® X Software’s wizard-driven design of experiments functionality completes this all-in-one, powerhouse analytical package, enabling users to delve deep into the value embedded within their data and derive models and results that add tremendous value in R&D efficiency, time and cost savings to a wide array of growing client installations. Food and Beverage Agriculture Oil and Gas Chemical Manufacturing Polymer and Paper Pharmaceutical and Biotechnology Submit a SLR Research Document CAMO encourages research scholars, professors, faculty members and research students to publish their research papers on www.camo.com Submit your SLR research papers, here Related Training CAMO Software provides professional training in multivariate data analysis, spectroscopy, sensometrics, simple linear regression, statistical regression analysis, Linear Regression, K-Means Clustering and chemometrics across United States & Canada, Europe, South America, Africa, Australia and Asia through our panel of chemometric experts, spectroscopy professionals, sensometrics instructors and Multivariate Data Analysis Trainers.
{"url":"http://www.camo.com/rt/Resources/simple_linear_regression.html","timestamp":"2014-04-18T15:39:34Z","content_type":null,"content_length":"30967","record_id":"<urn:uuid:b3b47719-abdf-49bc-8cb4-fe36645441ff>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Random Optimization Questions Posts: 1,140 Joined: 2005.07 What Cochrane is saying is NSDictionary won't come up with any problems because, though hash values may be the same, it still sees if the objects themselves are the same. If they aren't, then it just moves on to the next possible place to insert it into the hash table. For the big-O notation, technically you could say that everything is Ω(1) and O(n^n^n^n^n^n...), but the values I'm giving are the tightest bounds. In the case of the hash table, it's actually O(n^ 3) to build the table. This is because of the fact that when inserting a single item, it might need to rebuild the entire table if it grows to a point that the current table size is suboptimal, so inserting that single item is O(n^2). (since you're re-hashing each of the n items, each of which can possibly try every position in the hash table) Since that's the worse-case time for each insertion, and you're inserting n items, it's O(n^3) to build the table. To find the total complexity of the algorithm, since you build the table and afterwards search for the vertices, you add the complexities together. In this case it's O(n^3) + O(n^2), which is O(n^3 + n^2). Since n^3 > n^2, when n gets very large the n^3 term overshadows the n^2 term, so it's O(n^3) total. Like I said, though: this is absolute worse case, and on average it will be a lot lower, otherwise the brute-force way of nested for loops would be a faster choice. Whether the hash table or tree method is a better choice is a much harder question. For the tree you can say with certainty that the algorithm is Θ(n*lg(n)) and it will always lay in that realm. The hash table will vary based on the input, though, since the lower and upper bounds aren't the same... Posts: 3,570 Joined: 2003.06 Awesome! It looks like I get it! Thanks! I see where you got the O(n^3) now -- for some reason, it didn't quite register that re-hashing is also a worst-case scenario that must be considered, even though you specifically said that, sorry. I get the NSDictionary thing now too. It doesn't matter how it hashes. For some reason, I was thinking it would throw away my original NSString key and I'd lose the effect, but that wouldn't make sense for the case when it needs to do a finer-level discrimination during a hash collision. Two identical vertices will still hash to the same value regardless. I couldn't quite get the reason why if the objects are equal they will hash to the same value, but that doesn't need to be true in reverse (two different objects having the same hash), but I get it now. Even if that hash value is already used it'll find the right one regardless by comparing the objects (keys) -- just like the man said. Cool! I realize I've been a bit of a bother about this today, but I really can't thank you guys enough for helping me out here. Posts: 1,140 Joined: 2005.07 I was looking for a distraction from studying, so don't worry. Possibly Related Threads... Thread: Author Replies: Views: Last Post Obj-C optimization woes NitroPye 10 4,187 Apr 26, 2005 10:20 AM Last Post: NitroPye
{"url":"http://www.idevgames.com/forums/thread-3246-page-3.html","timestamp":"2014-04-19T17:27:05Z","content_type":null,"content_length":"21849","record_id":"<urn:uuid:2c366866-44d8-4fb7-be5d-5e2d3c94d2b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply Topic review (newest first) 2013-10-05 09:42:24 Hi mcassidy; Welcome to the forum. I too have always loved mathematics, it just never loved me. 2013-10-05 08:16:29 Each of the elements in each set are different from one another. For example, A1 is different than A2. So also, the elements in set A are different than those in set B, e.g. A1 is different from B3. I am comfortable with the notion of n things taken r at a time, but I frankly never ran into a situation with a constraint such as - given that one may only choose 1 element from set A, etc. The answer you propose, 126, evokes a familiar but long neglected memory. I very much appreciate your counsel. I have managed to maintain a love of mathematics, despite it remaining a foreign and ineluctable domain. Thank you! 2013-10-05 05:15:51 The elements in these sets, are they the same or are they different? It is usual to speak of set elements as different but there are multisets where elements are the same. If they are all different there is 6 x 7 x 3 = 126 ways If they are all the same in each set then there is only 1 way. 2013-10-05 05:03:41 I am trying to define the number of unique combinations where: I have three mutually exclusive sets (A,B,C), each of which has a varying number of elements, viz.(A=6; B= 7; and C = 3), and the combinations must have three elements, with one and only one element from each of the three sets. I don't know why I am having difficulties with this, but, alas, I am. Any help would be very much 2007-01-17 03:26:13 If you seat one person at every table, it will also work. 2007-01-17 02:23:47 pi man wrote: "Due to popular demand, seating is unlimited and must be reserved in advance." Hee hee... I can't laugh at you too much though, that was a very clever method you thought up. It's actually quite flexible as well, because although it only works for prime numbers, it works for ALL prime numbers and when the amount of tables is in the tens and twenties then primes are quite common. So, if 10 more people suddenly wanted to come to the meeting, then you'd just grab two more tables and you'd be all set. In fact, if you were allowed to vary the amount of people on the tables as well, then it work work even better. Let's say you had 91 people who wanted to attend. You'd get 13 tables and allocate 7 people to each of them and everyone would be happy. So, yes. Very well done. pi man 2007-01-17 01:51:26 Yes, 17 being prime is a big part of it. Because it is prime, people in positions 2-5 will never sit at the same table twice (until you have more than 17 sessions). That's critical because those in position 1 don't move. So Jim, you can either figure out how to handle eighteen tables or limit it to 17 and turn it into a marketing ploy - "Due to popular demand, seating is unlimited and must be reserved in advance. First come, first served. Get your seat TODAY!" 2007-01-16 17:34:16 Sounds very workable. Position 5 will be at table 17 after 4 moves. Well done, pi man! Using your idea, this is how 8 tables would go for 10 moves (not that you would have 10 courses!): Note: this is only looking at where the people at table 1 are seated. In other words, they are all at table 1 to start with (1,1,1,..), then the first one is still at table 1, the second at table 2, etc, or in short hand (1,2,3,...), etc. For people at table 2, just add 1 to every number, so you would have (2,2,2,...) then (2,3,4,...), etc (Once you get all that sorted out, you could add one minor enhancement: add 1 table to every move, so every one gets to move. You could even label the tables randomly around the room, so that the person who moves 1 table gets to move around the room a lot. You could also add a position rotation to everybody, too.) Interestingly, with only 16 tables, it breaks down pretty quick: So, 17 being prime is probably a big help. 18 tables works like this: (Don't forget the "Seating Arrangements Courtesy of Math Is Fun Forum" ... just kidding!) pi man 2007-01-16 16:33:19 I'm not positive this would work but it might. Each seat is assigned a table letter (A-Q) and table position (1-5). So there are five seats at table A: A1, A2, A3, A4, A5. Assign every one a seat for the first session. For each subsequent session, each person willl always be seated in the same position at a table as the position they were for the first session. Different table probably, but same position. People in position 1 at each table stay where they're at. So the person in seat A1 will be there for every session. The people in position 2 at each table go to the next lettered table and sit in the same position as they were at their first table. The person in A2 would go to B2, then C2 and then D2. The people in position 3 would skip over a table and go to the next one: A3, C3, E3, F3 The people in position 4 would skip over 2 tables and go to the 3rd table: A4, D4, G4, J4 The people in position 5 would skip over 3 table and go to the 4th table: A5, E5, I5, M5 Consider the tables to be in a circular pattern. If you trying to move past table Q, start over at table A. So the person starting at N5 would go to A5, E5 and then I5. You need to map this completely out before you implement to make sure it works. I got as far as scheduling half of the people and didn't run into any problems. 2007-01-16 15:03:05 This is a real life problem. I am organizing a networking meeting 85 people will attend We will seat them at 17 tables with 5 people at each table At the first seating (while the soup is served) everyone will introduce themselves and present the group with a question or problem they have prepared After the soup and discussion, everyone will be sent to new tables where salad will be served and each person will again introduce themselves and persent their question After salad, everyone will again be sent to new tables where the entre will be served and again they will present their question Finally, everyone will go to a new table for desert and their introduction and question We want to make sure that no one sits with anyone they have been seated with during the earlier seatings Is there a formula or system I can use to assign seating so that everyone gets to meet 16 different people (4 seatings with 4 new people each) with no repeats? Trial and error is just giving me a headache! Can I use the same formula or system if 90 people show up and we put them at 18 tables of 5? What if we change our minds and go with tables of 6? Thanks for any help I can get
{"url":"http://www.mathisfunforum.com/post.php?tid=5738&qid=286436","timestamp":"2014-04-17T12:38:47Z","content_type":null,"content_length":"25591","record_id":"<urn:uuid:7b2edb98-a93d-4b10-8488-1021b954b35a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Acceleration Subsystem for AIX Accelerated mathematical libraries for the IBM AIX platform IBM® Mathematical Acceleration Subsystem for AIX is a collection of mathematical function libraries optimized for the IBM AIX® platform. The Mathematical Acceleration Subsystem software libraries offer improved performance over the standard mathematical library routines, are thread-safe, and support compilations in C, C++ and Fortran applications. The fully supported Mathematical Acceleration Subsystem libraries are shipped with the XL C, XL C/C++ and XL Fortran compiler products. Mathematical Acceleration Subsystem for AIX software libraries: • Include accelerated sets of frequently-used mathematical functions for scalar, vector and single-instruction, multiple-data (SIMD) libraries. • Allow code to run in multithreaded environments in a thread-safe manner. • Support both 32-bit and 64-bit compilations. • Provide tuning for optimum performance on specific IBM POWER® architectures. • Assist with portability of vector code to allow application development on non-IBM systems where the Mathematical Acceleration Subsystem libraries are not available. Mathematical Acceleration Subsystem for AIX Accelerated mathematical libraries for the IBM AIX platform IBM Software Subscription and Support is included in the product price for the first year Not available to purchase online. Other ways to purchase or learn more. Contact IBM • eller ring: 08-477 44 31 Prioritet Code: 109HH03W Not in Sverige? Enkla sätt att handla eller läsa mer. Vill du ha hjälp? Enkla sätt att handla eller läsa mer. • eller ring: 08-477 44 3108-477 44 31 Prioritet Code: 109HH03W
{"url":"http://www-03.ibm.com/software/products/sv/massaix","timestamp":"2014-04-18T13:55:43Z","content_type":null,"content_length":"34596","record_id":"<urn:uuid:034a7835-b87a-451c-8e5e-6f50df1add6c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Greystone Park, NJ Math Tutor Find a Greystone Park, NJ Math Tutor ...I am well versed in Algebra and Calculus. I have taken Calculus 1, Calculus 2, Multivariable calculus, Differential Equations and Discrete Mathematics. My computer science coursework has focused on object oriented programming (Java) and databases (SQL). I have taken various history classes as well and one of my hobbies is teaching myself about history and geography. 11 Subjects: including algebra 1, algebra 2, American history, calculus ...My education is rooted deeply in Physics, as I most recently received a Master's in Physics from the University of Connecticut. I taught introductory physics courses at UConn and enjoyed seeing my students grow both in academics and critical thinking, validated through both testing and laborator... 9 Subjects: including algebra 1, algebra 2, calculus, physics ...I would like to coach one day as well. I have been doing calisthenics and weight lifting for over 20 years. I am familiar with free weights, resistance machines, as well as cardio workouts. 13 Subjects: including algebra 1, algebra 2, prealgebra, reading ...I was also designated as an AP Scholar with Distinction by the College Board for high scores on 7 examinations including English Language, US History, Spanish Language, Statistics, Microeconomics, Biology, and Environmental Science. Furthermore, I previously was affiliated with an elite national... 8 Subjects: including SAT math, ACT Math, SAT reading, ACT English I am an experienced tutor with three years of experience with multiple agencies. I have experience teaching organic chemistry, inorganic chemistry, physics, biology, physiology & anatomy, and all math subjects up to calculus II. High yield sessions are offered for last minute exam prep. 34 Subjects: including precalculus, organic chemistry, MCAT, elementary (k-6th)
{"url":"http://www.purplemath.com/Greystone_Park_NJ_Math_tutors.php","timestamp":"2014-04-18T21:19:31Z","content_type":null,"content_length":"24244","record_id":"<urn:uuid:517dd637-00d6-4840-8dde-552146a33dc7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Unknown Software Exception Unknown Software Exception Hey guys I'm working on a Pythagorean Triple program. In my program everything worked and I found all the possible Pythagorean triples below the value 500. Unfortunately I had repeats due to the fact that the legs of the triangle could just "swap" value. I tried to correct the problem by using a three dimensional array called "used." Because I have a maximum of 500 on each side of the triangle I had to use 500 three times in my boolean value "used." I think this is just too large a value for my compiler but maybe I did something wrong. Check it out. void calculate() int side1 = 1; int side2 = 1; int hypo = 1; int triplecounter = 0; bool used[500][500][500] = {false}; for (side1 = 1; side1 <= 500; side1++) { for (side2 = 1; side2 <= 500; side2++) { for (hypo = 1; hypo <= 500; hypo++) { if (used[side1][side2][hypo] == false) { if (used[side2][side1][hypo] == false) { if ((hypo * hypo) == (side1 * side1) + (side2 * side2)) { used[side1][side2][hypo] = true; used[side2][side1][hypo] = true; cout << "A Pythagorean triple is " << hypo << " " << side1 << " " << side2 << "\n\n"; triplecounter = (triplecounter + 1); cout << triplecounter << "\n\n"; For one thing, 500^3/1024/1024 = 238 MEGABYTES OF RAM. Because you want any of the three sides to have a maximum size of 500, then you really only need to limit the size of the hypotenuse, since it is always going to be greater than either of the two other sides. If you can generate all combinations with the code you already have and you don't care which side is the hypotenuse and you don't care which non-hypotenuse side is which, for example 3-4-5 is the same as 4-3-5 is the same as 5-3-4, etc., then you could generate all three sides and place the smaller of the three sides in memberA, the next smallest side in membeB, and the hypotenuse in memberC of a struct and store the struct in a container. Once you have generated a triple and sorted the sizes, then search the container for any triple already found for the value of one of the members of the current triple. If any triple already found contains any of the members values in the current triple as the same member value, then don't add the current triple to the container. Since the hypotenuse size will always be in memberC and there can only be one set of non-hypotenuse sides that will generate the give hypotenuse and still be a Pythagorean triple, then that's the member I'd compare, but it really shouldn't matter if the sides are sorted by length, because I don't believe there is any Pythagorean triple that has any of the three sides equal to any of the three sides of another Pythagorean triple unless the hypotenuse of one triple is used as the non-hypotenuse side of another triple. You are probably getting a stack overflow because of the size of the "used" array which (assuming 1 byte bools) looks like it weighs in at about 125,000,000 bytes (approx 119 MB... I guess Orbode is thinking 2 byte bools?). That is way too much to allocate on the stack. So you can do a couple things: 1. You could do dynamic memory allocation of the used array: bool ***used; used = new bool**[500]; for( int i = 0; i < 500; ++i ) used[i ] = new bool*[500]; for( int j = 0; j < 500; ++j ) used[i ][j] = new bool[500]; for( int k = 0; k < 500; ++k ) used[i ][j][k] = false; You could allocate a contiguous chunk instead of all the smaller allocations but you are still wasting a lot of space since there are only 386 triples that I found using your code and the above method. Plus you then need to remember to delete all that memory. 2. Create a triple/triangle object (method hinted at by elad) and overload the == operator. Push your triples into a container, checking first to make sure that a match does not already exist in the container. This would only require you to store the 386 objects and that's all. Since you basically already have a "working" program, I'll show you what I was thinking of: class triple int side1, side2, hypotenuse; triple(int s1 = 1, int s2 = 1, int hyp = 1) : side1(s1), bool operator()() const return side1*side1+side2*side2==hypotenuse*hypotenuse; friend bool operator==(const triple& lhs,const triple& rhs); friend ostream& operator<<(ostream& os,const triple& rhs); bool operator==(const triple& lhs,const triple& rhs) return ( lhs.hypotenuse == rhs.hypotenuse && ( ((lhs.side1==rhs.side1) && (lhs.side2==rhs.side2)) || ((lhs.side1==rhs.side2) && (lhs.side2==rhs.side1)) ostream& operator<<(ostream& os,const triple& rhs) return os << hyp << ' ' << side1 << ' ' << side2; vector<triple> triples; for( int i = 1; i <= 500; ++i ) for( int j = 1 j <= 500; ++j ) for( int k = 1; k <= 500; ++k ) triple trip(i,j,k); if( trip() && (find(triples.begin(),triples.end(),trip) == triples.end()) ) cout << "A Pythagorean triple is " << trip << endl; cout << triples.size() << endl << endl; 386 triple objects only takes up 4636 bytes (assuming 4 byte ints) then you also have the vectors overhead which is constant regardless of the number of elements it stores so we wind up with a big space savings doing it this way. BTW, there was one other problem with your code I saw (if you had been able to get it to work as posted): bool used[500][500][500] = {false}; for (side1 = 1; side1 <= 500; side1++) { for (side2 = 1; side2 <= 500; side2++) { for (hypo = 1; hypo <= 500; hypo++) { if (used[side1][side2][hypo] == false) { if (used[side2][side1][hypo] == false) { if ((hypo * hypo) == (side1 * side1) + (side2 * side2)) { used[side1][side2][hypo] = true; Tell me, is used[500][500][500] a valid array element? You seemed to forget that arrays are 0-based in C/C++ so the indicies can only go up to 499 and still be valid. I think you meant to put used[side1-1][side2-1][hypo-1] instead. Originally Posted by elad Since the hypotenuse size will always be in memberC and there can only be one set of non-hypotenuse sides that will generate the give hypotenuse and still be a Pythagorean triple That's not true, take these triplets for example 7 24 25 and 15 20 25 both have hypotenuse 25 with different set of non-hypotenuse legs Dave Evans Originally Posted by Denied88 Hey guys I'm working on a Pythagorean Triple program. In my program everything worked and I found all the possible Pythagorean triples below the value 500. Instead of generating all possible triplets and eliminating equivalent ones, why not just generate unique triplets in the first place: Suppose a is the shortest side, b is the next side, (and c is the hypotenuse): Let a go from 1 to 498 (obviously, could be less than 498, but what the heck) Let b go from a to 499 (obviously, could be less than 499, but what the heck) Let c go from b+1 to 500 Calculate a-squared, b-squared and c-squared and do the comparison.
{"url":"http://cboard.cprogramming.com/cplusplus-programming/66214-unknown-software-exception-printable-thread.html","timestamp":"2014-04-19T08:07:01Z","content_type":null,"content_length":"20195","record_id":"<urn:uuid:1e717212-19d2-4216-92c0-29bc31ed9a2d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigenvalues and Eigenvectors Symmetric Matrices For a real symmetric matrix all the eigenvalues are real. If only the dominant eigenvalue is wanted, then the Rayleigh method maybe used or the Rayleigh quotient method maybe used. The Rayleigh methods may fail however if the dominant eigenvalue is not unique. If all the eigenvalues are wanted but not the eigenvectors then Given's bisection algorithm can be used after first tridiagonalizing the symmetric matrix. If all the eigenvalues and corresponding eigenvectors are wanted then Jacobi's cyclic method can be used or if the matrix is tridiagonal then the QL algorithm as presented below may be applied. Function List • int Eigenvalue_Rayleigh_Method( double *A, int n, double *eigenvalue, double x[ ], double x0[ ], double tolerance, int max_tries ) Rayleigh's method is a variant of the power method for estimating the dominant eigenvalue of a symmetric matrix. The process may not converge if the dominant eigenvalue is not unique. Given the n×n real symmetric matrix A and an initial estimate of the eigenvector, x0, the method then normalizes x0, calculates x = Ax0 and sets µ = x^Tx0. The process is then repeated after setting x0 = x until the relative absolute change in µ is less than the preassigned tolerance at which time the process terminates successfully or until the number of attempts exceeds max_tries at which time the process terminates unsuccessfully. The function Eigenvalue_Rayleigh_Method is used if the matrix A is stored as a full symmetric matrix, the function Eigenvalue_Rayleigh_Method_lt is used if the matrix A is stored in lower triangular form and the function Eigenvalue_Rayleigh_Method_ut is used if the matrix A is stored in upper triangular form. The function returns the number of iterations performed if successful and -1 if more than max_tries iterations are necessary, -2 if the initial vector or subsequent vector is 0 and -3 if the estimate for the dominant eigenvalue is 0. • int Eigenvalue_Rayleigh_Method_lt( double *A, int n, double *eigenvalue, double x[ ], double x0[ ], double tolerance, int max_tries ) This routine is the same as Eigenvalue_Rayleigh_Method( ), described above, with the exception that the matrix A is stored in lower triangular form. • int Eigenvalue_Rayleigh_Method_ut( double *A, int n, double *eigenvalue, double x[ ], double x0[ ], double tolerance, int max_tries ) This routine is the same as Eigenvalue_Rayleigh_Method( ), described above, with the exception that the matrix A is stored in upper triangular form. • int Rayleigh_Quotient_Method( double *A, int n, double *eigenvalue, double x0[ ], double tolerance, int max_tries ) Rayleigh's quotient method is a variant of the inverse power method for estimating the dominant eigenvalue of a symmetric matrix. The process may not converge if the dominant eigenvalue is not unique. Given the n × n real symmetric matrix A and an initial estimate of the eigenvector, x[0], the method then normalizes x[0], calculates µ = x[0]^TAx[0] and solves ( A - µI ) x = x[0] for x. The process is then repeated after setting x[0] = x until the relative absolute change in µ is less than the preassigned tolerance at which time the process terminates successfully or until the number of attempts exceeds max_tries at which time the process terminates unsuccessfully. This routine has better convergence properties than that of the Rayleigh method and is usually used after the Rayleigh method has obtained a close estimate of the dominant eigenvalue and corresponding eigenvector. The function returns the number of iterations performed if successful and -1 if more than max_tries iterations are necessary, -2 if the initial vector or subsequent vector is 0, -3 if the estimate for the dominant eigenvalue is 0 and -4 if there is not enough memory available for working storage. • int Givens_Bisection_Method( double diagonal[ ], double off_diagonal[ ], double eigenvalues[ ], double relative_tolerance, int n ) Given the n×n real tridiagonal symmetric matrix A with diagonal, diagonal, and off-diagonals, off_diagonal, the routine Givens_Bisection_Method uses Gerschgorin's theorem to obtain bounds for the eigenvalues and then the bisection method to estimate the eigenvalues within the user specified relative tolerance, relative_tolerance. The off-diagonal elements begin at off_diagonal[1], off_diagonal[0] is set to 0. The results are returned in the array eigenvalues. The function returns 0 if successful and -1 if there is not enough memory for working storage. • int QL_Tridiagonal_Symmetric_Matrix( double diagonal[ ], double off_diagonal[ ], double *U, int n, int max_iteration_count ) Given the n×n real tridiagonal symmetric matrix A with diagonal, diagonal, and off-diagonals, off_diagonal, the routine QL_Tridiagonal_Symmetric_Matrix uses QL algorithm with implicit shifts of the origin to estimate the eigenvalues and eigenvectors. The off-diagonal elements begin at off_diagonal[1], off_diagonal[0] is set to 0. The eigenvalues are returned in the array diagonal. The n×n matrix U should be set to the transformation matrix if the original matrix was tridiagonalized or set to the identity matrix if the original matrix is tridiagonal. Upon return, U contains the eigenvectors of the original matrix, the i^th column being the eigenvector corresponding the the i^th eigenvalue, diagonal[i]. The function returns a 0 if successful and a -1 if the process failed to converge within max_iteration_count iterations. • void Jacobi_Cyclic_Method( double eigenvalues[ ], double *eigenvectors, double *A, int n ) Given the n × n real symmetric matrix A, the routine Jacobi_Cyclic_Method calculates the eigenvalues and eigenvectors of A by successively sweeping through the matrix A annihilating off-diagonal non-zero elements by a rotation of the row and column in which the non-zero element occurs. The input matrix A is modified during the process. The eigenvalues are returned in the array eigenvalues which should be dimensioned at least n in the calling routine. The eigenvectors are returned in the n×n matrix eigenvectors, the i^th column being the eigenvector corresponding the the i^th eigenvalue, eigenvalue[i]. C Source Code
{"url":"http://www.mymathlib.com/matrices/eigen/symmetric.html","timestamp":"2014-04-18T02:58:42Z","content_type":null,"content_length":"11725","record_id":"<urn:uuid:8c71984d-c8f3-4a08-b349-f85bbe420e2f>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
] Prove $\int_{C} f dx + g dy + h dz$ Thank you! Last edited by SoHokQ; January 6th 2011 at 04:18 PM. We have: $\displaystyle\int_{C} f dx + g dy + h dz=\displaystyle\int_{C} abla F(\;\vec{r}\<img src=$\cdot d\;\vec{r}\;,\quad (\;\vec{r}=(x.y,z)\ Now, apply the Gradient theorem Fernando Revilla Please don't erase a question after it has been resolved! Other people can learn from the problems.
{"url":"http://mathhelpforum.com/calculus/167568-prove-line-integral.html","timestamp":"2014-04-17T14:10:03Z","content_type":null,"content_length":"36761","record_id":"<urn:uuid:29c3abe2-009a-4f7d-9617-c876d7d5ef63>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Mathematical Logic, 3rd Edition - In Proc. Joint International Conference and Symposium on Logic Programming , 1998 "... Defeasible logic is an important logic-programming based nonmonotonic reasoning formalism which has an efficient implementation. It makes use of facts, strict rules, defeasible rules, defeaters, and a superiority relation. Representation results are important because they can help the assimilation o ..." Cited by 21 (13 self) Add to MetaCart Defeasible logic is an important logic-programming based nonmonotonic reasoning formalism which has an efficient implementation. It makes use of facts, strict rules, defeasible rules, defeaters, and a superiority relation. Representation results are important because they can help the assimilation of a concept by confining attention to its critical aspects. In this paper we derive some representation results for defeasible logic. In particular we show that the superiority relation does not add to the expressive power of the logic, and can be simulated by other ingredients in a modular way. Also, facts can be simulated by strict rules. Finally we show that we cannot simplify the logic any further in a modular way: Strict rules, defeasible rules, and defeaters form a minimal set of independent ingredients in the logic. 1 Introduction Normal forms play an important role in computer science. Examples of areas where normal forms have proved fruitful include logic [10], where normal forms o... - Journal of Functional and Logic Programming , 1998 "... This paper proposes a number of models for integrating stochastic constraint solvers into constraint logic programming systems in order to solve constraint satisfaction problems efficiently. Stochastic solvers can solve hard constraint satisfaction problems very efficiently, and constraint logic ..." Cited by 5 (1 self) Add to MetaCart This paper proposes a number of models for integrating stochastic constraint solvers into constraint logic programming systems in order to solve constraint satisfaction problems efficiently. Stochastic solvers can solve hard constraint satisfaction problems very efficiently, and constraint logic programming allows heuristics and problem breakdown to be encoded in the same language as the constraints. Hence their combination is attractive. Unfortunately there is a mismatch in the kind of information a stochastic solver provides, and that which a constraint logic programming system requires. We study the semantic properties of the various models of constraint logic programming systems that make use of stochastic solvers, and give soundness and completeness results for their use. We describe an example system we have implemented using a modified neural network simulator, GENET, as a constraint solver. We briefly compare the efficiency of these models against the propagation base... - IN: PROCEEDINGS OF THE AUSTRALASIAN CONFERENCE ON COMPUTER SCIENCE , 2001 "... We present a logic of modeling the dynamics of beliefs in cryptographic protocols. Differently from previous proposals, our logic is situation based in which a protocol is viewed as a finite sequence of actions performed by various principals at different situations, and each action is a primitive t ..." Cited by 4 (0 self) Add to MetaCart We present a logic of modeling the dynamics of beliefs in cryptographic protocols. Differently from previous proposals, our logic is situation based in which a protocol is viewed as a finite sequence of actions performed by various principals at different situations, and each action is a primitive term in the language. Therefore, it becomes possible to model the dynamic change of each principal's beliefs at each step of the protocol within the logic system. Our logic has a precise semantics and is sound with respect to the underlying axiomatic system. , 1998 "... Declarative meta-programming is vital, since it is the most promising means by which programs can be made to reason about other programs. A metaprogram is a program that takes another program, called the object program, as data. A declarative programming language is a programming language based on a ..." Cited by 2 (0 self) Add to MetaCart Declarative meta-programming is vital, since it is the most promising means by which programs can be made to reason about other programs. A metaprogram is a program that takes another program, called the object program, as data. A declarative programming language is a programming language based on a logic that has a model theory. A meta-program operates on a representation of an object... "... Causal relations of various kinds are a pervasive feature of human language and theorising about the world. Despite this, the specification of a satisfactory general analysis of causal relations has long proved difficult. The research described in this thesis is an attempt to provide a formal logica ..." Add to MetaCart Causal relations of various kinds are a pervasive feature of human language and theorising about the world. Despite this, the specification of a satisfactory general analysis of causal relations has long proved difficult. The research described in this thesis is an attempt to provide a formal logical theory of causal relations, in a broad sense of ‘causal’, which includes various atemporal explanatory and functional relations, in addition to causation between temporally ordered events; and which involves not only necessity associated with physical laws, but also necessity associated with laws and constraints of various other types. The key idea which motivates the analysis is that many types of causal relation have in common certain underlying abstract properties, regardless of the nature of the participants involved. These properties can be expressed via an axiomatisation, initially viewed as applicable to ‘event causation’, but subsequently re-interpreted in a more abstract and general way. Given the wide variety of models for the axioms, there are not likely to be powerful general methods for computing the causal relationships defined: instead it is likely to
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1070491","timestamp":"2014-04-17T07:28:17Z","content_type":null,"content_length":"23875","record_id":"<urn:uuid:611ef736-c16e-4dd0-8563-b412768d6b35>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Time points for Cubic GMM Jonathan Codell posted on Thursday, February 07, 2013 - 2:24 pm I have 12 data points (T1-T12) and would like to model for potential cubic growth in GMM. I managed to fit a decent linear and quadratic growth models but am unclear on to determine the appropriate times in the syntax for a cubic model: i s | T1@0 T2@1 T3@2 T4@3.........T12@11 i s q | T1@0 T2@.01 T3@.04 T4@.09......T12@1.21 What would the appropriate syntax time be for a cubic model? i s q cub | T1@? T2@?..........T12? Thank you for your assistance. Linda K. Muthen posted on Thursday, February 07, 2013 - 2:31 pm Yes, just add a fourth growth factor. Jonathan Codell posted on Thursday, February 07, 2013 - 2:46 pm Thanks for the quick response. Can you clarify exactly what you mean though. As shown in my initial post I already have four growth factors listed (i s q cub) before the "|" symbol, but I am unclear on what I should list after the "@" symbol for each of the 12 time points. Thank you for your explanation. Linda K. Muthen posted on Thursday, February 07, 2013 - 3:48 pm You list the linear time scores after the | symbol. I just noticed you did this incorrectly for the quadratic model. The time scores are always the linear and Mplus computes the others. Keep the on a small scale like 0 .1 .2 etc. Jonathan Codell posted on Thursday, February 07, 2013 - 11:33 pm Thank you for the clarification. It looks like I was making this more complicated than needed. I really appreciate your help. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=14&page=11807","timestamp":"2014-04-20T05:45:45Z","content_type":null,"content_length":"21531","record_id":"<urn:uuid:dc9ae6f2-6fd9-4a15-8a4e-cf8e3003d581>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
Reviewing for intro PDEs Hey everyone, I'm taking intro ODEs right now, and am taking intro PDEs next semester. I would like to know what i should review from calc III for this course. I took calc III over the summer at a community college and didn't learn very much, if i'm being honest with myself. I think I am good as far as anything from calc I or II goes, and from this intro course in DEs as well. The title of this post could more suitably be "What from calc III is used in an intro PDEs course?" (and PDEs in the more general sense -- I plan on taking much more applied math in the coming three years). So, what is used in partial differential equations form the typical calculus III course? Thanks in advance
{"url":"http://www.physicsforums.com/showthread.php?t=539778","timestamp":"2014-04-18T08:30:55Z","content_type":null,"content_length":"22478","record_id":"<urn:uuid:7d95a863-e844-441a-b420-5738c3c564d3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
A Computational Implementation of Leibniz's Paper of 1690 In this project, I used PROVER9 to prove the theorems described by Leibniz in his unpublished paper of 1690. This paper is Leibniz's most mature development of his "calculus of concepts". Actually, Leibniz presents more of an algebra than a calculus; his system provides us with one of the earliest axiom sets for semi-lattices (although he neglected to include the axiom of associativity for the sum operation). Leibniz's paper appears in translation in the collection Leibniz: Logical Papers, edited by G.H.R. Parksinson, and published by Clarendon Press (Oxford) in 1966. A fascimile of the paper can be downloaded here (WARNING: 20 MB PDF file). The Tools You Will Need NOTE: The files below all work correctly with the June 2007 version of PROVER9. They may also work with later versions, but they were all checked with the June2007 version. You can download a gzipped tarball of this version here and compile it yourself. Assuming that you have installed PROVER9, you can download the input and output files linked in below and process them. The standard command-line syntax for running PROVER9 is: prover9 < theorem.in or, if you want to append PROVER9's output to a file: prover9 < theorem.in > theorem.out If you need further help in running PROVER9, you can consult the manual that goes with the June 2007 version of PROVER9, which can be accessed here. Similarly, one can use MACE4 to find models of the premises used in the proof of each theorem. This provides a finite verification of the consistency of the axioms and definitions. The standard command-line syntax for running MACE4 is (unless otherwise noted): mace4 -c -N 8 -p 1 < model.in or, if you want to append MACE4's output to a file, mace4 -c -N 8 -p 1 < model.in > model.out For your convenience, I've also included links to the PROVER9 and MACE4 output files for each theorem, for the runs we have executed. You can compare your runs to ours. The Theorems in Leibniz's Paper of 1690 In this section, I use PROVER9 to prove the theorems in Leibniz's unpublished paper of 1690 mentioned above. A list of Leibniz's theorems, as formulated in the Parkinson translation, is linked in here (in PDF). Note: (1) Leibniz failed to include the axiom of Associativity for the summation operator ⊕ on concepts. (2) Since the existential quantifier was unavailable to him, Leibniz doesn't use the quantifier in the formulation of any of these theorems. Prior to representing these theorems in PROVER9, I translated them theorems into modern notation. You can download the list of theorems in modern notation here (in PDF). Notes: In these translations, (1) I included the axiom of Associativity for ⊕ in the Basis of the system, and (2) I used the existential quantifier in the formulation of the relation of inclusion ( In the input files below, you will see that I used Sum(x,y) for x⊕y and IsIn(x,y) for x y. So for example, Leibniz's Definition 3 appears in the translation of his original paper as: That A ‘is in’ L or, that L ‘contains’ A, is the same as that L is assumed to be coincident with several terms taken together, among which is A. In my translation into modern notation, this definition becomes: x y ≡ ∃z(x⊕z = y) And when translated into PROVER9 notation, this becomes: all x all y (IsIn(x,y) <-> (exists z (Sum(x,z) = y))). And so on for the other definitions, axioms, and theorems. Note that all of Leibniz's propositions are derivable from the following basis: Idempotence: all x (Sum(x,x) = x). Symmetry: all x all y (Sum(x,y) = Sum(y,x)). Associativity: all x all y all z (Sum(Sum(x,y),z) = Sum(x,Sum(y,z))). Dfn Inclusion: all x all y (IsIn(x,y) <-> (exists z (Sum(x,z) = y))). In terms of these definitions, we can now use PROVER9 to prove the following theorems. The reader should examine how the theorems as presented in the original and in modern notation are represented in PROVER9's syntax. • Proposition 01: Original Formulation: If A=B, then B=A. Modern Notation: x=y → y=x Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 02: Original Formulation: If A ≠ B then, B ≠ A Modern Notation: x ≠ y → y ≠ x Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 03: Original Formulation: If A = B and B = C, then A = C. Modern Notation: [x = y & y = z] → x = z Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 04: Original Formulation: If A = B and B ≠ C, then A ≠ C. Modern Notation: x = y & y ≠ z → x ≠ z Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 05: Original Formulation: If A is in B, and A = C, then C is in B. Modern Notation: [x y & x = z] → z y Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 06: Original Formulation: If C is in B and A = B, then C is in A Modern Notation: [x y & z = y] → x z Note: This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are • Proposition 07: Original Formulation: A is in A Modern Notation: x x • Proposition 08: Original Formulation: A is in B, if A = B Modern Notation: x = y → x y • Proposition 09: Original Formulation: If A = B, then A⊕C = B⊕C Modern Notation: x = y → [x⊕z = y⊕z] Notes: (1) This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are provided. (2) Leibniz explicitly mentions that this theorem cannot be ‘converted’, meaning that the converse of this theorem is not a theorem. MACE indeed finds a countermodel to the • Proposition 10: Original Formulation: If A = L and B = M, then A⊕B = L⊕M Modern Notation: [x = z & y = w] → x⊕y = z⊕w Notes: (1) This theorem can be proved from the laws of identity alone, which are built into the logic of PROVER9. Thus it can be proved from the empty set of premises, and so no input model files are provided. (2) Leibniz explicitly mentions that this theorem cannot be ‘converted’, meaning that the converse of this theorem is not a theorem. MACE indeed finds a countermodel to the • Proposition 11: Original Formulation: If A = L and B = M and C = N, then A⊕B⊕C = L⊕M⊕N Modern Notation: [x = u & y = v & z = w] → x⊕y⊕z = u⊕v⊕w We skip this theorem as it is a generalization of Theorems 9 and 10. • Proposition 12: Original Formulation: If B is in L, then A⊕B will be in A⊕L Modern Notation: y z → [x⊕y x⊕z] • Proposition 13: Original Formulation: If L⊕B = L, then B will be in L Modern Notation: x⊕y = x → y x • Proposition 14: Original Formulation: If B is in L, then L⊕B = L Modern Notation: y x → x⊕y = x • Proposition 15: Original Formulation: If A is in B and B is in C, then A is in C Modern Notation: [x y & y z] → x z • Proposition 15 Corollary: Original Formulation: If A⊕N is in B, then N is in B Modern Notation: [x⊕z y] → z y • Proposition 16: Original Formulation: If A is in B and B is in C and C is in D, then A is in D Modern Notation: [x y & y z & z w] → x w We omit this theorem as it is a trivial generalization of Proposition 15. • Proposition 17: Original Formulation: If A is in B and B is in A, then A = B Modern Notation: [x y & y x] → x = y • Proposition 18: Original Formulation: If A is in L and B is in L, then A⊕B will be in L Modern Notation: [x z & y z] → x⊕y z • Proposition 19: Original Formulation: If A is in L and B is in L and C is in L, then A⊕B⊕C is in L Modern Notation: [x z & y z & w z] → x⊕y⊕w z We omit this theorem as it is a trivial generalization of Proposition 18. • Proposition 20: Original Formulation: If A is in M and B is in N, then A⊕B will be in M⊕N Modern Notation: [x z & y w] → x⊕y z⊕w • Proposition 21: Original Formulation: If A is in M and B is in N and C is in P, then A⊕B⊕C will be in M⊕N⊕P Modern Notation: [x u & y v & z w] → x⊕y⊕z u⊕v⊕w We omit this theorem as it is a trivial generalization of Proposition 20. • Proposition 22: Original Given two disparate terms, A and B, to find a third term C which is different them and which together with them makes up the subalternants A⊕C and B⊕C: that is, although neither Formulation: of A and B is in the other, yet one of A⊕C and B⊕C is in the other. Modern Notation: [x y & y x] → ∃z(z ≠ x & z ≠ y & (x⊕z y⊕z y⊕z x⊕z)) • Proposition 23: Original Formulation: Given two disparate terms, A and B, to find a third term C different from them such that A⊕B = A⊕C Modern Notation: (x y & y x) → ∃z(z ≠ x & z ≠ y & x⊕y = x⊕z) • Proposition 24: (Exercise) Original To find several terms which are different, each to each, as many as shall be desired, such that from them there cannot be composed a term which is new, i.e., different from Formulation: any of them.
{"url":"http://mally.stanford.edu/cm/leibniz/","timestamp":"2014-04-16T07:22:04Z","content_type":null,"content_length":"25433","record_id":"<urn:uuid:a1ad0e3f-6c16-47b9-816b-5661d192d3b3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Excel Formulas Some users of the QI Macros miss the opportunity to use Excel's formulas to prepare the data for charting or analysis. Here are some useful ways to use Excel to your advantage: Most companies need to analyze defects in relationship to overall volume (e.g., repeat calls to a call center, repeat visits to an emergency room for the same problem, returned product, etc.) Although p and u control charts will show this kind of data, it's also acceptable to turn the numbers into a ratio and use an XmR (Individuals and Moving Range) chart. Most hospitals keep track of patient falls and they report this as falls per thousand patient days. If the data looks like the following, how do you express it as a ratio? Simply click on the cell to right of the data and enter the formula starting with an equal sign:: B2 is number of falls C2 is number of patient days C2/1000 yields the number of thousand patient days. Putting the formula in parenthesis tells Excel to do this first, then divide it into B2, otherwise the answer would be incorrect. This formula could also have been written as: Then, simply move the mouse over the lower right corner of the cell containing the formula (look for the cursor to change to a plus shape) and double click. Excel will copy the formula down to the last non-blank cell. Then just select the labels in column A, hold the ctrl key, and select the ratios in column D to draw an XmR chart using the control chart wizard. Text Formulas For Excel Let's use Excel's functions to split a cell containing a first and last name into two cells. Simply, click on the cell next to the full name and insert a text function. It took me a while to find this function (MID): I often find it's easiest to start with a simple formula and then enhance it to do what I want. In this case, let's select just the first name of the first person by putting A2, 1, and 5 as the parameters to the function. The MID function will select "Wayne" from the name in cell A2. Of course, this only works for the first person. It won't work for the next one. So, we'll want to set the length to be up to the first blank. There are two functions that search for strings inside other strings (FIND, SEARCH). Let's use SEARCH. Rather than try to build this into the existing formula directly, I often use a different cell so that I can see the result. In this case I'm telling Excel to find a blank (" ") in A2 starting at position 1, giving the answer, 6: Six is one greater than the five I need, so I'll have to subtract one from the result. Now I just copy the SEARCH formula from cell D2 and insert it into the MID formula where the "5" used to be: If I copy and paste the formula for the other names, we get the first name for each person. Excel will show the formula in fx: The formula for the last name is similar, except that we use the SEARCH+1 as the starting point and use the LEN function for maximum length of the name: Now, we've effectively split the first and last name out of the full name. Time or Date data is another format that people can tweak to their needs. If you want to know the difference between two times (e.g., a start and end time), simply input a formula to subtract one from the other (B2-A2 = elapsed time): Then, if you want to convert the time or date format into minutes, you will need a formula that converts days, minutes and seconds into a single value. There are 1440 minutes in a day and 60 per hour. DAY, HOUR and MINUTE convert dates or times into a count you can multiply and add to get the result you want: What if we want to convert times or dates into hours? The formula is very similar: This gives us the elapsed time in various formats that can be easily graphed using the QI Macros: Logic Formulas for Excel Sometimes we need some IF-THEN-ELSE logic. Recently, a client was trying to figure out how to evaluate a report to show Pass/Fail so that it could be counted with a PivotTable. To do this, we had to add some formulas. In this case, a certain value "X" had to be between two values. The first formula evaluates if Prod1's X value is less than 0.05 and puts the word "Pass" or "Fail" in the Cell: Of course, this formula won't work for all of the products in the report which have other specification limits: So, we had to expand the formula to check the product name and choose the right limits: The Pass/Fail results can now be counted easily with a PivotTable. If you don't know what function to use, Excel can show you the way. Simply select an empty cell and click on Insert-Function (Excel 2000-2003 left; Excel 2007-2010 right): Excel will show you the functions available: Here's My Point Sometimes data has to be manipulated to provide the right starting point for analysis. Sometimes you need a simple mathematical formula, sometimes text, sometimes IF-THEN-ELSE logic. Regardless, sometimes you will need a basic grasp of Excel formulas and functions to make your life easier. Have fun exploring Excel's functions. If you're struggling with how to get the data in a format you want, send me an email with your data and your problem. I can't do all of your work for you, but I can be a short cut to get you started Create these charts and diagrams yourself in just seconds using the QI Macros for Excel...
{"url":"http://www.qimacros.com/free-excel-tips/excel-formulas/","timestamp":"2014-04-18T02:58:35Z","content_type":null,"content_length":"27421","record_id":"<urn:uuid:dec4bfb0-ef8c-44c3-a079-fd66dd95f429>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
La Mirada Science Tutor Find a La Mirada Science Tutor ...I have done three years of physics research, all of which was in MATLAB. I've simulated synchronization properties of nano-cantilevers, created quasicrystals, and simulated ion trap dynamics. I spend approximately 5-7 hours a day coding in MATLAB. 26 Subjects: including physics, ACT Science, physical science, geometry ...I work as a Teacher's Assistant and Tutor for mathematics and have done so for about 3 years at the university. I am a member of the mathematics and physics fellowship at my university and am an active member with the youth in my community. I have tutoring experience in all of my listed subject areas. 14 Subjects: including psychology, physics, calculus, algebra 2 Hello, My name is Justin. I am a teaching credential candidate at California State University Long Beach (CSULB) for Physics and am a year away from earning a Physics BA. I am also a physics Supplemental Instructor at a university, as well as a tutor in Math and Physics at the university's Learning Assistance Center. 10 Subjects: including physics, chemistry, calculus, geometry ...NASA and JPL. I have a broad background in basic and advanced sciences. I have both undergraduate and graduate degrees in sciences I have taught physical sciences to both 8th and 9th graders. 11 Subjects: including biology, geometry, prealgebra, anatomy ...As part of my volunteer work with various clubs, I tutored elementary school and middle school aged children in Mathematics, English, Spelling, Science, and History. I am currently finishing up my freshman year of college. I am pursuing a major in chemistry, as well as a minor in mathematics and possibly a minor in physics as well. 24 Subjects: including chemistry, ACT Science, geometry, biology
{"url":"http://www.purplemath.com/la_mirada_science_tutors.php","timestamp":"2014-04-19T10:09:56Z","content_type":null,"content_length":"23843","record_id":"<urn:uuid:f221f98e-29f6-4fd4-a41d-be467e63c02e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
greatest common divisor of two polynomials f,g ? Is it unique ? December 8th 2010, 06:47 AM greatest common divisor of two polynomials f,g ? Is it unique ? Define the greatest common divisor of two polynomials $f,g.$ Is it unique ? Find the greatest common divisor of the polynomials f : $x^4 + x - 2$and $g = x^2 - 1$ The polynomial $f = x^4 - x^3 -7x^2 + x + 6$has four real zeros, given that 1 and 3 are zeros, find the other two ? This is a past paper without solutions, it would be great if you could help me with these to revise for my exam, thanks alot ! December 8th 2010, 07:44 AM a) The gcd is the highest order polynomial that divides ( with no remainder) both f and g. It is unique. b) $x^4+x-2 = (x^2-1)(x^2+1) + (x-1)$ and $(x^2-1)=(x-1)(x+1)$ so x-1 is gcd c) Since 1 and 3 zeroes you know that (x-1)(x-3) divides f. So divide f by $(x-1)(x-3)=x^2-4x+3$ : $x^4-x^3-7x^2+x+6 = (x^2-4x+3)(x^2+3x+2)$ so the other two zeroes are the zeroes of $x^2+3x+2= (x+1)(x+2)$, namely -1 and -2. ( I think this is correct - I forgot how that long division of polynomial is a pain :) )
{"url":"http://mathhelpforum.com/algebra/165682-greatest-common-divisor-two-polynomials-f-g-unique-print.html","timestamp":"2014-04-18T14:25:15Z","content_type":null,"content_length":"6033","record_id":"<urn:uuid:684dc12a-7d09-491d-a83c-4efbe21e8bcf>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
If no singularity, what’s inside a big black hole? That's true for a stationary observer but not for the infalling one. For him it takes finite proper time to cross the event horizon. Finite proper time if the horizon is eternal --- but the point is that it isn't. Consider the following statements, and tell me where the logic goes off the rails: 1. An asymptotic observer never sees an infalling observer cross the event/dynamical horizon. 2. The horizon evaporates in a finite time. 3. The asymptotic observer will see the infalling observer still there after the horizon evaporates. 4. Therefore from the asymptotic observer's point of view, she doesn't cross the horizon either, and will live to see it completely evaporate. This calculation can indeed be pushed all the way until the semi-classical approximation breaks down, and I think it's correct. I think this paper by Krauss ( or Phys.Rev.D76:024005,2007) says the same thing, though I'm not sure I entirely agree with the details (event horizon vs. dynamical horizon, and therefore the interpretation). (Btw, I am in no way invested in the original genesis of this problem --- I just think this scenario is worth thinking about as a thought experiment and might be informative on matters in general, not necessarily including the issue of what replaces a singularity...)
{"url":"http://www.physicsforums.com/showthread.php?t=226671&page=4","timestamp":"2014-04-19T04:30:35Z","content_type":null,"content_length":"84246","record_id":"<urn:uuid:58ac6b4f-3a9e-48f9-8c6d-cdaccd66253e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical Projections for Packers Rookies Before the NFL Draft we had Paul Bessire on our morning talk show Railbird Central for an interview. Now that the Draft is over, the time is appropriate to re-visit his unique statistical projections for players selected by the Packers. The projections from Bessire at PredictionMachine.com are distinctive because they're objectively based on mathematical formula. According to PredictionMachine.com: "To come up with statistical inputs for rookies, we run a very complex set of algorithms that factor college stats, previous utilization and strength of competition, combine measurables, role and expected utilization of the player's NFL team (in this case an average NFL team) and previous performance of similar rookies at that position in general." What Bessire provides is some outside-the-box thinking but not without basis. Packers fans will be happy to hear, for example, that Vanderbilt's Casey Hayward is Bessire's No.1 ranked cornerback in this year's Draft class. Yup, ahead of the likes of Morris Claiborne and Dre The following are the projected 2012 statistics for each new Packers player assuming they start all 16 games for an average NFL team, which is obviously unrealistic to expect to actually happen for most rookies but is used as a method of comparison. • USC's Nick Perry (the No. 2 rated defensive end by PredictionMachine.com)––40.7 tackles, 9.7 TFL, 7.7 sacks • Michigan State's Jerel Worthy (the No. 4 rated defensive tackle)––24.4 tackles, 7.8 TFL, 3.6 sacks, 0.3 FF, 1.0 blocked kicks • Vanderbilt's Casey Hayward (the No. 1 rated cornerback)––64.9 tackles, 3.6 TFL, 0.1 sacks, 6.6 INTs, 13.6 PBUs • Iowa's Mike Daniels (the No. 9 rated defensive tackle)––28.9 tackles, 6.2 TFL, 4.6 sacks, 0.3 FF • N.C. State's Terrell Manning (the No. 6 rated outside linebacker)––65.5 tackles, 9.4 TFL, 4.6 sacks • Tennessee-Chattanooga's B.J. Coleman (the No. 16 rated quarterback)––246.9 of 491.6 for 2,629 yards (50.2%), 15.6 TDs, 20.6 INTs, 5.3 YPA • Vanderbilt's Sean Richardson (the No. 14 rated safety)––72.1 tackles, 3.6 TFL, 1.2 sacks, 1.2 sacks, 1.3 INTs, 3.4 PBUs Jerron McMillian was not included in the rankings. • 0 points (1 like | 1 dislike) Comments (18) So I guess we don't start Coleman. Of course not. The Nick Hill era has begun. This is kind of interesting. It would be cool to see an analysis of how Hayward ended up #1 in CB ranking. Pack has 3 in the top 30 overall so I hope this is right! What about our safety from Maine? I'd be interested in hearing where the rankings sat comparable players, such as Reyes and Still relative to Worthy, and Irvin and Curry relative to Perry. Football Outsiders' "SackSEER" also likes Nick Perry, projecting 28 sacks in his first 5 years. Production. Sweet. GBP 4 LIFE While this is "neat", let's see this guy put his mathematical equations to the test: Run his simulation on the first 64 players drafted.... in the 2010 draft. Then, compare the results to actual data. You could even take the "real world" results, and use a percentage of that to reflect how often the players were truly on the field. It would be interesting to see how they stack up to the real production. You can't compare his work because he projects those players starting 16 games in what's supposed to be an average team. Just too many variables to predict anything. Even using algorithms or what have you, before you see how that guy plays in the NFL and how his team will use him? it's a guess. Kinda exactly my point. It seems like a waste of time to project things when you already know you can't produce anywhere near accurate numbers. It's playing with numbers and math for amusement, there is no usefulness to be found in it. So you're saying... the internet is for amusement and there is no usefulness to be found in it... you don't say! what you are refering to is evidence-based practice, or EBP. Perhaps I will use this topic for my next research paper. Reminds me of the old Rocky and Bullwinkle Show,"Watch me pull a rabit out of my hat. Presto, chango....... No doubt about it - I've gotta get a new hat!" It looks like TT and crew drafted good potential quality. Now coaching, education, motivation, integration, scheme, and execution all need to intersect to realize that potential. Wrong safety. UDFA we snagged. GOOD DRAFT NEED A FULL BACK NOW TO GET RUSHES BACK UP AGAIN I have never been happier with a draft class. Great job by all scouts, TT, and McCarthy. Go Pack Go Log in to comment, upload your game day photos and more! Not a member yet? Join free. If you have already commented on Cheesehead TV in the past, we've created an account for you. Just verify your email, set a password and you're golden.
{"url":"http://cheeseheadtv.com/blog/statistical-projections-for-packers-rookies","timestamp":"2014-04-17T07:20:22Z","content_type":null,"content_length":"128871","record_id":"<urn:uuid:9103a101-358d-43b1-aae4-0e938d2259b5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Stepping Science: Estimating Someone's Height from Their Walk Key concepts Do you ever find that you need to walk faster to keep up with some people whereas you have to decrease your pace to walk with others? This is likely because of the difference in leg length between you and the person you are walking with. In this science activity you'll get to investigate just how much faster or slower different people walk, and see if you can use the relationship between a person's walking pace and their height to estimate your own height. A pedometer is an instrument that is often used by joggers and walkers to tell them how far a distance they have gone. On some pedometers, when a person sets the instrument to record an outing they must enter their height into the pedometer to get an accurate reading. Why is height an important variable for measuring how far a person has walked? One part of the answer has to do with ratios. Our bodies have many interesting ratios. For example, when your arms are outstretched, the distance from the tip of one hand to the other is usually about equal to your height. There are other ratios as well, each describing how one part of the body relates to another in size. Because the length of a person's legs is related to a their height by a ratio, the latter will affect how long of a step they take. The longer each step, the more distance a person travels when taking the same number of steps while walking, jogging or running. This ratio, combined with the motions involved in walking and running, is used by pedometers to calculate traversed distances. • 20 feet of straight sidewalk or hallway • Sidewalk chalk or two small objects to mark off 20 feet of distance • Tape measure • At least three volunteers to walk a short distance (Ideally, they should be different heights.) • Pen or pencil • Scrap piece of paper • Calculator • Find a place that has 20 feet of straight sidewalk or find a straight hallway that is at least 20 feet long. • Using a tape measure, measure out a distance of 20 feet and mark the beginning and end points with a piece of sidewalk chalk (if you are using a sidewalk) or mark them each with small objects that will not be moved (if you are using an indoor hallway). • Measure a volunteer's height. How tall are they? Write their height down on a scrap piece of paper. • Ask the volunteer to walk from the beginning to the end of the 20-foot course you marked at a normal pace and stride. As they do, count the number of steps they take. How many steps did you count? Write down the answer. • Repeat this process for at least two more volunteers. How tall is each volunteer? Did they take a similar number of steps or was there variation? Be sure to write the results down. • For each volunteer, figure out their step length (in feet) by dividing 20 feet by the number of steps each took. What was the step length for each volunteer? • For each volunteer, figure out their ratio of step length to height by dividing their step length by their height (both in feet). What numbers do you get for this ratio? Are they similar for the different volunteers or is there variation? Average the step length to height ratio for all of your volunteers. Be sure to write your answers down. • Lastly, use your results to estimate your own height. Walk from one end to the other of your 20-foot course while counting the number of steps you take. Divide 20 feet by the number of steps you took. What was your step length? Then divide your step length by the volunteers' average ratio of step length to height. Based on your data, what is your estimated height? • Have someone measure your actual height. How does your actual height compare with your estimated height? How accurate was your estimate? • Extra: Try this activity with a greater number of people. For example, you could go to a park with a jogging path or a similar location with an adult where you can ask for volunteers as they pass by. Try to collect data from at least 10 volunteers. Is there much variation in the ratio of step length to height when comparing many people? Does collecting more data make your height estimation more accurate? • Extra: You could do this activity again, but this time have volunteers walk slowly, moderately fast or very fast. How does a person's speed affect their step length? • Extra: The human body has many other interesting ratios, such as those mentioned in the Background of this activity. You could look into other ratios in the human body and come up with an activity like this one to investigate them. See the More to Explore section for relevant resources. What other ratios are consistently found in the human body from person to person? Observations and results When you divided your volunteers' step lengths by their heights, did you get a ratio value close to 0.4? Were you able to roughly estimate your height based on this, accurate to within a couple The measurements of a pedometer are based on the hypothesis that all people have common ratios and proportions, even if they are different heights. In this activity you should have found this hypothesis to be pretty accurate. On average, adults have a step length of about 2.2 to 2.5 feet. In general, if you divide a person's step length by their height, the ratio value you get is about 0.4 (with a range from about 0.41 to 0.45). This is why you can take a person's step length and divide it by about 0.43 to roughly estimate their height—the estimated height will likely be within two inches of (and probably much closer to) their actual height. More to explore Pedometers for Kids Track Physical Activity, from Peaceful Playgrounds Simple Ratios of the Human Body, from Kim Moldofsky at Bedtime Math How to Determine Stride for a Pedometer by Height and Weight, from Amy Sutton at the Houston Chronicle Keeping Up, from Science Buddies This activity brought to you in partnership with Science Buddies
{"url":"http://www.scientificamerican.com/article/bring-science-home-estimating-height-walk/","timestamp":"2014-04-16T22:56:37Z","content_type":null,"content_length":"60990","record_id":"<urn:uuid:a54b4b4c-db09-460b-a67b-ee373de3d5da>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Penns Grove Trigonometry Tutor Find a Penns Grove Trigonometry Tutor ...I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001. I completed math classes at the university level through advanced 12 Subjects: including trigonometry, calculus, writing, geometry ...For the SAT, each student receives a 95-page spiral-bound book of strategies, notes, and practice problems that I created from scratch after a rigorous analysis of the test. As a Pennsylvania certified teacher in Mathematics, I was recognized by ETS for scoring in the top 15% of all Praxis II Ma... 19 Subjects: including trigonometry, calculus, statistics, geometry ...I took part in this program where students from my university taught a group of public school children how to make a model rocket and how it worked. I remember I got the chance to instruct a small group of children on the names of all the parts of the rocket and a basic explanation of how they f... 16 Subjects: including trigonometry, Spanish, calculus, physics ...At my college, I tutored beginning French students. Besides that, I have a great love for France, it's language, culture and shared history between our two countries. I would be happy to share that love with any student if they wished. 33 Subjects: including trigonometry, English, physics, French ...I taught full time for five years at the College of William and Mary, leaving to join my wife in Delaware. Since moving, I have been writing extensively while working part time as an SAT tutor. I have learned a few tricks, and I know my stuff, but you will find I am a very down-to-earth person ... 22 Subjects: including trigonometry, reading, algebra 1, English
{"url":"http://www.purplemath.com/Penns_Grove_Trigonometry_tutors.php","timestamp":"2014-04-20T02:33:28Z","content_type":null,"content_length":"24277","record_id":"<urn:uuid:e9e08fdc-433e-4fb6-b470-635eaed0aa25>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Battling B.C.'s math education crisis Educators have devised new approaches to engage kids years after a curriculum-reform initiative was kiboshed under Christy Clark UBC math professor George Bluman is used to speaking to Chinese audiences. Last spring, he gave a series of lectures in China about teaching calculus. In a recent phone interview with the Georgia Straight, he also revealed that he has worked with three postdoctoral students from villages in China, including one from Inner Mongolia. So it shouldn’t come as a surprise that in late October he was invited to speak to a group of Chinese-speaking parents at the Burnaby Public Library about the state of math education in B.C. Their nonprofit academic organization, the Educational Quest Society of Canada, was created in June “to provide Chinese communities with professional suggestions concerning education…and to exert an influence on improving and reforming the elementary and secondary education in British Columbia”. “I was the only non-Chinese person present,” Bluman said with a chuckle. “They are very concerned about the decline in education.” At the meeting, Bluman expressed his opposition to the elimination of mandatory Grade 12 math exams in B.C. In 2004, the provincial government made this test optional; in 2011, it cancelled all optional Grade 12 exams, which means there’s no standardized Grade 12 math test in B.C. anymore. "At their annual articulation meeting in 2007, the math teacher representatives from each college and university—public and private, including reps from Adult Basic Education and BCIT—without dissent, wanted them to be continued for mathematics," Bluman said. "They forwarded a strongly worded motion on this to the B.C. minister of education." He cited research conducted at UBC demonstrating that students who had written the optional tests performed much better in first-year calculus courses. And according to a survey he conducted, public-school math teachers want the Grade 12 exams reinstated; UBC student senators echoed this view in a separate survey. Moreover, Bluman noted that B.C., unlike most other jurisdictions, allows secondary-school educators to teach math regardless of their qualifications in this area. “You don’t have to be knowledgeable in the subject,” he said. “It’s a real problem.” Meanwhile, the Educational Quest Society of Canada (EQSC) has published a report chronicling how B.C. students’ performance in math has deteriorated in the 21st century. The Pan-Canadian Assessment Program testing of Grade 8 students in 2010 showed that B.C. registered a score of 481, which was well below the Canadian average of 500 and significantly behind the top three provinces: Quebec (515), Ontario (507), and Alberta (495). “Moreover,” the report notes, “BC students performed below the Canadian average on all four of the mathematics sub-domains: numbers and operations, geometry and measurement, patterns and relationships, and data management and probability.” B.C. also fell behind the Canadian average in the 2009 math-test results of 15-year-olds, according to Program of International Student Assessment results. B.C.’s score of 523 was four points below the national average and 11 points below the score achieved in 2000. This decline followed the B.C. government’s decision to halt reevaluation of the math curriculum shortly after the B.C. Liberals took power. One member of EQSC, Pi Yuan, told the Straight by phone from Burnaby that he teaches math and science at a private educational centre. He rattled off nine major concerns about math education in B.C., including the elimination of the requirement to include calculus in Grade 12 mathematics. He claimed that a reduction in standards has diminished the value of a B.C. diploma. “If you compare the mathematics textbooks the students are using now and the textbooks that were used 10 or 20 years ago, you can see that the content is getting less and less [difficult],” Yuan He also claimed that the elimination of the Grade 12 math exam can undermine a student’s chance of getting accepted to university. “A lot of students have concerns about the fairness of the marking,” Yuan maintained. “If the students have a very nice, fair teacher and a good marker, maybe the mark is high. But if the student is taught by a strict teacher or a callous teacher, maybe the mark is Sitting in her office at UBC’s Point Grey campus, math-department outreach coordinator Melania Alvarez bluntly told the Straight that there’s a “crisis” in math education in B.C. Alvarez, winner of this year’s Canadian Mathematical Society award for promoting math learning, travels across the province to support schools and teachers in their math education. “I think we really need to change some things, because otherwise, I don’t see us moving forward,” she said. Foremost is the culture around mathematics. She noted that people don’t routinely announce that they don’t know how to write or that they hate reading books. But parents will often tell their kids how much they hate math, even though most young children love the subject. “Being math phobic is culturally acceptable,” Alvarez said. “I’m sorry to say, the media promotes this.” Alvarez is education coordinator at the Pacific Institute for the Mathematical Sciences, a consortium created by eight universities. It puts on two summer camps: one for kids making the transition from elementary school to secondary school, and another for high-school students. And for the past 15 years, the institute has hosted a free educational event called “Math Mania” several times a year in school gymnasiums that includes games, puzzles, kaleidoscopes, and various interactive events. On October 27, the UBC faculty of education invited families to attend a math fair, which featured numerous activities for children. In one Clue-like game, participants had to figure out which tourist stole a priceless ruby from the tomb of King Ramses. Students also learned how math is integral to Coast Salish weaving. Two UBC education professors who attended the fair, Cynthia Nicol and Jo-ann Archibald, explained to the Straight how they worked with aboriginal residents of Haida Gwaii on a program to connect math to the community and local culture. Nicol mentioned that they worked with carvers and elders to learn how mathematics influenced the Haida Nation, then incorporated what they learned into lessons. “Some of it was taking the kids outside to the beach, to the land, helping them imagine other possibilities to studying math in a textbook,” Nicol said. Archibald described how this emphasis on linking to the land could better engage aboriginal students. As an example, she said it’s possible to base a lesson on the number of logs that have been cut and removed, and then equate that to the impact on the local environment. “You’re connecting math with social issues,” Archibald stated. Alvarez often emphasizes that just as it takes time to excel in sports or in music, it also takes considerable effort to do well in math. “Most kids believe that if they cannot solve a problem in five minutes—or in two minutes or in 30 seconds—then they are no good in math,” she commented. “We need to change that.” She pointed out that students feel empowered when they excel in math. And she said it’s important for teachers to set expectations high and not pigeonhole students as slow learners because they will not perform as well as they can. Alvarez also acknowledged that many teachers don’t feel comfortable with their level of math knowledge—and she pointed out that they must be supported with professional-development opportunities. “Many of them have told me that they tried to avoid math when they were student teachers but that they really regret that,” she stated. “Unfortunately, the institutions allowed for that.” One thing is clear: knowledge of math is increasingly important in the 21st-century economy. UBC math professor Arvind Gupta is the CEO and scientific director of Mitacs Inc., a national nonprofit organization funded by federal and provincial governments and the private sector. It encourages graduate students to work with companies to understand their problems and propose solutions, which then become the students’ thesis projects. Gupta told the Straight by phone that the program began with math students but has since expanded to include people in everything from anthropology to zoology. Nowadays, occupations ranging from architecture to medicine to journalism to engineering all require significant math skills. Gupta pointed out that the genomic revolution is really about the application of mathematics to life sciences and that math is even becoming more important in the social sciences. He noted that former U.S. president Bill Clinton’s speech at the most recent Democratic national convention was loaded with arithmetic, winning rave reviews from the public and giving Barack Obama a boost in the polls. “If you go back to that movie A Beautiful Mind, who would have thought that a movie about a mathematician would win so many awards?” he stated. “I think there’s actually a hunger for this kind of thing.” To stimulate kids’ interest in the subject, Mitacs is backing a stage production called Math Out Loud, which recently played in Vancouver and Surrey. Written and directed by Vancouver actor Mackenzie Gray, the zany show features two students who time-travel. In various vignettes, they encounter characters ranging from Cleopatra to Christopher Columbus and learn how math influences everything from art to game shows to the sounds coming out of the radio. “What we want to do is figure out a way to re-engage kids, “ Gupta said. “It’s great to have your music on your iPhone, but what you’re really doing is carrying around a very sophisticated piece of Nov 1, 2012 at 5:58 pm Never in human history has the percentage of students they expect to do Math at the level they want been able to do so. Frankly, their strengths are in other subjects, and there is nothing wrong with Really, hardly anyone needs to know calculus. What are going to use it for, despite what some loudmouths in BCès (dying) tech industry, we are not all going to be programming apps for a living. It may also be a mistake to take the concerns of Asian parents a group known for unreasonable histrionics when it comes to Math, too seriously. Nov 1, 2012 at 7:47 pm It is true that most people manage very well without knowing much mathematics. Some understanding of fractions and percentages is enough for many of us. But this misses the point. It would be good if more of us understand the basics of logic and proof, just as we learn about other central parts of our civilisation - language, law, morality. In my experience, when mathematics is not grinding people down, they can see why it can be interesting. My father taught me how to draw in perspective which made me very happy and when I learned later that this is called projective geometry I became even happier. Nov 2, 2012 at 6:45 am If you cannot do mathematics, you cannot reason. There's no such thing as :"emotional intelligence," that is just memorizing a disconnected, unrelated set of authoritarian premises, like Alex in a Clockwork Orange. Your stomach should feel bad if someone ______. is it any great coincidence that with the decline of mathematics, that is, the system by which we reason and see if a connected series of statements lead to a conclusion (we call it a "proof"), we have seen an increase in touchy-feely, emotionality? People who don't know mathematics are doomed to be the slaves of those who do. Nov 2, 2012 at 9:18 am Many years ago I graduated in science, and yes I took my calculus, algebra, geometry and possibly did not fully appreciate (at that time) how these important subjects permeate through all the sciences including business, biology, health sciences, engineering etc.. I was shocked by your article how our math education has moved onto a slippery slope, downward. It is sad to see BC falling behind Ontario, Quebec. Math is important. We should all demand higher standards in math education. Many universities now demand completion of Grade 12 mathematics to even enter into a university program. Look at your entry exams for law, medicine and business and engineering. Professional degrees require a basic understand of mathematics, many jobs requires math skills, and above all it opens doors to better jobs and greater variety of jobs. It is very short sighted to lower the standards of required mathematics in our high school education system Nov 2, 2012 at 2:40 pm As an engineering student I don't need to compose or decompose much poetry these days, but I'm better off for having been taught. In real life I have yet to need to identify the themes of a short story, but I'm glad I learned how. History comes in handy from time to time, although I don't use it every day. I appreciate the importance of keeping up with current events, even though I vote in an election less than once each year. Music class was fun and I still enjoy playing an instrument, but for me it's not central to making a living. These days my bookshelves are filled more with textbooks and research papers than with literary fiction, yet I'm glad to have been introduced to both. But of all the things in the school curriculum, there are these which stand out in importance to my daily life: Reading, writing, science, and mathematics! It took me a long time to realize that I had been thinking about math the wrong way for most of my life. For most of my education, I had thought of it as a difficult class that I often did poorly in, and one which my life would be easier without. Only later did I realize the distinction between math class (a nuisance on my report card), and mathematics (a fascinating realm full of practical tools that I could freely experiment with). And once I understood that math works for me, rather than the other way around, I began to appreciate it a lot more. I am very thankful to have learned math, because I use geometry, algebra, and calculus every day, and I'd be very lost in my work without them. It's not something I could have picked up if I hadn't started from a young age, even though I didn't see the value of it then, and neither did many of my friends. Math saves me a great deal of time and effort, is essential to building machines that work, and gives me valuable insight. I have even come to believe that learning algebra, probability, and especially calculus is very important in modern society. In the same way that reading works of literature informs one's perspective, so does better understanding the relationship between quantities and rates of change. It makes it easier for to see through false advertising, identify political or corporate doublespeak, and find flaws in weak arguments. Math is as essential to education as literature, even if neither of them are used every day in many careers. The point made in the article about how "math education often focuses primarily on the 'grammar'" is very apt. Nov 2, 2012 at 7:59 pm If everyone working in the BC government put together had half a brain they'd just emulate the Kumon math program, which works very well. Unfortunately there's no chance in hell that the BCTF would let anything remotely resembling efficient de-centralized private eduction into their precious failed schools. The bureaucrats and public school teachers will continue pretending to not see the solution. Maybe it'll change someday, but in the meantime, If you care about your kids education, invest in it. You get what you pay for. Nov 2, 2012 at 10:51 pm Richard, Math does have a lot to do with reasoning Funny, you say there is no such thing as " Emotional Intelligence" yet, you then compare it to something in that idiotic movie, a Clockwork Orange, I reason that if there is no such thing, then you cannot compare it to anything. Sure, Emotional intelligence may be just some new buzz phrase, but you know some people are certainly a lot more or less intelligent than , others with their emotions, whether they are Math Genius's or not . It's just a TV show, but think Sheldon from the Big Bang, he is a Genius , but socially he is complete Idiot . Just my opininion, I do not care if you agree or disagree, Nov 4, 2012 at 7:32 am Morgus6: Every last piece of property on the planet has been determined using calculus, not to mention all the air and sea travel being reliant on it to succeed. Mathematics and philosophy went commonly hand in hand up to the time of Bertrand Russell. The understanding of math logic hones the brain. Nov 5, 2012 at 1:13 pm Do not look at the current low academic standards at all. Look beyond. Students can excel if they want to, we have superb textbooks on math. All one needs to do is pick up a textbook and do more than the minimum required by the teacher. Do the practice problems, yes, all of them, make the foldable, look up who Mr. Pyhtagoras was. Discover. Be curious. Engage. Run a math journal. Find out why all things fall at the same speed regardless of their masses (till terminal velocity) Do less bullsh** with your buddies. Have a clue and some determination. Nov 8, 2012 at 10:48 am As a former private college instructor with over 20 years of service, I can unreservedly vouch for the fact that the standard of education in B.C. has declined significantly - and not just in math. I taught practical "hands-on" and related theory courses. They required some basic high school math with a small amount of calculus, as well as some basic writing skills. Truthfully, if I had presented the same level of depth in my final years that I did in the earlier years I would have had a failure (or perhaps dropout) rate of 100%. There was constant pressure to "soften" things because apparently it doesn't look good for the college (and hence the instructor) to have high failure rates. Particularly embarrassing was the fact that many students for whom English was second language (ie Asian and Scandinavian students) outperformed local students. It's quite clear that our basic K-12 education system is in decline compared to many other developed countries. I feel for our young people today. Our corporate media based spoon-fed culture has told them they can do anything and be anything they want - and then they come out of colleges/universities with tens of thousands of dollars in student loans and McStarbuck's minimum wage jobs with which to pay them off. We need to give them a chance to compete in this world, and that has to start with a significant investment in public education. Do the math. It will pay off. Nov 11, 2012 at 7:58 pm I think that the main problem with BC Math education is that too often Math is not taught by Mathematicians. Teachers do the best they can with what was given to them, but most of them did not get proper Math education themselves and the problem perpetuates. They can’t teach lessons they were never taught or show the beauty they were never shown. Students get the idea that Math is in the same league as alchemy and witchcraft. To break the cycle, let us try to find Mathematicians, who are willing to teach Math, but do not have the teacher’s training, and offer them a few months free accelerated teacher’s training. Let’s go to universities and do the same for the fourth year Math students. I urge our government to start acting right now, before we the parents bring a class action lawsuit that our children’s right to proper Math education is violated. Feb 7, 2013 at 1:32 pm I feel in many ways the basic math being taught in high school is over-kill, especially for those who are not going on to post secondary education. Always upping the ante produces isn't the answer. I say return to basics and do it well; forget pushing Canadian youth into Asian ideals of math perfection :) Apr 14, 2013 at 10:29 pm Not everyone is good at math or needs it to flip burgers at a fast food restaurant. I think they should bring back 'Math Essentials' which was a easier math geared for those not on the post-secondary trajectory. The BC Schools are so busy upping the ante they have forgotten those who need only basic math. I give them a F --.
{"url":"http://www.straight.com/article-822531/vancouver/battling-bcs-math-crisis?qt-most_popular=1","timestamp":"2014-04-19T07:23:05Z","content_type":null,"content_length":"126981","record_id":"<urn:uuid:1a7b4f4b-a69f-48df-9513-b1c31634d741>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: If a and b are positive numbers, find the maximum value of f(x)=x^a*(1-x)^b, 0 less than or equal to x less than or equal to 1. Your answer may depend on a and b. What is the maximum value? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4ebd9f36e4b021de86fc8756","timestamp":"2014-04-18T20:55:31Z","content_type":null,"content_length":"40002","record_id":"<urn:uuid:a6a2fb85-131c-4977-bab8-90c04d7f713d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Decoherence induced spontaneous symmetry breaking Karpat, Göktuğ and Gedik, Zafer (2009) Decoherence induced spontaneous symmetry breaking. Optics Communications, 282 (22). pp. 4460-4463. ISSN 0030-4018 PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader Official URL: http://dx.doi.org/10.1016/j.optcom.2009.07.066 We study time dependence of exchange symmetry properties of Bell states when two-qubits interact with local baths having identical parameters. In case of classical noise, we consider a decoherence Hamiltonian which is invariant under swapping the first and second qubits. We find that as the system evolves in time, two of the three symmetric Bell states preserve their qubit exchange symmetry with unit probability, whereas the symmetry of the remaining state survives with a maximum probability of 0.5 at the asymptotic limit. Next, we examine the exchange symmetry properties of the same states under local, quantum mechanical noise which is modeled by two identical spin baths. Results turn out to be very similar to the classical case. We identify decoherence as the main mechanism leading to breaking of qubit exchange symmetry. Repository Staff Only: item control page
{"url":"http://research.sabanciuniv.edu/12077/","timestamp":"2014-04-19T12:02:04Z","content_type":null,"content_length":"17028","record_id":"<urn:uuid:736b27e2-7233-4db9-b78a-72cf7f3dfb68>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
ANNOUNCEMENT FOR 1997 Advanced Course on Pseudodifferential Operators and Their Applications 1 Date: 25 June-5 July 1997 Location: TICMI (Tbilisi) Roland Duduchava (University of Tbilisi, Georgia) Summary: Pseudodifferential operators (PsDOs) on manifolds (definition, properties). Boundedness of PsDOs in anisotropic Sobolev spaces with weight. Factorisation of matrix elliptic symbols and Fredholm properties of PsDOs. Asymptotics of solutions to elliptic systems of pseudodifferential equations. Application to some problems in elasticity. B.-Wolfgang Schulze (University of Potsdam, Germany) Summary: The lectures present the basic methods of PsDOs for solving elliptic and parabolic problems on configurations with singularities (conical, edge, corners, cuspidal etc.). Essential tools are the Mellin transform, meromorphic operator-valued symbols and weighted wedge Sobolev spaces with asymptotics. The calculus is aimed at constructing parametrices or inverses with the calculus and to illustrate the connection to concrete models in applied sciences. Advanced Course on Theory of Elasticity 2 Date: 16-25 September 1997 Location: TICMI (Tbilisi) Veronique Lods, Gerard Tronel (Universite P. et M.Curie, France) Summary: During the past decades, substantial progress has been made in the mathematical analysis of three-dimensional elasticity and in the understanding of the two-dimensional linear and nonlinear theories of plates and shells by means of the technique of asymptotic analysis. The lectures will thoroughly review these recent developments. Tamas Vashakmadze (University of Tbilisi, Georgia) Summary: Construction of finite models (e.g. such as von Karman, Reissner, Kirchhoff, etc) without simplifying hypotheses. Investigation of problems of error estimation, convergence and effective solvability of two-dimensional models corresponding to the reduction methods (e.g. Theories of Vekua, Babushka). Some similar generalizations for piezoelastic and electric elastic plates and shells. New numerical processes for solving of some two-dimensional problems in above sense. Coordinator: George Jaiani These courses are suitable for advanced graduate students or recent Ph.D.'s. The participants will also have an opportunity to give 20-minute talkes on their own work at a mini-symposium which will take place during the Advanced Course. Lectures and abstracts of the talks will be published and distributed among the lecturers and participants after Advanced Course. The registration fee for participants is 400 USD which includes all local expenses during the Advanced Course. A restricted number of participants will be awarded grants. Further information: TICMI, I.Vekua Institute of Applied Mathematics of Tbilisi State University,University Str. 2, Tbilisi 380043, Georgia e.mail: jaiani@viam.sci.tsu.ge Tel.:+995 32 305995
{"url":"http://www.emis.de/journals/TICMI/ann97.htm","timestamp":"2014-04-16T07:19:27Z","content_type":null,"content_length":"4944","record_id":"<urn:uuid:2fdd3faa-e53b-4431-9285-7a64d3a9f383>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
South Plainfield Trigonometry Tutor Find a South Plainfield Trigonometry Tutor ...Whether we are working on high school geometry proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged, but not overwhelmed. I assign challenging homework after every lesson, and provide periodic assessments and progress reports. To ensure that students have... 34 Subjects: including trigonometry, English, reading, writing ...I am currently a Chemistry and Physics teacher at a middle school and find absolute joy in teaching my students. I am quite patient, friendly and persistent with my students and am known for breaking down dry, convoluted concepts and processes into simple, easy-to-understand terms and language. ... 27 Subjects: including trigonometry, English, reading, writing ...I am an motivated teacher who can teach you to your understanding. I am an educator who motivates and educates in a fun, focused atmosphere. I look forward to help and educate all students who are willing to learn. 6 Subjects: including trigonometry, calculus, algebra 1, geometry ...I can tutor both high school and college level math classes. I am available afternoons, evenings, and most weekends. I have good communication skills as well as teaching skills by building up through my graduate program as a Research Assistant as well as Teaching Assistant. 8 Subjects: including trigonometry, calculus, geometry, algebra 1 ...I have been tutoring for the STEM (Science, Technology, Engineering and Mathematics) for about a year. I tutor Math, Biology, Chemistry, Anatomy, Physics, Trigonometry, Algebra, Pre-calculus, Calculus, and Elementary (K-6th). I am very patient and persistent. I tend to change complicated problems to easy ones, by changing them into different steps. 13 Subjects: including trigonometry, reading, chemistry, calculus
{"url":"http://www.purplemath.com/South_Plainfield_trigonometry_tutors.php","timestamp":"2014-04-17T19:57:03Z","content_type":null,"content_length":"24420","record_id":"<urn:uuid:ecaabb88-c1b0-410f-a314-3d528b424c11>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
User Urs Schreiber bio website ncatlab.org/nlab/show/… visits member for 4 years, 6 months seen 16 hours ago stats profile views 3,088 I am a postdoc in maths with a degree in theoretical physics. I am interested in mathematical structures in quantum field theory and string theory, see
{"url":"http://mathoverflow.net/users/381/urs-schreiber?tab=questions","timestamp":"2014-04-18T00:44:01Z","content_type":null,"content_length":"74239","record_id":"<urn:uuid:e751798e-1e32-4101-bc34-03e114bf6622>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Hopkinton, MA Math Tutor Find a Hopkinton, MA Math Tutor ...Permutations, combinations, and basic probability, 12. Matrices. Often coming between Algebra I and Algebra II, Geometry is the study of the properties and uses of geometric figures in two and three dimensions. 9 Subjects: including prealgebra, algebra 1, algebra 2, geometry ...I guide students in preparing for MCAS, SAT AP and other related exams. I completed my Master's in Organic Chemistry and won a Coca Cola gold medal for securing highest marks in Chemistry at Goa university, India. I have worked as a lecturer in Organic Chemistry in Dyanprassrak's Mandal's college, India. 13 Subjects: including algebra 2, biology, precalculus, trigonometry ...Many problems on the SAT are unlike any that students may have experienced in their Math classes at school. I help my students with "SAT Math" by a) expanding and deepening their understanding of the ideas behind Mathematics; b) showing them how to think on their feet and apply basic Mathematica... 14 Subjects: including discrete math, differential equations, C, linear algebra ...As a Math teacher for 13 years, I have developed organizational and study skills methods that have been very effective. Studying for Math incorporates many of the study skills that are transferable to other subjects but there are unique skills for learning and preparing for Math. I have also taught MCAS classes which incorporated test taking strategies. 9 Subjects: including algebra 1, prealgebra, GRE, GED ...In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run test preparation classes (MCAS and SAT). Students tell me I'm awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are real... 8 Subjects: including geometry, algebra 1, algebra 2, precalculus Related Hopkinton, MA Tutors Hopkinton, MA Accounting Tutors Hopkinton, MA ACT Tutors Hopkinton, MA Algebra Tutors Hopkinton, MA Algebra 2 Tutors Hopkinton, MA Calculus Tutors Hopkinton, MA Geometry Tutors Hopkinton, MA Math Tutors Hopkinton, MA Prealgebra Tutors Hopkinton, MA Precalculus Tutors Hopkinton, MA SAT Tutors Hopkinton, MA SAT Math Tutors Hopkinton, MA Science Tutors Hopkinton, MA Statistics Tutors Hopkinton, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/Hopkinton_MA_Math_tutors.php","timestamp":"2014-04-20T08:39:18Z","content_type":null,"content_length":"23917","record_id":"<urn:uuid:50eefbe8-ff46-461e-89d4-70a01b871653>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [Public WebGL] The Newly Expanded Color Space Issue [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: [Public WebGL] The Newly Expanded Color Space Issue • To: Steve Baker <steve@sjbaker.org> • Subject: Re: [Public WebGL] The Newly Expanded Color Space Issue • From: Thatcher Ulrich <tu@tulrich.com> • Date: Tue, 7 Sep 2010 23:05:33 +0200 • Cc: Chris Marrin <cmarrin@apple.com>, public webgl <public_webgl@khronos.org> • Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:sender:received :in-reply-to:references:date:x-google-sender-auth:message-id:subject :from:to:cc:content-type:content-transfer-encoding; bh=e8cvEOvd6LnqUx79/EgUBuRwMczoJBbtdVQ0rcCI+0Q=; b=VzE+S2ER/ L0xiA31hww4souFHovSsoFcQuvQp9yW6renVWV4E/KhdcFek4odqpm8lz LDaKK2qsStWQ5Aqo7SD7I+7COWDIfmX8ndRr4j9jCwK6HeDPYdp4pYmtC0no/XaQb57k Cs5d2xpSD/ePu9UV9grbl/0eRBRwenvUTbUOU= • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:sender:in-reply-to:references:date :x-google-sender-auth:message-id:subject:from:to:cc:content-type :content-transfer-encoding; b=c6fabSW80LK2JuRUMnpaPOTTyvV57Ctzo/BqkJEY29pJOkBuL+1YdaM/C7ciuBtBmc emC8JfQ5JAxEGSbqEYTFqHr+HmmrwkzH8HSqxwmlDVKj/lsHYKFOMZ+yNlPelnm1Mict • In-reply-to: <4C867411.5060807@sjbaker.org> • References: <98F41DBC-E940-46BA-8813-133917D81C6A@apple.com> <4C867411.5060807@sjbaker.org> • Sender: owner-public_webgl@khronos.org On Tue, Sep 7, 2010 at 7:19 PM, Steve Baker <steve@sjbaker.org> wrote: > I think we have to take this one teeny-tiny step at a time. First, let > us establish once and for all what color space GLSL shaders operate in. > Those of us who think that the color space of WebGL shaders can be > anything other than linear...please convince me that you are right by > clearly answering the following three blindingly simple and eminently > practical questions. > If you are right, then the answers should be easy, clear, intuitive and > (above all) correct. > Suppose I want a 50% blend between two images, or to make it > super-simple: a 50% blend between a black texel and a white texel. > That's a good choice of example since (0,0,0) and (1,1,1) are black > and white, respectively, in both linear and sRGB space - so we don't > have to concern ourselves with the thorny question of what the input > texture color space is. Here are three ways I could get 50/50 > blend of two colors: OK, as I indicated before, 50% perceptual blend and 50% physical blend are two different things, so the question is malformed. But I think I know what you mean, so I can rephrase the question and give an answer. I'm going to rephrase your question as, "how would you generate 50% physical light intensity via an sRGB framebuffer by blending black and white using a factor of 0.5?" Here we go: var black = vec3(0, 0, 0); var white = vec3(1, 1, 1); var linear_result = mix(black, white, 0.5); var srgb_result = pow(blend_linear, 1 / 2.2); // srgb_result ~= 0.73, ... > QUESTION 1: > In a linear space GLSL, I could write: > x.rgb = mix ( vec3(0,0,0), vec3(1,1,1), 0.5 ) ; > (Just to save you looking it up, the OrangeBook says that mix(x,y,a) > returns "(x*(1-a)+y*a), ie the linear blend of x and y" - those are > the exact words at the very top of page 124 of my early edition - > and that is what every single GPU on the entire planet actually does). > In my world, x.rgb will be (0.5,0.5,0.5) - which (with what I'm > proposing) will automatically become pow(vec3(0.5,0.5,0.5),1.0/2.2) > when it's composited into the frame buffer to produce the > perceptually (and mathematically) correct result: (0.73,0.73,0.73). > Here is the question: How do you get a perceptually correct 50% mix > of any two colors in your sRGB-space shader such that a 50/50 blend > of black and white produces (0.73,0.73,0.73) in the final composited > image? Please answer in the form of actual GLSL code. > QUESTION 2: > If I now wish to produce the same 50% mix result by alpha-blending a > (white) RGBA polygon onto a (black) background. > In linear space I'll be using *gl.blendFunc ( gl.SRC_ALPHA, > gl.ONE_MINUS_SRC_ALPHA )* and a 50% alpha value, and again, the > result (after compositing) should be (0.73,0.73,0.73) - which is > again, perceptually and mathematically correct. > What code (blend modes, shader code, etc) would I have to use in > sRGB-space to get the same result on-screen? // Then draw your poly. > QUESTION 3: > If I make a 200 pixel x 200 pixel quadrilateral (on-screen) and in > the vertex shader, I assign the two leftmost vertices the per-vertex > color (0,0,0) and the two rightmost the color (1,1,1). The final > color of the pixel in the center of the polygon (after compositing, > etc) should be the perceptually correct value (0.73,0.73,0.73). > In linear space, that's what I get if the fragment shader simply > passes the interpolated color to the output with no additional > processing - the interpolation across the polygon produces > (0.5,0.5,0.5) in the middle and the compositing gamma correction > turns that into (0.73,0.73,0.73). > How does this work in your sRGB color space world? If the answer > isn't (0.73,0.73,0.73) then how do you propose I fix it so that > simple per-vertex lighting will work correctly? Bonus: What should > dFdx(color) return at the center pixel of the polygon? See this diagram; ramp is done in both sRGB (perceptually linear) and Linear (physically linear). (View Source if you want to see the code.) > I await your replies with great excitement. I await your interpretation of this exercise with great excitement. You are currently subscribed to public_webgl@khronos.org. To unsubscribe, send an email to majordomo@khronos.org with the following command in the body of your email:
{"url":"https://www.khronos.org/webgl/public-mailing-list/archives/1009/msg00119.html","timestamp":"2014-04-21T05:01:47Z","content_type":null,"content_length":"10693","record_id":"<urn:uuid:e9476d8e-11be-49c5-bc66-b69ff6f90538>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 16 - In Proceedings of the 43rd IEEE Conference on Decision and Control , 1986 "... 1. Introduction. How many times must a deck of cards be shuffled until it is close to random? There is an elementary technique which often yields sharp estimates in such problems. The method is best understood through a simple example. EXAMPLE1. Top in at random shuffle. Consider the following metho ..." Cited by 93 (11 self) Add to MetaCart 1. Introduction. How many times must a deck of cards be shuffled until it is close to random? There is an elementary technique which often yields sharp estimates in such problems. The method is best understood through a simple example. EXAMPLE1. Top in at random shuffle. Consider the following method of mixing a deck of cards: the top card is removed and inserted into the deck at a random position. This procedure is - Ann. Stat , 1991 "... This paper analyzes the Gibbs sampler applied to a standard variance component model, and considers the question of how many iterations are required for convergence. It is proved that for K location parameters, with J observations each, the number of iterations required for convergence (for large K ..." Cited by 38 (10 self) Add to MetaCart This paper analyzes the Gibbs sampler applied to a standard variance component model, and considers the question of how many iterations are required for convergence. It is proved that for K location parameters, with J observations each, the number of iterations required for convergence (for large K and J) is a constant times - Ann. Appl. Prob , 1993 "... this paper, we examine this rate of convergence more carefully. We restrict our attention to the case where ..." , 2004 "... For a class of stationary Markov-dependent sequences (ξn,ρn) ∈ R 2, we consider the random linear recursion Sn = ξn + ρnSn−1, n ∈ Z, and show that the distribution tail of its stationary solution has a power law decay. An application to random walks in random environments is discussed. MSC2000: pri ..." Cited by 7 (0 self) Add to MetaCart For a class of stationary Markov-dependent sequences (ξn,ρn) ∈ R 2, we consider the random linear recursion Sn = ξn + ρnSn−1, n ∈ Z, and show that the distribution tail of its stationary solution has a power law decay. An application to random walks in random environments is discussed. MSC2000: primary 60K15; secondary 60K20, 60K37. , 2003 "... Motivated by multivariate random recurrence equations we prove a new analogue of the Key Renewal Theorem for functionals of a Markov chain with compact state space in the spirit of Kesten [Ann. Probab. 2 (1974) 355–386]. Compactness of the state space and a certain continuity condition allows us to ..." Cited by 6 (4 self) Add to MetaCart Motivated by multivariate random recurrence equations we prove a new analogue of the Key Renewal Theorem for functionals of a Markov chain with compact state space in the spirit of Kesten [Ann. Probab. 2 (1974) 355–386]. Compactness of the state space and a certain continuity condition allows us to simplify Kesten’s proof considerably. "... To my colleague and friend Allan Gut on the occasion of his retirement Abstract. We give a survey of a number of simple applications of renewal theory to problems on random strings and tries: insertion depth, size, insertion mode and imbalance of tries; variations for b-tries and Patricia tries; Kho ..." Cited by 1 (1 self) Add to MetaCart To my colleague and friend Allan Gut on the occasion of his retirement Abstract. We give a survey of a number of simple applications of renewal theory to problems on random strings and tries: insertion depth, size, insertion mode and imbalance of tries; variations for b-tries and Patricia tries; Khodak and Tunstall codes. 1. "... After applying a certain space and time transformation, a (semimartingale) reflecting Brownian motion without drift in a cone, whose reflection directions are radially homogeneous, becomes a Markov additive process. This observation is a simple manifestation of the invariance of such processes unde ..." Add to MetaCart After applying a certain space and time transformation, a (semimartingale) reflecting Brownian motion without drift in a cone, whose reflection directions are radially homogeneous, becomes a Markov additive process. This observation is a simple manifestation of the invariance of such processes under a scaling. Markov additive processes are familiar in queueing theory, especially in Matrix Analytic Methods. The answers to some important questions about reflecting Brownian motion may be guessed by analogy with well-known results in Matrix Analytic Methods. , 2010 "... Use the template preface.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your preface in the Springer layout. A preface is a book’s preliminary statement, usually written by the author or editor of a work, which states its origin, scope, ..." Add to MetaCart Use the template preface.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your preface in the Springer layout. A preface is a book’s preliminary statement, usually written by the author or editor of a work, which states its origin, scope, purpose, plan, and intended audience, and which sometimes includes afterthoughts and acknowledgments of assistance. When written by a person other than the author, it is called a foreword. The preface or foreword is distinct from the introduction, which deals with the subject of the work. Customarily acknowledgments are included as last part of the preface. Place(s), month year , 711 "... We consider a time-homogeneous Markov chain Xn, n ≥ 0, valued in R. Suppose that this chain is transient, that is, Xn generates a σ-finite renewal measure. We prove the key renewal theorem under condition that this chain has asymptotically homogeneous at infinity jumps and asymptotically positive dr ..." Add to MetaCart We consider a time-homogeneous Markov chain Xn, n ≥ 0, valued in R. Suppose that this chain is transient, that is, Xn generates a σ-finite renewal measure. We prove the key renewal theorem under condition that this chain has asymptotically homogeneous at infinity jumps and asymptotically positive drift.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3267639","timestamp":"2014-04-18T14:19:07Z","content_type":null,"content_length":"31843","record_id":"<urn:uuid:7d69ea9a-d892-473a-9f2d-5c3ee0bad485>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy) [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy) Brian Granger ellisonbg.net@gmail.... Mon Apr 7 16:54:15 CDT 2008 > 3) Some don't like the bloat (in disk space or download sizes) of > adding things to numpy. In my case, as long as the addition doesn't > make installations any more difficult I don't care. For the great > majority, the current size or anything within an order of magnitude > is not an important issue. For the 56Kb modem people, perhaps we can > construct a numpy-lite, but it shouldn't be the standard > distribution. I don't mind the financial functions going into numpy. > I think it's a good idea since a lot of people may find that very > handy to be part of the core distribution, probably many more than > worry about more exotic packages, and likely many more than care > about fft, random and linear algebra. The only problem is that if we keep adding things to numpy that could be in scipy, it will _never_ be clear to users where they can expect to find things. It is already bad enough. How do I explain to a user/student/scientist that ffts and linear algebra are in numpy, but that integration and interpolation are in scipy. That doesn't make any sense to them. Oh but wait, linear algebra and ffts are also in scipy! Random numbers - take a guess - wrong, they are in numpy. As far as I am concerned, financial fucntions are completely outside the conceptual scope that numpy has established = arrays, fft, linalg, random. In fact, they are far outside it. Simply putting things into numpy because of convenience (numpy is easier to install) only encourages people to never install or use scipy. If scipy that much of a pain to install and use - we should spend our time improving More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/032540.html","timestamp":"2014-04-16T14:44:05Z","content_type":null,"content_length":"4684","record_id":"<urn:uuid:d4079ec4-d8e1-41bf-bf9a-290ad2fd9028>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
work needed to pump water out a tank January 31st 2009, 07:34 PM #1 Junior Member Jan 2009 a tank is formed by revolving the curve y=x^2 ; 0<=x<=4 ; around y-axis if filled with water with weight 10000 N/m^3 find work needed to empty the tank by pumping it to the top ? the problem is that i dont have the height to set the integral is it right if i use the radius as 4? this way ill get V=PI r^2 dy W=(?) *F integral limits ==?? any help or hints pls Here's a hint for the height: If the farthest on the x-axis (horizontally) you can go is 4, then would it not be correct that the farthest on the y-axis (vertically) you can go is y= $x^2 = 4^2 = 16$ ? But, if the tank is full when the work begins, then the distance is only a distance $x^2$ (starting at x=4) from the maximum height. So what do you think the height is? And as a result, what does your work integral look like? actually my work integral is : Force*volume*hight of slab*dy ]with integral limit to cover the whole hieght from ur hint i got the limit of integral from 0-16 (cuz im pumping out from the top) *do u think its correct if i use it as follows w= (16*PI)(10000)(16-y)dy [integral 0-16]??? thank you You're right $W = (16*10000)\pi \int_{0}^{16} [16 - y] ~dy$ Do that and you're finished. thanks alot for ur help January 31st 2009, 07:50 PM #2 January 31st 2009, 08:02 PM #3 Junior Member Jan 2009 January 31st 2009, 08:09 PM #4 January 31st 2009, 08:12 PM #5 Junior Member Jan 2009
{"url":"http://mathhelpforum.com/calculus/71051-work-needed-pump-water-out-tank.html","timestamp":"2014-04-21T00:29:03Z","content_type":null,"content_length":"41116","record_id":"<urn:uuid:e8963ccf-f48f-4f1c-9969-856e981f0bf6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
IACR News 05:22 [Pub][ePrint] On Diffie-Hellman-like Security Assumptions, by Antoine Joux and Antoine Rojat Over the past decade bilinear maps have been used to build a large variety of cryptosystems. In parallel to new functionalities, we have also seen the emergence of many security assumptions. This leads to the general question of comparing two such assumptions. Boneh, Boyen and Goh introduced the Uber assumption as an attempt to offer a general framework for security assessment. Their idea is to propose a generic security assumption that can be specialized to suit the needs of any proof of protocols involving bilinear pairing. Even though the Uber assumption has been only stated in the bilinear setting, it can be easily restated to deal with ordinary Diffie-Hellman groups and assess other type of protocols. In this article, we explore some particular cases of the Uber assumption; namely the n-CDH-assumption, the nth-CDH- assumption and the Q-CDH-assumption. We analyse the relationships between those cases and more precisely from a security point of view. Our analysis does not rely on any special property of the considered group(s) and does not use the generic group model.
{"url":"https://www.iacr.org/news/index.php?p=detail&id=2354","timestamp":"2014-04-21T14:43:48Z","content_type":null,"content_length":"21785","record_id":"<urn:uuid:4bf7228f-b65a-425e-bf70-1b924a55d48a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Mindfulness Mathematics Teaching as a Deep Learning Process Richard Brady Sidwell Friends School The Mindfulness Bell, No. 38, 39-40 (2005) During the June, 2004 ... Mathematics Curriculum Grades 7-8 Flemington-Raritan Regional School District Flemington, New Jersey Mathematics Curriculum Grades 7-8 Gregory Nolan, Superintendent Daniel Bland, Assistant ... Protocol for Classroom Observations 2004 Annenberg Institute for School Reform at Brown University Reprinted from u003Cwww. annenberg institute. org/tools/using_data/peer_observation/protocols. html ... Collaborative action research on the learning and teaching of ... Collaborative action research on the learning and teaching of algebra: a story of one mathematics teachers development Scope and Sequence Algebra and Functions Geometry and Trigonometry Statistics and Probability Discrete Mathematics Algebra I - Part 1 Mathematics Curriculum Guide Revised 2008. Available at www.rcs.k12.va.us. Roanoke County Public Schools does not discriminate with regard to race, color, national ... Gr 6 Weeks 28-36 pp 82-108!xxx 103 G r a d e 6 MATHEMATICS Essentials Week by Week WEEK Solve This! Probability Pizzazz Fraction Action One week Joe watched TV for 5 1 4 hours and Amy watched TV for 4 hours. Chapter 2 Brain Teasers Chapter 2 Brain Teasers In this chapter, we cover problems that only require common sense, logic, reasoning, and basicno more than high school levelmath ... HOW CAN YOU TELL IF A SHARK LIKES YOU? {HOW CAN YOU TELL IF A SHARK LIKES YOU? Find the greatest common factor (GCF) for each pair of numbers. Write the tetter next to the answer in the box containing the ... September 1 2 3 4 1st Day of School Assignment:-Print Syllabus and Signature Page from website - Due Thurs.-Complete Student Information Sheet - Due Thurs. Assignment:-Signed ... AAMT 2005 conference proceedings Keeping learning on track: Formative assessment and the regulation of learning AAMT 2005 conference proceedings 1 1 HOW DOES CURRICULUM AFFECT LEARNING? schools matter. This statement is a truism to most. However, it must be followed by a statement of why schools matter ... Making Mathematical Arguments u00a9EDC, 2001 Making Mathematical Arguments 3 Making Mathematical Arguments Teacher Tips BEFORE YOU BEGIN: Suggested Reading from Teachers Guide If your time is ... Making Mathematical Arguments u00a9EDC, 2001 Making Mathematical Arguments 3 Making Mathematical Arguments Teacher Tips BEFORE YOU BEGIN: Suggested Reading from Teachers Guide If your time is ... Gr 6 Weeks 28-36 pp 82-108!xxx 103 G r a d e 6 MATHEMATICS Essentials Week by Week WEEK Solve This! Probability Pizzazz Fraction Action One week Joe watched TV for 5 1 4 hours and Amy watched TV for 4 hours. Mathematics Curriculum Grades 7-8 Flemington-Raritan Regional School District Flemington, New Jersey Mathematics Curriculum Grades 7-8 Gregory Nolan, Superintendent Daniel Bland, Assistant ...
{"url":"http://www.cawnet.org/docid/answer+key+to+book+e+of+middle+school+math+with+pizzazz/","timestamp":"2014-04-21T02:56:10Z","content_type":null,"content_length":"47572","record_id":"<urn:uuid:87082927-9b30-4625-9f89-8c4799caa224>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
The effect of uniaxial strain on graphene nanoribbon carrier statistic Armchair graphene nanoribbon (AGNR) for n=3m and n=3m+1 family carrier statistic under uniaxial strain is studied by means of an analytical model based on tight binding approximation. The uniaxial strain of AGNR carrier statistic models includes the density of state, carrier concentration, and carrier velocity. From the simulation, it is found that AGNR carrier concentration has not been influenced by the uniaxial strain at low normalized Fermi energy for n=3m and n=3m+1. In addition, the carrier velocity of AGNR is mostly affected by strain at high concentration of n≈3.0×10^7 and 1.0 × 10^7 m^−1 for n=3m and n=3m+1, respectively. The result obtained gives physical insight into the understanding of uniaxial strain in AGNR. GNR; Uniaxial strain; Carrier statistic Graphene has attracted numerous research attention since it was isolated in 2004 by Novoselov et al. [1]. Due to its unique hexagonal symmetry, graphene posses many remarkable electrical and physical properties desirable in electronic devices. It is the nature of graphene that it does not have a bandgap, which has limited its usage. Therefore, efforts to open up a bandgap has been done by several methods [2-4]. The most widely implemented method is patterning the graphene into a narrow ribbon called graphene nanoribbon (GNR) [4]. Recently, strain engineering have started to emerge in graphene electronics [5]. It is found that strain applied to graphene can modify its band structure, thus, altering its electronic properties [6-8]. In fact, uniaxial strain also helps in improving the graphene device’s electrical performance [9]. Similar characteristics have been observed when strain is applied to conventional materials like silicon (Si), germanium (Ge), and silicon germanium (SiGe) [10]. Strain in graphene can be characterized by two major varieties, namely uniaxial and shear. This strain behaves differently on graphene depending on the edge shape, namely zigzag or armchair [8]. The presence of the strain effect in graphene is by the G peak that splits and shifts in the Raman spectrum [11,12]. It is worth noting that strain in graphene may unintentionally be induced during the fabrication of graphene devices. Computational modeling and simulation study pertaining to strain graphene and GNR for both the physical and electrical properties have been done using few approaches such as the tight binding model and the ab initio calculation [6,13]. An analytical modeling approach has also been implemented to investigate the strain effect on GNR around the low-energy limit region [14,15]. However, most of the previous works have only focused on the electronic band structure, particularly the bandgap. As the carrier transport in GNR has a strong relation with this electronic band structure and bandgap, it is mandatory to investigate the strain effect on the carrier transport such as carrier density and velocity. Therefore, in this paper, an analytical model representing uniaxial strain GNR carrier statistic is derived based on the energy band structure established by Mei et al. [15]. The strain effect in our model is limited to low strain, and only the first subband of the AGNR n=3m and n=3m+1 families is considered. In the following section, the analytical modeling of the uniaxial strain AGNR model is presented. Uniaxial strain AGNR model The energy dispersion relation of GNR under tight binding (TB) approximation incorporating uniaxial strain is represented by Equation 1 taken from reference [15]. The TB approximation is found to be sufficient in the investigation for small uniaxial strain strength. This is because the state near the Fermi level is still determined by the 2p[z] orbitals that form the π bands when the lattice constant changes [6]: where , , t[0]=−2.74 eV is the unstrained hopping parameter, a=0.142 nm is the lattice constant and t[1] and t[2] are the deformed lattice vector hopping parameter of the strained AGNR. ε is the uniaxial strain [15]. Using the first-order trigonometric function, Equation 1 can further be simplified to the following equation: To model the bandgap, at k[x]=0, Equation 2 is reduced to [15] Thus, the bandgap is obtained as the following equation [15]: The energy dispersion relation from Equation 2 can further be simplified to Equation 5 will be the basis in the modeling of strain GNR carrier statistic. GNR density of state (DOS) is further derived. The DOS that determines the number of carriers that can be occupied in a state of the system [16] is yielded as in Equation 7: In the modeling of the strain GNR carrier concentration, energy dispersion relation is approximated with the parabolic relation, . By substituting the normalized Fermi energy as , the strain AGNR carrier concentration model is derived and represented by To further evaluate the intrinsic carrier velocity in response to the uniaxial strain, the following definition is referenced [17]: The Fermi velocity, v[f], is modeled as in reference [18]. Thus, v[f] is obtained as the following equation: Hence, using the intrinsic velocity model defined in Equation 9, the strain AGNR intrinsic carrier velocity yields the following equation: The analytical model presented in this section is plotted and discussed in the following section. Results and discussion The energy band structure in respond to the Bloch wave vector, k[x], modeled as in Equation 1 which was established by Mei et al. [15], is plotted in Figure 1 for n=3m and n=3m+1 family, respectively. For each simulation, only low strain is tested since it is possible to obtain experimentally [12]. It can be observed from both figures that there is a distinct behavior between the two families. For n=3m, the separation between the conduction and valence bands, which is also known as bandgap, increases with the increment of uniaxial strain. On the contrary, the n=3m+1 family exhibits decrements in the separation of the two bands. It is worth noting that the n=3m+1 family also shows a phase metal-semiconductor transition where at 7% of strain strength, the separation of the conduction and valence bands almost crosses at the Dirac point. This is not observed in the n=3m family [15]. Figure 1. Energy band structure of uniaxial strain AGNR (a)n=3m and (b)n=3m+1 for the model in Equation 1. The hopping integral t[0] between the π orbitals of AGNR is altered upon strain. This causes the up and down shift, the σ^∗ band, to the Fermi level, E[F][19]. These two phenomena are responsible for the bandgap variation. It has been demonstrated that GNR bandgap effect with strain is in a zigzag pattern [14]. This observation can be understood by the shifting of the Dirac point perpendicular to the allowed k lines in the graphene band structure and makes some bands closer to the Fermi level [7,8]. Hence, the energy gap reaches its maximum when the Dirac point lies in between the two neighboring k lines. The allowed k lines of the two families of the AGNR have different crossing situations at the K point [8]. This may explain the different behaviors observed between n=3m and n=3m +1 family. To further evaluate, the GNR bandgap versus the GNR width is plotted in Figure 2. Within the uniaxial strain strength investigated, the bandgap of the n=3m family is inversely proportional to the GNR width. The narrow bandgap at the wider GNR width is due to the weaker confinement [20]. The conventional material of Si and Ge bandgaps are also plotted in Figure 2 for comparison. In order to achieve the amount of bandgap similar to that of Si (1.12 eV) or Ge (0.67 eV), the uniaxial strain is projected to approximately 3% for the n=3m family. A similar observation can be seen for n=3m+1 with 2% uniaxial strain. However, a higher strain resulted in a different kind of observation. For example at 4% uniaxial strain, the phase transition from metallic to semiconductor occurs at a GNR width of approximately 3m. The phase transition is not observed in AGNR n=3m[15]. When higher strain is applied, the phase transition occurs at a lower width. The difference in GNR width for the phase transition to occur depends on the subband spacing effect with GNR width [21]. The constitution of the phase transition suggests that the GNR bandgap can be tuned continuously between the metal and semiconductor by applying strain. Figure 2. Bandgap of AGNR in respond to the width for (a)n=3m and (b)n=3m+1. Based on the energy band structure, the analytical model representing the DOS of strained AGNR is derived as in Equation 7. It is necessary to understand the DOS of strain AGNR as it will give insight on the amount of carriers that can be occupied in a state. The analytical model for strained AGNR is shown in Figure 3 for the first subband for the two AGNR families. It appears that the patterns of DOS are essentially the same for both AGNR families. It can be observed from Figure 3a,b that the Van Hove singularities are present at the band edge. For AGNR with n=3m, the increment of strain increases the DOS remarkably. However, when ε=3%, despite the wide bandgap, the DOS substantially decreases. This is the reason for changing the band index, p, which corresponds to the bandgap [15]. In the case of n=3m+1, the DOS exhibits the opposite. In fact, when the strain strength made the band approach the transition phase, the DOS reduces significantly; at the same time, the bandgap approaches zero. Figure 3. DOS varying the uniaxial strain strength in AGNR (a)n=3m and (b)n=3m+1. To assess the effect of strain on AGNR carrier concentration, the computed model as in Equation 8 as a function of η is shown in Figure 4. Apparently, the amount of carriers increases when the AGNR n =3m is added with uniaxial strain. Conversely, AGNR n=3m+1 shows a reduction in carrier concentration upon strain. Most notably, for AGNR n=3m, the carrier concentration converges at low η within the investigated strain level. Meanwhile, the carrier concentration exhibits considerable effect upon the strain when the Fermi level lies at 3 k[B]T away from the conduction or valence band edge. The same observation was achieve in AGNR n=3m+1. Figure 4. Uniaxial strained AGNR carrier concentration as a function of normalized Fermi energy for (a)n=3m and (b)n=3m+1. To assess the carrier velocity effect with carrier concentration upon the strained AGNR, the analytical model in Equation 10 is plotted in Figure 5. It can be seen from Figure 5a,b that the GNR carrier velocity decreases and increases with the applied uniaxial strain for AGNR n=3m and AGNR n=3m+1 families, respectively. Inspection of these figures also showed that the uniaxial strain mostly affected the carriers at high concentration. This is evident by the curves that tend to converge until n≈3×10^7m^−1 and has an almost constant velocity at 1.8 × 10^5 ms ^−1. When the concentration is high enough, the uniaxial strain starts to give a considerable effect to the velocity. This is supported by the previous observation in Figure 4 where the effect of the strain is infinitesimal at low η. In fact, the applied strain also affects the degeneracy approach. The strained AGNR n=3m approach degenerated later compared to the unstrained AGNR. A similar behavior was also observed in the AGNR n=3m + 1 family except that strained AGNR approaches degeneracy faster compared to their unstrained counterparts. This indicates that uniaxial strain is beneficial at a high concentration regime. Nonetheless, this is not unreasonable for low-dimensional nanostructures like GNR since it is mostly in the degenerated realm particularly for narrow width. Figure 5. Uniaxial strained AGNR carrier velocity in response to carrier concentration for (a)n=3m and (b)n=3m+1. The energy in response to the Fermi velocity of strained AGNR is shown in Figure 6. It can be observed that the effect of the strain on the Fermi velocity for both AGNR families is dramatic. Both AGNR n=3m and n=3m+1 have appreciable reduction in the Fermi velocity when the uniaxial strain increases as can be seen in Figure 6a,b. This reduction is attributed to the decrements in the π orbital overlap [22] in the AGNR band structure. As a consequence, the mobility is predicted to be degraded [23] as a result of the strong effect in the interaction of the strained carbon atoms [18,23]. Figure 6. Fermi velocity effect to the energy band structure of uniaxial strain AGNR for (a)n=3m and (b)n=3m+1. In this paper, the uniaxial strain AGNR for n=3m and n=3m + 1 family carrier statistic is analytically modeled, and their behaviors are studied. It is found that uniaxial strain gives a substantial effect to AGNR carrier statistic within the two AGNR families. The AGNR carrier concentration has not been influenced by the uniaxial strain at low normalized Fermi energy. It is also shown that the uniaxial strain mostly affects carrier velocity at a high concentration of n≈3.0×10^7 m ^−1 and n≈1.0×10^7 m ^−1 for n=3m and n=3m+1, respectively. In addition, the Fermi velocity of the AGNR n=3m and n=3m+1 exhibits decrements upon the strain. Results obtained give physical insight on the understanding of the uniaxial strain effect on AGNR. The developed model in this paper representing uniaxial strain AGNR carrier statistic can be used to further derive the current-voltage characteristic. This computational work will stimulate experimental efforts to confirm the finding. Authors’ contributions ZJ carried out the analytical modelling and simulation studies. RI participated in drafting and improving the manuscript. Both authors read and approved the final manuscript. The authors would like to acknowledge the financial support from the Research University grant of the Ministry of Higher Education (MOHE), Malaysia under project number R.J130000.7823.4F146. Also, thanks to the Research Management Centre (RMC) of Universiti Teknologi Malaysia (UTM) for providing excellent research environment in which to complete this work. Sign up to receive new article alerts from Nanoscale Research Letters
{"url":"http://www.nanoscalereslett.com/content/8/1/479","timestamp":"2014-04-20T06:56:30Z","content_type":null,"content_length":"101432","record_id":"<urn:uuid:102f45e2-f922-4461-b441-2d1ad8831c5d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite Abelian Groups September 28th 2013, 10:06 PM #1 Junior Member Jun 2011 Finite Abelian Groups Let G be an abelian group, define a subgroup S of G to be pure if for all m in Z. S intersection mG = mS. prove that if G is p-primary abelian group. then S is pure iff S intersection P^n S for all n >=0 Re: Finite Abelian Groups Please reread what you wrote. Something seems to be missing because "S intersection P^n S for all n >=0" is not a boolean expression. Also, what is P? Re: Finite Abelian Groups Let G be an abelian group, not necessarily primary. Define a subgroup S of G to be pure subgroup if, for all m in Z, S intersection mG = mS. Prove that if G is a p-primary abelian group, then a subgroup S of G is pure if and only if S intersection p^nS for all n>= 0. p is prime number. Re: Finite Abelian Groups I don't know how to test if the expression "S intersection p^nS for all n>=0" is true or false. That is not a boolean expression. That expression describes a collection of sets. It does not suggest that the sets need to exist. It does not suggest that the sets need to be equal to other sets. It is just a collection of sets. The intersection operator is not a comparison operator. Neither is the ^ operator. You are not comparing S intersection p^nS to anything. So, since that set is neither true nor false, the expression "If G is a p-primary abelian group, then a subgroup S of G is pure if and only if S intersection p^nS for all n>=0." is not a sentence (it can not be evaluated to true or false). Edit: I can guess at what you were trying to say. If $G$ is a $p$-primary abelian group, then a subgroup $S\le G$ is pure if and only if $S\cap p^n G = p^n S$ for all $n\ge 0$. If that is the case, then the argument seems pretty self-evident. The first direction of the implication $\Longrightarrow$ is by definition. Next, to show the reverse implication, assume you have $S\cap p^n G = p^n S$ for all $n\ge 0$ and show that $S\cap m G = m S$ for all $m\in \mathbb{Z}$. That seems like a straightforward argument to me. Where are you running into issues? Last edited by SlipEternal; September 29th 2013 at 11:04 AM. September 29th 2013, 06:48 AM #2 MHF Contributor Nov 2010 September 29th 2013, 10:02 AM #3 Junior Member Jun 2011 September 29th 2013, 10:53 AM #4 MHF Contributor Nov 2010
{"url":"http://mathhelpforum.com/advanced-algebra/222375-finite-abelian-groups.html","timestamp":"2014-04-21T05:12:09Z","content_type":null,"content_length":"40025","record_id":"<urn:uuid:65692309-906c-403b-ab8d-9656738fca9e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
'Multi-Dimensional' Trust Metrics I think what some people here are feeling around for are 'multi-dimensional trust metrics', or even better - trust metrics built on a quantale which is a little more interesting than the positive real numbers. An immediate advantage of multi-dimensional trust metrics is that you can rate different qualities simultaneously. For example, you can rate someone's theoretical skills, or their skills in Perl, or Python. Then, if you like, you can project these skills onto a single scale in any way that you like - for example, you may be interested in people who are good at Perl and Python so you can take PERL+PYTHON or PERL*PYTHON, or some other increasing function of the individual variables as your measure of how good this person is on the scale. Why do we need to think about more than just multiple dimensions (which could just as easily be some with several separate metrics)? Here's an example: In the case of a rating system (like Advogato)- the fact that I trust someone's programming skills doesn't necessarily mean that I trust them to be a good judge of the qualities/trustworthiness of others. This is relevant because the degree to which I trust the people that they trust is actually determined by how much I trust their ability to rate people. Thus the transitivity of trust in this system is not simply the 'product' of the transitivity of two (uninteracting) metrics. For people who are really interested - you can of course do logic on quantales! Advogato's trust metric I don't know if anyone else has noticed, but the definition of Advogato's turst metric given doesn't result in a determined answer. In most situations there will be many different maximal flows, each of which will result in a different result of who is certified and who isn't. Ford-Fulkerson makes no claims about which flow will result and neither does the description of Advogato's metric calculation. I can't be bothered reading through the code to see what actually happens, so I'll just ask you learned people to do it for me ;)
{"url":"http://advogato.org/person/danf/diary.html","timestamp":"2014-04-18T10:56:57Z","content_type":null,"content_length":"16564","record_id":"<urn:uuid:c0d81bfc-f617-41a4-b9ac-e22caddaaa39>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
valid sequent February 25th 2009, 12:11 PM #1 Jan 2009 valid sequent I have used the semantic tableau procedure to check if the following sequent is valid. I get the answer that it is valid, however I am not sure if it is indeed valid and I havent made an error while writing the tableau. I havent included the tableau in this post as I don't know how to include it but the sequent is the following: $<br /> (\exists x A(x)) \rightarrow B \vdash \forall x (A(x) \rightarrow B)<br />$ Is this sequent indeed valid? Another sequent: $<br /> (\forall x A(x)) \rightarrow B \vdash \exists x (A(x) \rightarrow B)<br />$ Also for this 2nd sequent I get the result that it is valid. Am I correct for both sequents? Thanks for any help. Last edited by sanv; February 25th 2009 at 12:59 PM. Assume x does not occur free in B. By the deduction and generalization theorems, $<br /> \exists x A \rightarrow B \vdash \forall x (A \rightarrow B),<br />$ $<br /> \{(\exists x A \rightarrow B), A\} \vdash B.<br />$ Since $A \vdash \exists x A$ (using a contraposition and verify this), the above formula is valid. $<br /> (\forall x A(x)) \rightarrow B \vdash \exists x (A(x) \rightarrow B)<br />$ 1. $<br /> \forall x A \rightarrow B \vdash \exists x (A \rightarrow B),<br />$ 2. $<br /> eg \exists x eg A \rightarrow B \vdash \exists x (A \rightarrow B),<br />$ 3. $<br /> eg \exists x eg A \rightarrow B \vdash A \rightarrow B,<br />$ 4. $<br /> \{(eg \exists x eg A \rightarrow B), A\} \vdash B, (\text{by deduction theorem})<br />$ Since $A \vdash eg \exists x eg A$ (verify this), the above formula is valid. February 26th 2009, 12:16 AM #2 Senior Member Nov 2008
{"url":"http://mathhelpforum.com/discrete-math/75732-valid-sequent.html","timestamp":"2014-04-18T07:56:20Z","content_type":null,"content_length":"36243","record_id":"<urn:uuid:451a6aaf-3705-4249-94fe-c97135e70631>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Novato Precalculus Tutor Find a Novato Precalculus Tutor ...On the surface, it appears to be focused on angles, lines, polygons, circles and arcs and how they relate. But at the heart of the subject is a Greek system of logic. Understanding this and how it works is the key to mastering Geometry. 18 Subjects: including precalculus, calculus, geometry, statistics GENERAL EXPERIENCE: As an undergraduate student at Florida International University, I often found myself tutoring my study groups in several subject areas, from Biochemistry to advanced Calculus, which greatly helped my performance in each class. By my senior year, I obtained a teaching assistant... 24 Subjects: including precalculus, chemistry, physics, calculus ...I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more. I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every single one, which will dramatically increase their test scores! 59 Subjects: including precalculus, chemistry, reading, calculus I just recently graduated from the Massachusetts Institute of Technology this June (2010) with a Bachelors of Science in Physics. While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways the world works, and the value of a good education. 6 Subjects: including precalculus, physics, calculus, algebra 1 ...I've also done extensive research in higher level mathematics, including an REU at the University of Illinois in Geometric Group Theory and an REU at Trinity University in Semi-Regular Congruence Monoids. I presented both of my research results at MathFest and the MAA Undergraduate Poster Session, respectively. But beyond all the high level stuff, to me, math is cool. 11 Subjects: including precalculus, calculus, geometry, algebra 1
{"url":"http://www.purplemath.com/novato_precalculus_tutors.php","timestamp":"2014-04-20T11:12:30Z","content_type":null,"content_length":"24123","record_id":"<urn:uuid:ed77e2e6-f002-429b-b78c-91262958389d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Numpy or Boost problem Bruce Sherwood Bruce_Sherwood@ncsu.... Thu Dec 20 11:19:20 CST 2007 I'm not sure whether this is a Numpy problem or a Boost problem, so I'm posting to both communities. In old Numeric, type(sqrt(5.5)) was float, but in numpy, type(sqrt(5.5)) is numpy.float64. This leads to a big performance hit in calculations in a beta version of VPython, using the VPython 3D "vector" class, compared with the old version that used Numeric (VPython is a 3D graphics module for Python; see vpython.org). Operator overloading of the VPython vector class works fine for vector*sqrt(5.5) but not for sqrt(5.5)*vector. The following free function catches 5.5*vector but fails to catch sqrt(5.5)*vector, whose type ends up as numpy.ndarray instead of the desired vector, with concomitant slow conversions in later vector calculations: inline vector operator*( const double& s, const vector& v) { return vector( s*v.x, s*v.y, s*v.z); } I've thrashed around on this, including trying to add this: inline vector operator*( const npy_float64& s, const vector& v) { return vector( s*v.x, s*v.y, s*v.z); } But the compiler correctly complains that this is in conflict with the version of double*vector, since in fact npy_float64 is actually double. It's interesting and presumably meaningful to the knowledgeable (not me) that vector*sqrt(5.5) yields a vector, even though the overloading speaks of double, not a specifically numpy name: inline vector operator*( const double s) const throw() { return vector( s*x, s*y, s*z); } VPython uses Boost, and the glue concerning vectors includes the following: py::class_<vector>("vector", py::init< py::optional<double, double, double> >()) .def( self * double()) .def( double() * self) As far as I can understand from the Boost Python documentation, this is the proper way to specify the left-hand and right-hand overloadings. But do I have to add something like .def( npy_float64() * self)? Help would be much appreciated. Bruce Sherwood More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-December/030338.html","timestamp":"2014-04-19T09:46:32Z","content_type":null,"content_length":"4457","record_id":"<urn:uuid:e87b260f-2e70-481f-b5a8-1edcdae4b3d9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Math In Excel does anyone know how to do this August 15th 2012, 11:02 AM Math In Excel does anyone know how to do this You are going to roll four twenty-sided dice. If the rolls total to 20 or less, roll two more twenty-sided dice and add that to the total. For instance, if the total of my four die rolls is 32, then that is the score of the game. If the total is 17, then I roll two more dice and add that to the total. If these two bonus dice total 20, then the score of that game is 37. Of ten thousand games, what is the average score? I am suppose to do this in excel I sortive understand it but i dont know help please August 15th 2012, 02:10 PM Re: Math In Excel does anyone know how to do this Here's how to simulate the roll of a 20-sided die: enter in cell To simulate 4 such dice, enter the same formula in cells A1:D1 (you can do this with copy-paste). Then enter in cell E1. This is the sum of the four dice. Maybe you can see from this how to get two more "dice" in cells F1 and G1, and their sum in H1. I'll leave it to you to figure out how to use an "=if" to put the score of the game in cell I1. You can then use copy/paste to generate 1000 such games in rows 1-1000 of your spreadsheet. I'll leave it to you to figure out how to get the average. If you see a smiley above, it's because I entered a colon D and the system helpfully translated it to an icon I did not want. I don't know how to turn that off (he said, not smiling). August 15th 2012, 02:49 PM Re: Math In Excel does anyone know how to do this When you click "Go Advanced," there is a page section called "Additional Options." It has several checkboxes including "Disable smilies in text." August 15th 2012, 04:09 PM Re: Math In Excel does anyone know how to do this Does that turn the smileys off for me only, or are other readers of the post also affected? I.e., do other readers see a colon D or a smiley? As a test, here is a colon D I clicked the "disable smileys", so I see a colon D-- how about you? August 16th 2012, 02:03 AM Re: Math In Excel does anyone know how to do this I also see colon D. August 16th 2012, 03:05 AM Re: Math In Excel does anyone know how to do this Thanks, that indicates the "disable smileys" option affects all readers, not just the writer of the post.
{"url":"http://mathhelpforum.com/statistics/202193-math-excel-does-anyone-know-how-do-print.html","timestamp":"2014-04-19T12:16:46Z","content_type":null,"content_length":"7960","record_id":"<urn:uuid:7a4c8f9e-9d41-485e-82db-b4059901aafc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Potomac, MD SAT Math Tutor Find a Potomac, MD SAT Math Tutor ...My tutoring style can adapt to individual students and will teach along with class material so that students can keep their knowledge grounded. I have a Master's degree in Chemistry and I am extremely proficient in mathematics. I have taken many math classes and received a perfect score on my SAT Math test. 11 Subjects: including SAT math, chemistry, geometry, algebra 2 ...I can tutor all high school math including calculus, pre-calculus, trigonometry, geometry and Algebra I&II; plus SAT, GRE, ACT and other standardized test preparation. I can also tutor Physics and first year Chemistry.I have tutored this subject successfully in the past. In the past I was both a faculty high school mathematics instructor and a junior college instructor. 28 Subjects: including SAT math, chemistry, calculus, physics ...Since then, I have built on my algebra knowledge with a wide array of advanced mathematics. Therefore, I am very comfortable with the basics of algebra. I took three semesters of calculus at The University of Maryland, and did well in all of them. 27 Subjects: including SAT math, calculus, physics, geometry ...I am eager to work with any student, and am confident that through hard work and dedication anyone can succeed in any subject or topic. I look forward to working with you, be it in Mathematics, German or History!!During my four years at Temple University, I tutored mathematics for over two years... 11 Subjects: including SAT math, geometry, German, algebra 2 With a background in architecture, I have been teaching science, Physics and Chemistry for the last 16 years. I am currently in my eleventh year of teaching Physics to high school students having previously taught Chemistry for seven years. I am certified to teach both subjects in New Jersey. 7 Subjects: including SAT math, chemistry, physics, algebra 1 Related Potomac, MD Tutors Potomac, MD Accounting Tutors Potomac, MD ACT Tutors Potomac, MD Algebra Tutors Potomac, MD Algebra 2 Tutors Potomac, MD Calculus Tutors Potomac, MD Geometry Tutors Potomac, MD Math Tutors Potomac, MD Prealgebra Tutors Potomac, MD Precalculus Tutors Potomac, MD SAT Tutors Potomac, MD SAT Math Tutors Potomac, MD Science Tutors Potomac, MD Statistics Tutors Potomac, MD Trigonometry Tutors
{"url":"http://www.purplemath.com/Potomac_MD_SAT_Math_tutors.php","timestamp":"2014-04-17T21:40:14Z","content_type":null,"content_length":"24041","record_id":"<urn:uuid:e018171d-3e19-40cf-ba99-f58ae772c7fb>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample Paper CBSE Mathematics Class X 2012 Sample Paper – 2012 Class – X Subject – MATHEMATICS Time allowed : 3 hours Maximum Marks : 80 General Instructions: All questions are compulsory. The question paper consists of 34 questions divided into four sections A, B, C and D. Section A comprises of 10 questions of 1 mark each, section B comprises of 8 questions of 2 marks each, section C comprises of 10 questions of 3 marks each and section D comprises 6 questions of 4 marks each. 1. I. Question numbers 1 to 10 in section A are multiple choice questions where you are to select one correct option out of the given four. 2. II. There is no overall choice. However, internal choice have been provided in 1 question of two marks, 3 questions of three marks each and 2 questions of four marks each. You have to attempt only one of the alternatives in all such questions. 3. III. Use of calculator is not permitted. Section – A Question numbers 1 to 10 carry 1 mark each. For each question, four alternative choices have been provided of which only one is correct. You have to select the correct choice. 1. Which of the following equations has 2 as a root? 1. 4x^2-12x+9=0 2. 6x^2+x-12=02 3. 9x^2-22x+8 = 0 4. x^2 -18x+77 = 0 2. If , a, 4 are in AP, the value of a is 1. 1 2. 13 3. If two tangents inclined at an angle 60° are drawn to a circle of radius 3 cm, then length of each tangent is equal to 1. cm 2. 2.√3 cm 3. 3√3 cm 4. 6 cm 4. To divide a line segment AB in the ratio 3 : 5, first a ray AX is drawn so that angleBAXis an acute angle and then at equal distances, points are marked on the ray AX such that the minimum number of these points is 1. 5 2. 6 3. 7 4. 8 5. In Fig. given, the length of a tangent to a circle is 24 cm from a point P and point P is at a distance of25 cm from Centre O, then radius is 1. 7cm 2. 6cm 3. 8cm 4. 5cm 6. A circle touches all the four sides of a quadrilateral PQRS whose three sides are 6 cm, 8 cm and 9 cm respectively, fourth side is 1. 6cm 2. 7cm 3. 8cm 4. 4cm 7. The areas of two circles are in the ratio 16 : 25. The ratio of their perimeter is 1. 16: 25 2. 25 : 16 3. 4 : 5 4. 5 : 4 8. The area of a circle is 49 π. Its circumference is 9. The angle formed by the line of sight with the horizontal when it is above the horizontal level is 1. vertical angle 2. angle of depression 3. angle of elevation 4. none of these 10. If the event cannot occur 1. 1 2. 2/3 3. ½ 4. 0 Section B Question numbers 11 to 18 carry 2 marks each. 1. Without finding the roots, comment on the nature of the roots of the quadratic equation px ^2 + 2 x+ q = 0. 2. If the numbers ‘A – 2, 4A -1 and 5A. + 2 are in AP, find the values of ‘A. For the AP: – 3, -7, -11, can we find directly a[30] – a[20] without actually finding a[30] and a[20]? Give reasons for your answer. 1. Show that tangent lines at the end points of a diameter of a circle are parallel. 2. Find the area of circle whose circumference is 66 cm. 3. Two cubes have their volumes in the ratio 8: 125. What is the ratio of their surface areas? 4. ‘A’ is a point all the y-axis whose ordinate is 5 and B is the point (- 3, 1). Calculate the length of AB. 5. Find the ratio in which the line segment joining the points (6, 4) and (1, -7) is divided by x-axis. 6. A card is drawn from a well-shuffled pack of 52 cards. Calculate the probability of getting 1. i. Neither a card of club nor a card of spade. 2. ii. Neither a card of spade nor an ace. Section C Question numbers 19 to 28 carry 3 marks each. 1. Find the roots of the equation 5x ^2 – 6x – 2 = 0 by the method of completing the square. 2. Determine a so that 2a + 1, a^2 + a + 1 and 3a^2 – 3a + 3 are consecutive terms of an AP. If the first term of an AP is 2 and the sum of first five terms is equal to one-fourth of the sum of the next five terms, find the sum of the first 30 terms. 1. Two tangents PQ and P R are drawn from an external point to a circle with centre O. Prove that QORP is a cyclic quadrilateral. 2. Draw a ∆ABC with side BC = 7 cm, angle B = 45° and angle A = 105°. Then, construct a triangle whose sides are times the corresponding sides of ∆ABC. 3. 23. Area of a sector of a circle of radius 36 cm is 54 π cm^2. Find the length of the corresponding arc of the sector. ( Leave your answer in π. ) Find the difference of the areas of sector of angle 120° and its corresponding major sector of a circle of radius 21 cm. 1. A cone of radius 4 cm is divided into two parts by drawing a plane through the mid-point of its axis and parallel to its base. Compare the volumes of the two parts. 2. A person, standing on the bank of a river, observes that the angle subtended by a tree on the opposite bank is 60^0. When he retreats 20 m from the bank, he finds the angle to be 30°. Find the height of the tree. Give your answer correct to 2- decimal places. 3. If D (), E (7, 3) and F () are the mid-points of sides of ∆ABC, find the area of the ∆ABC. 4. Show that the points P (0, -2), Q (3, 1), R (0, 4) and S (-3, 1) are the vertices of a square. Find the coordinates of the point Q on the x-axis which lies on the perpendicular bisector of the line segment joining the points A (-5, – 2) and B (4, – 2). Name the type of triangle formed by the points Q, A and B. 1. Two dice are thrown together. Find the probability that the product of the numbers on the top of the dice is 1. 6 2. 12 3. 7 Section – D Question numbers 29 to 34 carry 4 marks each. 1. A dealer sells a toy for Rs 31.25 and gains as much percent as the cost price of the toy .Find the cost price of the toy. In a class test, the sum of Kamal’s marks in Mathematics and English is 40. Had he got 3 marks more in mathematics and 4 marks less in English, the product of the marks would have been 360. Find his marks in two subjects separately. 1. K. Rajalingam Ramaswammy repays his total loan of Rs 1,18,000 by paying every month starting with the first installment of Rs1000. If he increases the installment by Rs100 every month, what amount will be paid by him in the 30th installment? What amount of loan does he still have to pay after the 30th installment? 2. Prove that the lengths of tangents drawn from an external point to a circle are equal. 3. Find the area of the circle excluding the area of triangle PQR in Fig. given alongside, if PQ = 24 cm, PR = 7 cm and O is the Centre of the circle. 4. A building is in the form of a cylinder surmounted by a hemispherical dome. The base diameter of the dome is equal to of the total height of the building. Find the height of the building, if it contains 67 m^3 of air. 5. A vertical tower stands on a horizontal plane and is surmounted by a vertical flagstaff of height h. At a point on the plane, the angle of elevation of the bottom and the top of the flagstaff are respectively. Prove that the height of the tower is . The angles of elevation of the top of a tower from two points at distances a and b from the base and on the same straight line with it are complementary. Prove that the height of the tower is..under root ”ab” Dr Samir Ranjan Bhowmik on Question Bank CBSE Clinical Biochemistry XII 2010 Posted in 2012, CBSE Sample Paper, Class X
{"url":"http://samplepaper.org/sample-paper-cbse-mathematics-class-x/","timestamp":"2014-04-16T22:48:06Z","content_type":null,"content_length":"55149","record_id":"<urn:uuid:39f44794-8e91-4c85-8988-fea49a2c8be3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Space vector decomposition SPACE VECTOR DECOMPOSITION This demo animates the motion of space vectors under balanced sinusoidal conditions, appearing as constant amplitude vectors rotating at the excitation frequency. The components depend on the choice of reference frame. In the stationary αβ frame, the components are time varying representing two-phase sinusoidal signals at stator frequency. In the rotating synchronous dq frame, the dq components are constant whose values depend on the orientation of the space vectors with respect to the dq axes. The state vectors I[s] (stator current) and λ[r] (rotor flux linkage) are shown in the common dq synchronous frame. When the d_axis is aligned with the rotor field --a process referred to as field orientation--, λ[rq] = 0 and the torque is expressed as T[e] = k[1] i[sd] i[sq]. The current i[sd][ ]is the field component and i[sq] is the torque component of the stator current space vector I[s]. © M. Riaz
{"url":"http://www.ece.umn.edu/users/riaz/animations/spavecdqclip.html","timestamp":"2014-04-19T17:01:40Z","content_type":null,"content_length":"5256","record_id":"<urn:uuid:1f28f2c9-c9b3-4dfc-aeeb-bad5b1965921>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Circle Geometry Question May 12th 2009, 03:09 AM #1 Junior Member Sep 2008 Circle Geometry Question Hi, I have a question on circle geometry... Its looks quite simple (I would think) but I dont know how to do it. Two circles of radii 5cm and 8cm touch each other externally. Calculate the length of the common tangent. If you draw the diagram correctly, then you will see a trapezoid with two right angles as base angles. The lengths of the two parallel sides are 5 & 8. The length of the summit is 13. Thus you need to find the length of the base. May 12th 2009, 03:45 AM #2
{"url":"http://mathhelpforum.com/geometry/88651-circle-geometry-question.html","timestamp":"2014-04-18T15:21:04Z","content_type":null,"content_length":"32363","record_id":"<urn:uuid:d830aa4c-825b-4f17-b7ab-f2cc81e0f8da>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Technical report Frank / Johann-Wolfgang-Goethe-Universität, Fachbereich Informatik und Mathematik, Institut für Informatik 4 search hits An abstract machine for concurrent Haskell with futures (2012) David Sabel We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence. On conservativity of concurrent Haskell (2011) David Sabel Manfred Schmidt-Schauß The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO. Counterexamples to simulation in non-deterministic call-by-need lambda-calculi with letrec (2009) Manfred Schmidt-Schauß Elena Machkasova David Sabel This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors. Closures of may and must convergence for contextual equivalence (2008) Manfred Schmidt-Schauß David Sabel We show on an abstract level that contextual equivalence in non-deterministic program calculi defined by may- and must-convergence is maximal in the following sense. Using also all the test predicates generated by the Boolean, forall- and existential closure of may- and must-convergence does not change the contextual equivalence. The situation is different if may- and total must-convergence is used, where an expression totally must-converges if all reductions are finite and terminate with a value: There is an infinite sequence of test-predicates generated by the Boolean, forall- and existential closure of may- and total must-convergence, which also leads to an infinite sequence of different contextual equalities.
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/series/id/16122/start/0/rows/10/subjectfq/Formale+Semantik","timestamp":"2014-04-21T15:32:51Z","content_type":null,"content_length":"20859","record_id":"<urn:uuid:4b6cda2a-749e-4f57-824d-4838d6564e3b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Algebra in Middle School Replies: 1 Last Post: Jun 1, 1995 10:31 AM Algebra in Middle School Posted: Jun 1, 1995 10:31 AM Just a comment from my personal experience with algebra in the middle school. My daughter attended a gifted middle school and took the equivalent of what the high schools call Algebra 1-2 while there. She was actually given "credit" at the high school level. Her freshman year she took Algebra 3-4, sophomore year Geometry and during her junior year she took a true Precalculus course with an eye on AP Calculus next year in her senior year. The area high schools have found this process quite successful, as far as I can tell. A few years back they tried the more "traditional" sequence with the incoming freshmen taking Geometry when they had credit for Algebra 1-2 in middle school. They then took Algebra 3-4 during their sophomore year. They found this not as successful. The feeling was (as I heard it) that the freshmen were note mathematically mature enough for the "theory" in geometry. With the new sequence, things have gone much Herb Kasube Department of Mathematics Bradley University Peoria, IL 61625
{"url":"http://mathforum.org/kb/message.jspa?messageID=1474094","timestamp":"2014-04-18T21:38:03Z","content_type":null,"content_length":"15014","record_id":"<urn:uuid:9cbd1759-f5dd-46a8-8fbc-f9aec59a9186>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the Molality, Molarity, and Mole Fraction when mass percent is given An aqueous antifreeze solution is 40.0% ethylene glycol (C2H6O2) by mass. The density of the solution is 1.05 g/cm. Calculate the molality, molarity, and mole fraction of the ethylene glycol. Wed, 2012-03-21 07:29 Assume a 100 gram sample. This gives you 40 g of ethylene glychol dissolved in 100g of solution. Determine the molar mass of e.g and then find the number of moles of the e.g. Molality = moles of e.g. /kilograms of solution (be sure to convert 100 g to kg) Molarity - to find the molarity you will need to convert the mass of the solution to volume in liters. This can be done using the density. volume (in cm3) = density x mass of solution Next convert from cm3 to liters ( 1 cm3 = 1mL = .001 L) Molarity = moles of e.g. / volume of solution in liters Mole Fraction - we need to know the moles of water present. Since the solution contains 40g e.g. the other 60 grams must be water. Find the moles of water present. Mole Fraction = moles of e.g. / (moles of e.g. + moles of water)
{"url":"http://yeahchemistry.com/questions/find-molality-molarity-and-mole-fraction-when-mass-percent-given","timestamp":"2014-04-25T02:44:45Z","content_type":null,"content_length":"17022","record_id":"<urn:uuid:cab48ee3-57f1-4a49-afe4-6f05dc33292b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 86 To find the second apotome. Set out a rational straight line A, and let GC be commensurable in length with A. Then GC is rational. Set out two square numbers DE and EF, and let their difference DF not be square. Now let it be contrived that FD is to DE as the square on CG is to the square on GB. Then the square on CG is commensurable with the square on GB. But the square on CG is rational, therefore the square on GB is also rational. Therefore BG is rational. And, since the square on GC does not have to the square on GB the ratio which a square number has to a square number, therefore CG is incommensurable in length with GB. And both are rational, therefore CG and GB are rational straight lines commensurable in square only. Therefore BC is an apotome. I say next that it is also a second apotome. Let the square on H be that by which the square on BG is greater than the square on GC. Since the square on BG is to the square on GC as the number ED is to the number DF, therefore, in conversion, the square on BG is to the square on H as DE is to EF. And each of the numbers DE and EF is square, therefore the square on BG has to the square on H the ratio which a square number has to a square number. Therefore BG is commensurable in length with H. And the square on BG is greater than the square on GC by the square on H, therefore the square on BG is greater than the square on GC by the square on a straight line commensurable in length with BG. And CG, the annex, is commensurable with the rational straight line A set out, therefore BC is a second apotome. Therefore the second apotome BC has been found.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookX/propX86.html","timestamp":"2014-04-18T15:39:12Z","content_type":null,"content_length":"4924","record_id":"<urn:uuid:1d37d5f1-c1a8-495e-88b9-4abdac069e35>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
Low Pressure Gas Flow in Vacuum Systems Molecular Flow Module For Modeling Low Pressure Gas Flow in Vacuum Systems In an ion implanter, the average number density of outgassing molecules along the beam path is used as a figure of merit to evaluate the design. It must be computed as a function of wafer angle, with rotation about one axis. Accurate Modeling of Low Pressure, Low Velocity Gas Flows The Molecular Flow Module is designed to offer previously unavailable simulation capabilities for the accurate modeling of low pressure, low velocity gas flows in complex geometries. It is ideal for the simulation of vacuum systems including those used in semiconductor processing, particle accelerators and mass spectrometers. Small channel applications (e.g. shale gas exploration and flow in nanoporous materials) can also be addressed. The Molecular Flow Module uses a fast angular coefficient method to simulate steady-state free molecular flows. You can model isothermal and nonisothermal molecular flows, and automatically calculate the heat flux contribution from the gas molecules. The discrete velocity method is also included in the module for the simulation of transitional flows. Two Methods for Modeling Free Molecular and Transitional Flows The Molecular Flow Module offers two alternatives to these methods, allowing you to solve for low velocity and low pressure flows in a manageable and accurate fashion. Two specific physics interfaces, configured to receive model inputs via the graphical user interface (GUI) to fully specify a set of equations, are available: Free Molecular Flows The Free Molecular Flow interface uses the angular coefficient method to model flows with Knudsen numbers that are greater than ten. This physics interface avoids solving the physics in the volumes of the modeled geometries, and requires meshing only of the surfaces. Completely diffuse scattering (total accommodation) and emission are assumed at all surfaces in the geometry, and flow is computed by integrating the flux arriving at a surface from all other surfaces in its line-of-sight. This means that the dependent variables exist only on the surfaces of the geometry, and the solution process is much faster than the DSMC method. Furthermore, it is not subject to statistical scatter. Number densities are reconstructed using a method included in the Free Molecular Flow Transitional Flows The Transitional Flow interface solves the Boltzmann BGK equation by employing a modified form of the Lattice Boltzmann/Discrete Velocity method to solve transitional flows. Unlike the DSMC method, the solutions are not subject to statistical noise. Diffuse reflection of gas molecules is also assumed at all surfaces, with molecules from all directions effectively adsorbed onto the surface and subsequently re-emitted according to Knudsenā s law. In this interface, the model geometry is meshed to discretize the physical space, and a velocity quadrature is chosen, which provides a set of dependent variables that represent a mesh in velocity space. Both the mesh and the quadrature can be independently adjusted to ensure the problem is resolved in both physical and velocity space. Additional images: Optimized Methods for Fast and Accurate Simulations Gases at low pressures cannot be modeled using conventional computational fluid dynamics tools. That is due to the fact that kinetic effects become important as the mean free path of the gas molecules becomes comparable to the length scale of the flow. Flow regimes are categorized quantitatively via the Knudsen number (Kn), which represents the ratio of the molecular mean free path to the flow geometry size for gases: Flow type Knudsen Number Continuum flow Kn<0.01 Slip flow 0.01<Kn<0.1 Transitional flow 0.1<Kn<10 Free molecular flow Kn>10 While the Microfluidics Module is used for modeling slip and continuum flows, the Molecular Flow Module is designed for accurately simulating flows in the free molecular flow and transitional flow regimes. Historically, flows in this regime have been modeled by the direct simulation Monte Carlo (DSMC) method. This computes the trajectories of large numbers of randomized particles through the system, but introduces statistical noise to the modeling process. For low velocity flows, such as those encountered in vacuum systems, the noise introduced by DSMC renders the simulations unfeasible. COMSOL uses alternative approaches: employing a discrete velocity method for transitional flows (using a Lattice Boltzmann velocity quadrature) and the angular coefficient method for molecular flows. Differential Pumping Differentially pumped vacuum systems use a small orifice or tube to connect two parts of a vacuum system that are at very different pressures. Such systems are necessary when processes run at higher pressures and are monitored by detectors that require UHV for operation. In this model, gas flow through a narrow tube and into a high vacuum chamber ... Molecular Flow in an Ion Implant Vacuum System This example shows how to model an ion implantation system using the Molecular Flow interface available in the Microfluidics Module. In ion implantation, outgassing molecules interact with the ion beam to produce undesirable species. The average number density of outgassing molecules along the beam path is used as a figure of merit to evaluate the ... Molecular Flow Through a Microcapillary Computing molecular flows in arbitrary geometries produces complex integral equations that are very difficult to compute analytically. Analytic solutions are, therefore, only available for simple geometries. One of the earliest problems solved was that of gas flow through tubes of arbitrary length, which was first treated correctly by Clausing. ... Rotating Plate in a Unidirectional Molecular Flow This model computes the particle flux, number density and pressure on the surface of a plate that rotates in a highly directional molecular flow. The results obtained are compared with those from other, approximate, techniques for computing molecular flows. Adsorption and Desorption of Water in a Load Lock Vacuum System This model shows how to simulate the time-dependent adsorption and desorption of water in a vacuum system at low pressures. The water is introduced into the system when a gate valve to a load lock is opened and the subsequent migration and pumping of the water is modeled. Outgassing Pipes This benchmark model computes the pressure in a system of outgassing pipes with a high aspect ratio. The results are compared with a 1D simulation and a Monte-Carlo simulation of the same system from the literature. Molecular Flow in an RF Coupler This model computes the transmission probability through an RF coupler using both the angular coefficient method available in the Free Molecular Flow interface and a Monte Carlo method using the Mathematical Particle Tracing interface. The computed transmission probability determined by the two methods is in excellent agreement with less than a 1% ...
{"url":"http://www.comsol.fi/molecular-flow-module","timestamp":"2014-04-17T15:55:39Z","content_type":null,"content_length":"87006","record_id":"<urn:uuid:9331ef0f-ed17-4e27-84ad-7a461ff07e83>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum 9j symbols? up vote 4 down vote favorite A formula for (SU2) quantum 6j symbols exists. A formula expressing ordinary (q=1) 9j symbols in terms of 6j symbols is long known. Unfortunately, combining both (I tried it myself) got tricky - the associated graph K3,3 is nonplanar, at least one knot-type crossing is needed and first of all, this ruins the symmetry. Can I find the quantum analogon of the standard sum over the product of three 6j symbols in the literature (or can someone post it here)? rt.representation-theory qa.quantum-algebra Regarding the symmetry (and with almost complete lack of understanding regarding the specific question): As long as you have a choice like which knot crossing you use, there is still hope for a symmetry if you use formulas that combine all such choices. Perhaps the easiest (but perhaps to easy) way to do something like this would be to make all choices and compute an average. – Johannes Hahn Jun 11 '13 at 12:14 Turaev writes that the standard definition of 9j symbols in terms of 6j symbols carries over directly to the quantum case (unlike the 3j symbols, which need separate consideration). This doesn't work? V.G. Turaev, Quantum Invariants of Knots and 3-Manifolds, page 343. – Carlo Beenakker Jun 11 '13 at 12:58 @ Carlo: This would mean that the standard definition already is "normalized" with respect to the annoying crossing. (I "pulled it through" in my own computations; if it's not needed at all - the better! :-) My lib has the book, I can look up the details. THX!) – Hauke Reddmann Jun 12 '13 at 10:57 add comment 1 Answer active oldest votes to save you a trip to the library, here's the relevant paragraph from V.G. Turaev, Quantum Invariants of Knots and 3-Manifolds, page 343. up vote 2 down vote add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory qa.quantum-algebra or ask your own question.
{"url":"http://mathoverflow.net/questions/133368/quantum-9j-symbols","timestamp":"2014-04-18T00:42:40Z","content_type":null,"content_length":"53479","record_id":"<urn:uuid:99df9d13-085e-4092-8298-9114bdb946f5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00117-ip-10-147-4-33.ec2.internal.warc.gz"}